Datasets: de-francophones
committed on
Commit • 3d99c56
1 Parent(s): 1452709
62f8ef76f052ea589ba93cd6db02b6d903aaa603221218941c35ad5692a40ae1
Browse files
- en/2709.html.txt +149 -0
- en/271.html.txt +261 -0
- en/2710.html.txt +149 -0
- en/2711.html.txt +110 -0
- en/2712.html.txt +117 -0
- en/2713.html.txt +184 -0
- en/2714.html.txt +184 -0
- en/2715.html.txt +86 -0
- en/2716.html.txt +216 -0
- en/2717.html.txt +216 -0
- en/2718.html.txt +12 -0
- en/2719.html.txt +167 -0
- en/272.html.txt +59 -0
- en/2720.html.txt +216 -0
- en/2721.html.txt +252 -0
- en/2722.html.txt +116 -0
- en/2723.html.txt +280 -0
- en/2724.html.txt +1 -0
- en/2725.html.txt +1 -0
- en/2726.html.txt +183 -0
- en/2727.html.txt +183 -0
- en/2728.html.txt +330 -0
- en/2729.html.txt +33 -0
- en/273.html.txt +172 -0
- en/2730.html.txt +33 -0
- en/2731.html.txt +33 -0
- en/2732.html.txt +33 -0
- en/2733.html.txt +33 -0
- en/2734.html.txt +97 -0
- en/2735.html.txt +98 -0
- en/2736.html.txt +81 -0
- en/2737.html.txt +107 -0
- en/2738.html.txt +107 -0
- en/2739.html.txt +113 -0
- en/274.html.txt +124 -0
- en/2740.html.txt +306 -0
- en/2741.html.txt +306 -0
- en/2742.html.txt +115 -0
- en/2743.html.txt +115 -0
- en/2744.html.txt +75 -0
- en/2745.html.txt +124 -0
- en/2746.html.txt +75 -0
- en/2747.html.txt +52 -0
- en/2748.html.txt +351 -0
- en/2749.html.txt +333 -0
- en/275.html.txt +124 -0
- en/2750.html.txt +198 -0
- en/2751.html.txt +131 -0
- en/2752.html.txt +198 -0
- en/2753.html.txt +58 -0
en/2709.html.txt
ADDED
@@ -0,0 +1,149 @@
Impressionism is a 19th-century art movement characterized by relatively small, thin, yet visible brush strokes, open composition, emphasis on accurate depiction of light in its changing qualities (often accentuating the effects of the passage of time), ordinary subject matter, inclusion of movement as a crucial element of human perception and experience, and unusual visual angles. Impressionism originated with a group of Paris-based artists whose independent exhibitions brought them to prominence during the 1870s and 1880s.
The Impressionists faced harsh opposition from the conventional art community in France. The name of the style derives from the title of a Claude Monet work, Impression, soleil levant (Impression, Sunrise), which provoked the critic Louis Leroy to coin the term in a satirical review published in the Parisian newspaper Le Charivari.
The development of Impressionism in the visual arts was soon followed by analogous styles in other media that became known as impressionist music and impressionist literature.
Radicals in their time, early Impressionists violated the rules of academic painting. They constructed their pictures from freely brushed colours that took precedence over lines and contours, following the example of painters such as Eugène Delacroix and J. M. W. Turner. They also painted realistic scenes of modern life, and often painted outdoors. Previously, still lifes and portraits as well as landscapes were usually painted in a studio.[1] The Impressionists found that they could capture the momentary and transient effects of sunlight by painting outdoors or en plein air. They portrayed overall visual effects instead of details, and used short "broken" brush strokes of mixed and pure unmixed colour—not blended smoothly or shaded, as was customary—to achieve an effect of intense colour vibration.
Impressionism emerged in France at the same time that a number of other painters, including the Italian artists known as the Macchiaioli, and Winslow Homer in the United States, were also exploring plein-air painting. The Impressionists, however, developed new techniques specific to the style. Encompassing what its adherents argued was a different way of seeing, it is an art of immediacy and movement, of candid poses and compositions, of the play of light expressed in a bright and varied use of colour.
The public, at first hostile, gradually came to believe that the Impressionists had captured a fresh and original vision, even if the art critics and art establishment disapproved of the new style. By recreating the sensation in the eye that views the subject, rather than delineating the details of the subject, and by creating a welter of techniques and forms, Impressionism is a precursor of various painting styles, including Neo-Impressionism, Post-Impressionism, Fauvism, and Cubism.
In the middle of the 19th century—a time of change, as Emperor Napoleon III rebuilt Paris and waged war—the Académie des Beaux-Arts dominated French art. The Académie was the preserver of traditional French painting standards of content and style. Historical subjects, religious themes, and portraits were valued; landscape and still life were not. The Académie preferred carefully finished images that looked realistic when examined closely. Paintings in this style were made up of precise brush strokes carefully blended to hide the artist's hand in the work.[3] Colour was restrained and often toned down further by the application of a golden varnish.[4]
The Académie had an annual, juried art show, the Salon de Paris, and artists whose work was displayed in the show won prizes, garnered commissions, and enhanced their prestige. The standards of the juries represented the values of the Académie, represented by the works of such artists as Jean-Léon Gérôme and Alexandre Cabanel.
In the early 1860s, four young painters—Claude Monet, Pierre-Auguste Renoir, Alfred Sisley, and Frédéric Bazille—met while studying under the academic artist Charles Gleyre. They discovered that they shared an interest in painting landscape and contemporary life rather than historical or mythological scenes. Following a practice that had become increasingly popular by mid-century, they often ventured into the countryside together to paint in the open air,[5] but not for the purpose of making sketches to be developed into carefully finished works in the studio, as was the usual custom.[6] By painting in sunlight directly from nature, and making bold use of the vivid synthetic pigments that had become available since the beginning of the century, they began to develop a lighter and brighter manner of painting that extended further the Realism of Gustave Courbet and the Barbizon school. A favourite meeting place for the artists was the Café Guerbois on Avenue de Clichy in Paris, where the discussions were often led by Édouard Manet, whom the younger artists greatly admired. They were soon joined by Camille Pissarro, Paul Cézanne, and Armand Guillaumin.[7]
During the 1860s, the Salon jury routinely rejected about half of the works submitted by Monet and his friends in favour of works by artists faithful to the approved style.[8] In 1863, the Salon jury rejected Manet's The Luncheon on the Grass (Le déjeuner sur l'herbe) primarily because it depicted a nude woman with two clothed men at a picnic. While the Salon jury routinely accepted nudes in historical and allegorical paintings, they condemned Manet for placing a realistic nude in a contemporary setting.[9] The jury's severely worded rejection of Manet's painting appalled his admirers, and the unusually large number of rejected works that year perturbed many French artists.
After Emperor Napoleon III saw the rejected works of 1863, he decreed that the public be allowed to judge the work themselves, and the Salon des Refusés (Salon of the Refused) was organized. While many viewers came only to laugh, the Salon des Refusés drew attention to the existence of a new tendency in art and attracted more visitors than the regular Salon.[10]
Artists' petitions requesting a new Salon des Refusés in 1867, and again in 1872, were denied. In December 1873, Monet, Renoir, Pissarro, Sisley, Cézanne, Berthe Morisot, Edgar Degas and several other artists founded the Société Anonyme Coopérative des Artistes Peintres, Sculpteurs, Graveurs ("Cooperative and Anonymous Association of Painters, Sculptors, and Engravers") to exhibit their artworks independently.[11] Members of the association were expected to forswear participation in the Salon.[12] The organizers invited a number of other progressive artists to join them in their inaugural exhibition, including the older Eugène Boudin, whose example had first persuaded Monet to adopt plein air painting years before.[13] Another painter who greatly influenced Monet and his friends, Johan Jongkind, declined to participate, as did Édouard Manet. In total, thirty artists participated in their first exhibition, held in April 1874 at the studio of the photographer Nadar.
The critical response was mixed. Monet and Cézanne received the harshest attacks. Critic and humorist Louis Leroy wrote a scathing review in the newspaper Le Charivari in which, making wordplay with the title of Claude Monet's Impression, Sunrise (Impression, soleil levant), he gave the artists the name by which they became known. Derisively titling his article The Exhibition of the Impressionists, Leroy declared that Monet's painting was, at most, a sketch, and could hardly be termed a finished work.
He wrote, in the form of a dialog between viewers,
The term Impressionist quickly gained favour with the public. It was also accepted by the artists themselves, even though they were a diverse group in style and temperament, unified primarily by their spirit of independence and rebellion. They exhibited together—albeit with shifting membership—eight times between 1874 and 1886. The Impressionists' style, with its loose, spontaneous brushstrokes, would soon become synonymous with modern life.[4]
Monet, Sisley, Morisot, and Pissarro may be considered the "purest" Impressionists, in their consistent pursuit of an art of spontaneity, sunlight, and colour. Degas rejected much of this, as he believed in the primacy of drawing over colour and belittled the practice of painting outdoors.[15] Renoir turned away from Impressionism for a time during the 1880s, and never entirely regained his commitment to its ideas. Édouard Manet, although regarded by the Impressionists as their leader,[16] never abandoned his liberal use of black as a colour (while Impressionists avoided its use and preferred to obtain darker colours by mixing), and never participated in the Impressionist exhibitions. He continued to submit his works to the Salon, where his painting Spanish Singer had won a 2nd class medal in 1861, and he urged the others to do likewise, arguing that "the Salon is the real field of battle" where a reputation could be made.[17]
Among the artists of the core group (minus Bazille, who had died in the Franco-Prussian War in 1870), defections occurred as Cézanne, followed later by Renoir, Sisley, and Monet, abstained from the group exhibitions so they could submit their works to the Salon. Disagreements arose from issues such as Guillaumin's membership in the group, championed by Pissarro and Cézanne against opposition from Monet and Degas, who thought him unworthy.[18] Degas invited Mary Cassatt to display her work in the 1879 exhibition, but also insisted on the inclusion of Jean-François Raffaëlli, Ludovic Lepic, and other realists who did not represent Impressionist practices, causing Monet in 1880 to accuse the Impressionists of "opening doors to first-come daubers".[19] The group divided over invitations to Paul Signac and Georges Seurat to exhibit with them in 1886. Pissarro was the only artist to show at all eight Impressionist exhibitions.
The individual artists achieved few financial rewards from the Impressionist exhibitions, but their art gradually won a degree of public acceptance and support. Their dealer, Durand-Ruel, played a major role in this as he kept their work before the public and arranged shows for them in London and New York. Although Sisley died in poverty in 1899, Renoir had a great Salon success in 1879.[20] Monet became secure financially during the early 1880s and so did Pissarro by the early 1890s. By this time the methods of Impressionist painting, in a diluted form, had become commonplace in Salon art.[21]
French painters who prepared the way for Impressionism include the Romantic colourist Eugène Delacroix, the leader of the realists Gustave Courbet, and painters of the Barbizon school such as Théodore Rousseau. The Impressionists learned much from the work of Johan Barthold Jongkind, Jean-Baptiste-Camille Corot and Eugène Boudin, who painted from nature in a direct and spontaneous style that prefigured Impressionism, and who befriended and advised the younger artists.
A number of identifiable techniques and working habits contributed to the innovative style of the Impressionists. Although these methods had been used by previous artists—and are often conspicuous in the work of artists such as Frans Hals, Diego Velázquez, Peter Paul Rubens, John Constable, and J. M. W. Turner—the Impressionists were the first to use them all together, and with such consistency. These techniques include:
New technology played a role in the development of the style. Impressionists took advantage of the mid-century introduction of premixed paints in tin tubes (resembling modern toothpaste tubes), which allowed artists to work more spontaneously, both outdoors and indoors.[22] Previously, painters made their own paints individually, by grinding and mixing dry pigment powders with linseed oil, which were then stored in animal bladders.[23]
Many vivid synthetic pigments became commercially available to artists for the first time during the 19th century. These included cobalt blue, viridian, cadmium yellow, and synthetic ultramarine blue, all of which were in use by the 1840s, before Impressionism.[24] The Impressionists' manner of painting made bold use of these pigments, and of even newer colours such as cerulean blue,[4] which became commercially available to artists in the 1860s.[24]
The Impressionists' progress toward a brighter style of painting was gradual. During the 1860s, Monet and Renoir sometimes painted on canvases prepared with the traditional red-brown or grey ground.[25] By the 1870s, Monet, Renoir, and Pissarro usually chose to paint on grounds of a lighter grey or beige colour, which functioned as a middle tone in the finished painting.[25] By the 1880s, some of the Impressionists had come to prefer white or slightly off-white grounds, and no longer allowed the ground colour a significant role in the finished painting.[26]
Prior to the Impressionists, other painters, notably such 17th-century Dutch painters as Jan Steen, had emphasized common subjects, but their methods of composition were traditional. They arranged their compositions so that the main subject commanded the viewer's attention. J. M. W. Turner, while an artist of the Romantic era, anticipated the style of impressionism with his artwork.[27] The Impressionists relaxed the boundary between subject and background so that the effect of an Impressionist painting often resembles a snapshot, a part of a larger reality captured as if by chance.[28] Photography was gaining popularity, and as cameras became more portable, photographs became more candid. Photography inspired Impressionists to represent momentary action, not only in the fleeting lights of a landscape, but in the day-to-day lives of people.[29][30]
The development of Impressionism can be considered partly as a reaction by artists to the challenge presented by photography, which seemed to devalue the artist's skill in reproducing reality. Both portrait and landscape paintings were deemed somewhat deficient and lacking in truth as photography "produced lifelike images much more efficiently and reliably".[31]
In spite of this, photography actually inspired artists to pursue other means of creative expression, and rather than compete with photography to emulate reality, artists focused "on the one thing they could inevitably do better than the photograph—by further developing into an art form its very subjectivity in the conception of the image, the very subjectivity that photography eliminated".[31] The Impressionists sought to express their perceptions of nature, rather than create exact representations. This allowed artists to depict subjectively what they saw with their "tacit imperatives of taste and conscience".[32] Photography encouraged painters to exploit aspects of the painting medium, like colour, which photography then lacked: "The Impressionists were the first to consciously offer a subjective alternative to the photograph".[31]
Another major influence was Japanese ukiyo-e art prints (Japonism). The art of these prints contributed significantly to the "snapshot" angles and unconventional compositions that became characteristic of Impressionism. An example is Monet's Jardin à Sainte-Adresse, 1867, with its bold blocks of colour and composition on a strong diagonal slant, showing the influence of Japanese prints.[34]
Edgar Degas was both an avid photographer and a collector of Japanese prints.[35] His The Dance Class (La classe de danse) of 1874 shows both influences in its asymmetrical composition. The dancers are seemingly caught off guard in various awkward poses, leaving an expanse of empty floor space in the lower right quadrant. He also captured his dancers in sculpture, such as the Little Dancer of Fourteen Years.
Impressionists, in varying degrees, were looking for ways to depict visual experience and contemporary subjects.[36] Women Impressionists were interested in these same ideals but had many social and career limitations compared to male Impressionists. In particular, they were excluded from the imagery of the bourgeois social sphere of the boulevard, cafe, and dance hall.[37] As well as imagery, women were excluded from the formative discussions that resulted in meetings in those places; that was where male Impressionists were able to form and share ideas about Impressionism.[37] In the academic realm, women were believed to be incapable of handling complex subjects which led teachers to restrict what they taught female students.[38] It was also considered unladylike to excel in art since women's true talents were then believed to center on homemaking and mothering.[38]
Yet several women were able to find success during their lifetime, even though their careers were affected by personal circumstances – Bracquemond, for example, had a husband who was resentful of her work, which caused her to give up painting.[39] The four best known, namely Mary Cassatt, Eva Gonzalès, Marie Bracquemond, and Berthe Morisot, were, and still are, often referred to as the 'Women Impressionists'. Their participation in the series of eight Impressionist exhibitions that took place in Paris from 1874 to 1886 varied: Morisot participated in seven, Cassatt in four, Bracquemond in three, and Gonzalès did not participate.[39][40]
The critics of the time lumped these four together without regard to their personal styles, techniques, or subject matter.[41] Critics viewing their works at the exhibitions often attempted to acknowledge the women artists' talents but circumscribed them within a limited notion of femininity.[42] Arguing for the suitability of Impressionist technique to women's manner of perception, Parisian critic S.C. de Soissons wrote:
One can understand that women have no originality of thought, and that literature and music have no feminine character; but surely women know how to observe, and what they see is quite different from that which men see, and the art which they put in their gestures, in their toilet, in the decoration of their environment is sufficient to give us the idea of an instinctive, of a peculiar genius which resides in each one of them.[43]
While Impressionism legitimized the domestic social life as subject matter, of which women had intimate knowledge, it also tended to limit them to that subject matter. Portrayals of often-identifiable sitters in domestic settings (which could offer commissions) were dominant in the exhibitions.[44] The subjects of the paintings were often women interacting with their environment by either their gaze or movement. Cassatt, in particular, was aware of her placement of subjects: she kept her predominantly female figures from objectification and cliche; when they are not reading, they converse, sew, drink tea, and when they are inactive, they seem lost in thought.[45]
The women Impressionists, like their male counterparts, were striving for "truth," for new ways of seeing and new painting techniques; each artist had an individual painting style.[46] Women Impressionists (particularly Morisot and Cassatt) were conscious of the balance of power between women and objects in their paintings – the bourgeois women depicted are not defined by decorative objects, but instead, interact with and dominate the things with which they live.[47] There are many similarities in their depictions of women who seem both at ease and subtly confined.[48] Gonzalès' Box at the Italian Opera depicts a woman staring into the distance, at ease in a social sphere but confined by the box and the man standing next to her. Cassatt's painting Young Girl at a Window is brighter in color but remains constrained by the canvas edge as she looks out the window.
Despite their success in establishing careers, and despite Impressionism's demise being attributed to its allegedly feminine characteristics (its sensuality, dependence on sensation, physicality, and fluidity), the four women artists (and other, lesser-known women Impressionists) were largely omitted from art-historical textbooks covering Impressionist artists until Tamar Garb's Women Impressionists, published in 1986.[49] For example, Impressionism by Jean Leymarie, published in 1955, included no information on any women Impressionists.
The central figures in the development of Impressionism in France,[50][51] listed alphabetically, were:
Frédéric Bazille, Paysage au bord du Lez, 1870, Minneapolis Institute of Art
Alfred Sisley, Bridge at Villeneuve-la-Garenne, 1872, Metropolitan Museum of Art
Berthe Morisot, The Cradle, 1872, Musée d'Orsay
Armand Guillaumin, Sunset at Ivry (Soleil couchant à Ivry), 1873, Musée d'Orsay
Édouard Manet, Boating, 1874, Metropolitan Museum of Art
Alfred Sisley, La Seine au Point du jour, 1877, Museum of Modern Art André Malraux (MuMa), Le Havre
Édouard Manet, The Plum, 1878, National Gallery of Art, Washington, D.C.
Édouard Manet, A Bar at the Folies-Bergère (Un Bar aux Folies-Bergère), 1882, Courtauld Institute of Art
Edgar Degas, After the Bath, Woman Drying Herself, c. 1884–1886 (reworked between 1890 and 1900), MuMa, Le Havre
Edgar Degas, L'Absinthe, 1876, Musée d'Orsay, Paris
Edgar Degas, Dancer with a Bouquet of Flowers (Star of the Ballet), 1878, Getty Center, Los Angeles
Edgar Degas, Woman in the Bath, 1886, Hill–Stead Museum, Farmington, Connecticut
Edgar Degas, Dancers at The Bar, 1888, The Phillips Collection, Washington, D.C.
Gustave Caillebotte, Paris Street; Rainy Day, 1877, Art Institute of Chicago
Pierre-Auguste Renoir, La Parisienne, 1874, National Museum Cardiff
Pierre-Auguste Renoir, Portrait of Irène Cahen d'Anvers (La Petite Irène), 1880, Foundation E.G. Bührle, Zürich
Pierre-Auguste Renoir, Two Sisters (On the Terrace), 1881, Art Institute of Chicago
Pierre-Auguste Renoir, Girl with a Hoop, 1885, National Gallery of Art, Washington, D.C.
Claude Monet, The Cliff at Étretat after the Storm, 1885, Clark Art Institute, Williamstown, Massachusetts
Mary Cassatt, The Child's Bath (The Bath), 1893, oil on canvas, Art Institute of Chicago
Berthe Morisot, Portrait of Mme Boursier and Her Daughter, c. 1873, Brooklyn Museum
Claude Monet, Le Grand Canal, 1908, Museum of Fine Arts, Boston
The Impressionists
Among the close associates of the Impressionists were several painters who adopted their methods to some degree. These include Jean-Louis Forain (who participated in Impressionist exhibitions in 1879, 1880, 1881 and 1886)[54] and Giuseppe De Nittis, an Italian artist living in Paris who participated in the first Impressionist exhibit at the invitation of Degas, although the other Impressionists disparaged his work.[55] Federico Zandomeneghi was another Italian friend of Degas who showed with the Impressionists. Eva Gonzalès was a follower of Manet who did not exhibit with the group. James Abbott McNeill Whistler was an American-born painter who played a part in Impressionism although he did not join the group and preferred grayed colours. Walter Sickert, an English artist, was initially a follower of Whistler, and later an important disciple of Degas; he did not exhibit with the Impressionists. In 1904 the artist and writer Wynford Dewhurst wrote the first important study of the French painters published in English, Impressionist Painting: its genesis and development, which did much to popularize Impressionism in Great Britain.
By the early 1880s, Impressionist methods were affecting, at least superficially, the art of the Salon. Fashionable painters such as Jean Béraud and Henri Gervex found critical and financial success by brightening their palettes while retaining the smooth finish expected of Salon art.[56] Works by these artists are sometimes casually referred to as Impressionism, despite their remoteness from Impressionist practice.
The influence of the French Impressionists lasted long after most of them had died. Artists like J.D. Kirszenbaum were borrowing Impressionist techniques throughout the twentieth century.
As the influence of Impressionism spread beyond France, artists, too numerous to list, became identified as practitioners of the new style. Some of the more important examples are:
The sculptor Auguste Rodin is sometimes called an Impressionist for the way he used roughly modeled surfaces to suggest transient light effects.[57]
Pictorialist photographers whose work is characterized by soft focus and atmospheric effects have also been called Impressionists.
French Impressionist Cinema is a term applied to a loosely defined group of films and filmmakers in France from 1919–1929, although these years are debatable. French Impressionist filmmakers include Abel Gance, Jean Epstein, Germaine Dulac, Marcel L’Herbier, Louis Delluc, and Dmitry Kirsanoff.
Musical Impressionism is the name given to a movement in European classical music that arose in the late 19th century and continued into the middle of the 20th century. Originating in France, musical Impressionism is characterized by suggestion and atmosphere, and eschews the emotional excesses of the Romantic era. Impressionist composers favoured short forms such as the nocturne, arabesque, and prelude, and often explored uncommon scales such as the whole tone scale. Perhaps the most notable innovations of Impressionist composers were the introduction of major 7th chords and the extension of chord structures in 3rds to five- and six-part harmonies.
The influence of visual Impressionism on its musical counterpart is debatable. Claude Debussy and Maurice Ravel are generally considered the greatest Impressionist composers, but Debussy disavowed the term, calling it the invention of critics. Erik Satie was also considered in this category, though his approach was regarded as less serious and closer to musical novelty in nature. Paul Dukas is another French composer sometimes considered an Impressionist, but his style is perhaps more closely aligned with the late Romanticists. Musical Impressionism beyond France includes the work of such composers as Ottorino Respighi (Italy), Ralph Vaughan Williams, Cyril Scott, and John Ireland (England), Manuel de Falla and Isaac Albéniz (Spain), and Charles Griffes (America).
The term Impressionism has also been used to describe works of literature in which a few select details suffice to convey the sensory impressions of an incident or scene. Impressionist literature is closely related to Symbolism, with its major exemplars being Baudelaire, Mallarmé, Rimbaud, and Verlaine. Authors such as Virginia Woolf, D.H. Lawrence, and Joseph Conrad have written works that are Impressionistic in the way that they describe, rather than interpret, the impressions, sensations and emotions that constitute a character's mental life.
During the 1880s several artists began to develop different precepts for the use of colour, pattern, form, and line, derived from the Impressionist example: Vincent van Gogh, Paul Gauguin, Georges Seurat, and Henri de Toulouse-Lautrec. These artists were slightly younger than the Impressionists, and their work is known as post-Impressionism. Some of the original Impressionist artists also ventured into this new territory; Camille Pissarro briefly painted in a pointillist manner, and even Monet abandoned strict plein air painting. Paul Cézanne, who participated in the first and third Impressionist exhibitions, developed a highly individual vision emphasising pictorial structure, and he is more often called a post-Impressionist. Although these cases illustrate the difficulty of assigning labels, the work of the original Impressionist painters may, by definition, be categorised as Impressionism.
Georges Seurat, A Sunday Afternoon on the Island of La Grande Jatte, 1884–1886, The Art Institute of Chicago
Vincent van Gogh, Cypresses, 1889, Metropolitan Museum of Art
Paul Gauguin, The Midday Nap, 1894, Metropolitan Museum of Art
Paul Cézanne, The Card Players, 1894–1895, Musée d'Orsay, Paris
en/271.html.txt
ADDED
@@ -0,0 +1,261 @@
Antarctica (/ænˈtɑːrtɪkə/ or /æntˈɑːrktɪkə/ (listen))[note 1] is Earth's southernmost continent. It contains the geographic South Pole and is situated in the Antarctic region of the Southern Hemisphere, almost entirely south of the Antarctic Circle, and is surrounded by the Southern Ocean. At 14,200,000 square kilometres (5,500,000 square miles), it is the fifth-largest continent and nearly twice the size of Australia. At 0.00008 people per square kilometre, it is by far the least densely populated continent. About 98% of Antarctica is covered by ice that averages 1.9 km (1.2 mi; 6,200 ft) in thickness,[5] which extends to all but the northernmost reaches of the Antarctic Peninsula.
Antarctica, on average, is the coldest, driest, and windiest continent, and has the highest average elevation of all the continents.[6] Most of Antarctica is a polar desert, with annual precipitation of 200 mm (7.9 in) along the coast and far less inland; there has been no rain there for almost 2 million years, yet 80% of the world's freshwater reserves are stored there, enough to raise global sea levels by about 60 metres (200 ft) if all of it were to melt.[7] The temperature in Antarctica has reached −89.2 °C (−128.6 °F) (or even −94.7 °C (−135.8 °F) as measured from space[8]), though the average for the third quarter (the coldest part of the year) is −63 °C (−81 °F). Anywhere from 1,000 to 5,000 people reside throughout the year at research stations scattered across the continent. Organisms native to Antarctica include many types of algae, bacteria, fungi, plants, protista, and certain animals, such as mites, nematodes, penguins, seals and tardigrades. Vegetation, where it occurs, is tundra.
Antarctica is noted as the last region on Earth in recorded history to be discovered, unseen until 1820 when the Russian expedition of Fabian Gottlieb von Bellingshausen and Mikhail Lazarev on Vostok and Mirny sighted the Fimbul ice shelf. The continent, however, remained largely neglected for the rest of the 19th century because of its hostile environment, lack of easily accessible resources, and isolation. In 1895, the first confirmed landing was conducted by a team of Norwegians.
Antarctica is a de facto condominium, governed by parties to the Antarctic Treaty System that have consultative status. Twelve countries signed the Antarctic Treaty in 1959, and thirty-eight have signed it since then. The treaty prohibits military activities and mineral mining, prohibits nuclear explosions and nuclear waste disposal, supports scientific research, and protects the continent's ecology. Ongoing experiments are conducted by more than 4,000 scientists from many nations.
The name Antarctica is the romanised version of the Greek compound word ἀνταρκτική (antarktiké), feminine of ἀνταρκτικός (antarktikós),[9] meaning "opposite to the Arctic", "opposite to the north".[10]
Aristotle wrote in his book Meteorology about an Antarctic region in c. 350 BC.[11] Marinus of Tyre reportedly used the name in his unpreserved world map from the 2nd century CE. The Roman authors Hyginus and Apuleius (1st–2nd centuries CE) used the romanised Greek name polus antarcticus for the South Pole,[12][13] from which derived the Old French pole antartike (modern pôle antarctique) attested in 1270, and from there the Middle English pol antartik in a 1391 technical treatise by Geoffrey Chaucer (modern Antarctic Pole).[14]
The long-imagined (but undiscovered) south polar continent was originally called Terra Australis, sometimes shortened to Australia as seen in a woodcut illustration titled "Sphere of the winds", contained in an astrological textbook published in Frankfurt in 1545.[15]
Then in the nineteenth century, the colonial authorities in Sydney removed the Dutch name from New Holland. Instead of inventing a new name to replace it, they took the name Australia from the south polar continent, leaving it nameless for some eighty years. During that period, geographers had to make do with clumsy phrases such as "the Antarctic Continent". They searched for a more poetic replacement, suggesting various names such as Ultima and Antipodea.[16] Eventually Antarctica was adopted as continental name in the 1890s—the first use of the name is attributed to the Scottish cartographer John George Bartholomew.[17]
United Kingdom 1908–present
New Zealand 1923–present
Norway 1931–present
Australia 1933–present
Chile 1940–present
Argentina 1943–present
Antarctica has no indigenous population. In February 1775, during his second voyage, Captain Cook called the existence of such a polar continent "probable" and in another copy of his journal he wrote: "[I] firmly believe it and it's more than probable that we have seen a part of it".[18]
However, belief in the existence of a Terra Australis—a vast continent in the far south of the globe to "balance" the northern lands of Europe, Asia and North Africa—had prevailed since the times of Ptolemy in the 1st century AD. Even in the late 17th century, after explorers had found that South America and Australia were not part of the fabled "Antarctica", geographers believed that the continent was much larger than its actual size. Integral to the story of the origin of Antarctica's name is that it was not named Terra Australis—this name was given to Australia instead, because of the misconception that no significant landmass could exist further south. Explorer Matthew Flinders, in particular, has been credited with popularising the transfer of the name Terra Australis to Australia. He justified the titling of his book A Voyage to Terra Australis (1814) by writing in the introduction:
There is no probability, that any other detached body of land, of nearly equal extent, will ever be found in a more southern latitude; the name Terra Australis will, therefore, remain descriptive of the geographical importance of this country and of its situation on the globe: it has antiquity to recommend it; and, having no reference to either of the two claiming nations, appears to be less objectionable than any other which could have been selected.[19]
European maps continued to show this hypothetical land until Captain James Cook's ships, HMS Resolution and Adventure, crossed the Antarctic Circle on 17 January 1773, in December 1773 and again in January 1774.[20] Cook came within about 120 km (75 mi) of the Antarctic coast before retreating in the face of field ice in January 1773.[21]
According to various organisations (the National Science Foundation,[22] NASA,[23] the University of California, San Diego,[24] the Russian State Museum of the Arctic and Antarctic,[25] among others),[26][27] ships captained by three men sighted Antarctica or its ice shelf in 1820: Fabian Gottlieb von Bellingshausen (a captain in the Imperial Russian Navy), Edward Bransfield (a captain in the Royal Navy), and Nathaniel Palmer (a sealer from Stonington, Connecticut).
The First Russian Antarctic Expedition led by Bellingshausen and Mikhail Lazarev on the 985-ton sloop-of-war Vostok ("East") and the 530-ton support vessel Mirny ("Peaceful") reached a point within 32 km (20 mi) of Queen Maud Land and recorded the sight of an ice shelf at 69°21′28″S 2°14′50″W,[28] on 27 January 1820,[29] which became known as the Fimbul ice shelf. This happened three days before Bransfield sighted the land of the Trinity Peninsula of Antarctica, as opposed to the ice of an ice shelf, and ten months before Palmer did so in November 1820. The first documented landing on Antarctica was by the American sealer John Davis, apparently at Hughes Bay, near Cape Charles, in West Antarctica on 7 February 1821, although some historians dispute this claim.[30][31] The first recorded and confirmed landing was at Cape Adare in 1895 (by the Norwegian-Swedish whaling ship Antarctic).[32]
On 22 January 1840, two days after the discovery of the coast west of the Balleny Islands, some members of the crew of the 1837–40 expedition of Jules Dumont d'Urville disembarked on the highest islet[33] of a group of coastal rocky islands about 4 km from Cape Géodésie on the coast of Adélie Land where they took some mineral, algae, and animal samples, erected the French flag and claimed French sovereignty over the territory.[34]
Discovery and claim of French sovereignty over Adélie Land by Jules Dumont d'Urville, in 1840.
Painting of James Weddell's second expedition in 1823, depicting the brig Jane and the cutter Beaufoy
Nimrod Expedition South Pole Party (left to right): Wild, Shackleton, Marshall and Adams
Explorer James Clark Ross passed through what is now known as the Ross Sea and discovered Ross Island (both of which were named after him) in 1841. He sailed along a huge wall of ice that was later named the Ross Ice Shelf. Mount Erebus and Mount Terror are named after two ships from his expedition: HMS Erebus and Terror.[35] Mercator Cooper landed in East Antarctica on 26 January 1853.[36]
During the Nimrod Expedition led by Ernest Shackleton in 1907, parties led by Edgeworth David became the first to climb Mount Erebus and to reach the South Magnetic Pole. Douglas Mawson, who assumed the leadership of the Magnetic Pole party on their perilous return, went on to lead several expeditions until retiring in 1931.[37] In addition, Shackleton and three other members of his expedition made several firsts in December 1908 – February 1909: they were the first humans to traverse the Ross Ice Shelf, the first to traverse the Transantarctic Mountains (via the Beardmore Glacier), and the first to set foot on the South Polar Plateau. An expedition led by Norwegian polar explorer Roald Amundsen from the ship Fram became the first to reach the geographic South Pole on 14 December 1911, using a route from the Bay of Whales and up the Axel Heiberg Glacier.[38] One month later, the doomed Scott Expedition reached the pole.
Richard E. Byrd led several voyages to the Antarctic by plane in the 1930s and 1940s. He is credited with implementing mechanised land transport on the continent and conducting extensive geological and biological research.[39] The first women to set foot on Antarctica did so in the 1930s with Caroline Mikkelsen landing on an island of Antarctica in 1935,[40] and Ingrid Christensen stepping onto the mainland in 1937.[41][42][43]
It was not until 31 October 1956 that anyone set foot on the South Pole again; on that day a U.S. Navy group led by Rear Admiral George J. Dufek successfully landed an aircraft there.[44] The first women to step onto the South Pole were Pam Young, Jean Pearson, Lois Jones, Eileen McSaveney, Kay Lindsay and Terry Tickhill in 1969.[45]
On 28 November 1979, Air New Zealand Flight 901, a McDonnell Douglas DC-10-30, crashed into Mount Erebus, killing all 257 people on board.[46]
In the Southern Hemisphere summer of 1996–97, the Norwegian explorer Børge Ousland became the first human to cross Antarctica alone from coast to coast.[47] Ousland used a kite for parts of the distance. All attempted crossings without kites or resupplies that have started from the true continental edge, where the ice meets the sea, have failed because of the great distance that needs to be covered.[48] For this crossing, Ousland also holds the record for the fastest unsupported journey to the South Pole, taking just 34 days.[49]
Roald Amundsen and his crew looking at the Norwegian flag at the South Pole, 1911
The French Dumont d'Urville Station, an example of modern human settlement in Antarctica
In 1997 Børge Ousland became the first person to make a solo crossing.
Positioned asymmetrically around the South Pole and largely south of the Antarctic Circle, Antarctica is the southernmost continent and is surrounded by the Southern Ocean; alternatively, it may be considered to be surrounded by the southern Pacific, Atlantic, and Indian Oceans, or by the southern waters of the World Ocean. There are a number of rivers and lakes in Antarctica, the longest river being the Onyx. The largest lake, Vostok, is one of the largest sub-glacial lakes in the world. Antarctica covers more than 14 million km2 (5,400,000 sq mi),[1] making it the fifth-largest continent, about 1.3 times as large as Europe. The coastline measures 17,968 km (11,165 mi)[1] and is mostly characterised by ice formations, as the following table shows:
Antarctica is divided in two by the Transantarctic Mountains close to the neck between the Ross Sea and the Weddell Sea. The portion west of the Weddell Sea and east of the Ross Sea is called West Antarctica and the remainder East Antarctica, because they roughly correspond to the Western and Eastern Hemispheres relative to the Greenwich meridian.[citation needed]
About 98% of Antarctica is covered by the Antarctic ice sheet, a sheet of ice averaging at least 1.6 km (1.0 mi) thick. The continent has about 90% of the world's ice (and thus about 70% of the world's fresh water). If all of this ice were melted, sea levels would rise about 60 m (200 ft).[51] In most of the interior of the continent, precipitation is very low, down to 20 mm (0.8 in) per year; in a few "blue ice" areas precipitation is lower than mass loss by sublimation, and so the local mass balance is negative. In the dry valleys, the same effect occurs over a rock base, leading to a desiccated landscape.[citation needed]
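As a rough back-of-envelope check of the "about 60 m" figure (not taken from this article, and using assumed round values: an Antarctic ice volume of roughly 2.7 × 10^7 km³, a global ocean area of about 3.6 × 10^8 km², and an ice-to-water density ratio of about 0.92):

$$\Delta h \approx \frac{0.92 \times 2.7\times10^{7}\ \text{km}^3}{3.6\times10^{8}\ \text{km}^2} \approx 0.07\ \text{km} \approx 70\ \text{m}.$$

This is the same order as the quoted figure; published estimates are somewhat lower, partly because some grounded ice already lies below sea level.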
West Antarctica is covered by the West Antarctic Ice Sheet. The sheet has been of recent concern because of the small possibility of its collapse. If the sheet were to break down, ocean levels would rise by several metres in a relatively geologically short period of time, perhaps a matter of centuries.[citation needed] Several Antarctic ice streams, which account for about 10% of the ice sheet, flow to one of the many Antarctic ice shelves: see ice-sheet dynamics.
East Antarctica lies on the Indian Ocean side of the Transantarctic Mountains and comprises Coats Land, Queen Maud Land, Enderby Land, Mac. Robertson Land, Wilkes Land, and Victoria Land. All but a small portion of this region lies within the Eastern Hemisphere. East Antarctica is largely covered by the East Antarctic Ice Sheet.[citation needed]
Vinson Massif, the highest peak in Antarctica at 4,892 m (16,050 ft), is located in the Ellsworth Mountains. Antarctica contains many other mountains, on both the main continent and the surrounding islands. Mount Erebus on Ross Island is the world's southernmost active volcano. Another well-known volcano is found on Deception Island, which is famous for a giant eruption in 1970. Minor eruptions are frequent, and lava flow has been observed in recent years. Other dormant volcanoes may potentially be active.[52] In 2004, a potentially active underwater volcano was found in the Antarctic Peninsula by American and Canadian researchers.[53]
Antarctica is home to more than 70 lakes that lie at the base of the continental ice sheet. Lake Vostok, discovered beneath Russia's Vostok Station in 1996, is the largest of these subglacial lakes. It was once believed that the lake had been sealed off for 500,000 to one million years, but a recent survey suggests that, every so often, there are large flows of water from one lake to another.[54]
There is some evidence, in the form of ice cores drilled to about 400 m (1,300 ft) above the water line, that Lake Vostok's waters may contain microbial life. The frozen surface of the lake shares similarities with Jupiter's moon, Europa. If life is discovered in Lake Vostok, it would strengthen the argument for the possibility of life on Europa.[55][56] On 7 February 2008, a NASA team embarked on a mission to Lake Untersee, searching for extremophiles in its highly alkaline waters. If found, these resilient creatures could further bolster the argument for extraterrestrial life in extremely cold, methane-rich environments.[57]
In September 2018, researchers at the National Geospatial-Intelligence Agency released a high resolution terrain map (detail down to the size of a car, and less in some areas) of Antarctica, named the "Reference Elevation Model of Antarctica" (REMA).[58]
More than 170 million years ago, Antarctica was part of the supercontinent Gondwana. Over time, Gondwana gradually broke apart, and Antarctica as we know it today was formed around 25 million years ago. Antarctica was not always cold, dry, and covered in ice sheets. At a number of points in its long history, it was farther north, experienced a tropical or temperate climate, was covered in forests,[59] and inhabited by various ancient life forms.
During the Cambrian period, Gondwana had a mild climate. West Antarctica was partially in the Northern Hemisphere, and during this period large amounts of sandstones, limestones and shales were deposited. East Antarctica was at the equator, where sea floor invertebrates and trilobites flourished in the tropical seas. By the start of the Devonian period (416 Ma), Gondwana was in more southern latitudes and the climate was cooler, though fossils of land plants are known from this time. Sand and silts were laid down in what is now the Ellsworth, Horlick and Pensacola Mountains. Glaciation began at the end of the Devonian period (360 Ma), as Gondwana became centred on the South Pole and the climate cooled, though flora remained. During the Permian period, the land became dominated by seed plants such as Glossopteris, a pteridosperm which grew in swamps. Over time these swamps became deposits of coal in the Transantarctic Mountains. Towards the end of the Permian period, continued warming led to a dry, hot climate over much of Gondwana.[60]
As a result of continued warming, the polar ice caps melted and much of Gondwana became a desert. In Eastern Antarctica, seed ferns or pteridosperms became abundant and large amounts of sandstone and shale were laid down at this time. Synapsids, commonly known as "mammal-like reptiles", were common in Antarctica during the Early Triassic and included forms such as Lystrosaurus. The Antarctic Peninsula began to form during the Jurassic period (206–146 Ma), and islands gradually rose out of the ocean. Ginkgo trees, conifers, bennettites, horsetails, ferns and cycads were plentiful during this period. In West Antarctica, coniferous forests dominated through the entire Cretaceous period (146–66 Ma), though southern beech became more prominent towards the end of this period. Ammonites were common in the seas around Antarctica, and dinosaurs were also present, though only three Antarctic dinosaur genera (Cryolophosaurus and Glacialisaurus, from the Hanson Formation,[61] and Antarctopelta) have been described to date.[62] It was during this era that Gondwana began to break up.
However, there is some evidence of Antarctic marine glaciation during the Cretaceous period.[63]
The cooling of Antarctica occurred stepwise, as the continental spread changed the oceanic currents from longitudinal equator-to-pole temperature-equalising currents to latitudinal currents that preserved and accentuated latitude temperature differences.
Africa separated from Antarctica in the Jurassic, around 160 Ma, followed by the Indian subcontinent in the early Cretaceous (about 125 Ma). By the end of the Cretaceous, about 66 Ma, Antarctica (then connected to Australia) still had a subtropical climate and flora, complete with a marsupial fauna.[64] In the Eocene epoch, about 40 Ma Australia-New Guinea separated from Antarctica, so that latitudinal currents could isolate Antarctica from Australia, and the first ice began to appear. During the Eocene–Oligocene extinction event about 34 million years ago, CO2 levels have been found to be about 760 ppm[65] and had been decreasing from earlier levels in the thousands of ppm.
Around 23 Ma, the Drake Passage opened between Antarctica and South America, resulting in the Antarctic Circumpolar Current that completely isolated the continent. Models of the changes suggest that declining CO2 levels became more important.[66] The ice began to spread, replacing the forests that until then had covered the continent. Since about 15 Ma, the continent has been mostly covered with ice.[67]
Fossil Nothofagus leaves in the Meyer Desert Formation of the Sirius Group show that intermittent warm periods allowed Nothofagus shrubs to cling to the Dominion Range as late as 3–4 Ma (mid-late Pliocene).[68] After that, the Pleistocene ice age covered the whole continent and destroyed all major plant life on it.[69]
The geological study of Antarctica has been greatly hindered by nearly all of the continent being permanently covered with a thick layer of ice.[70] However, new techniques such as remote sensing, ground-penetrating radar and satellite imagery have begun to reveal the structures beneath the ice.
Geologically, West Antarctica closely resembles the Andes mountain range of South America.[60] The Antarctic Peninsula was formed by uplift and metamorphism of sea bed sediments during the late Paleozoic and the early Mesozoic eras. This sediment uplift was accompanied by igneous intrusions and volcanism. The most common rocks in West Antarctica are andesite and rhyolite volcanics formed during the Jurassic period. There is also evidence of volcanic activity, even after the ice sheet had formed, in Marie Byrd Land and Alexander Island. The only anomalous area of West Antarctica is the Ellsworth Mountains region, where the stratigraphy is more similar to East Antarctica.
East Antarctica is geologically varied, dating from the Precambrian era, with some rocks formed more than 3 billion years ago. It is composed of a metamorphic and igneous platform which is the basis of the continental shield. On top of this base are coal and various modern rocks, such as sandstones, limestones and shales laid down during the Devonian and Jurassic periods to form the Transantarctic Mountains. In coastal areas such as Shackleton Range and Victoria Land some faulting has occurred.
The main mineral resource known on the continent is coal.[67] It was first recorded near the Beardmore Glacier by Frank Wild on the Nimrod Expedition, and now low-grade coal is known across many parts of the Transantarctic Mountains. The Prince Charles Mountains contain significant deposits of iron ore. The most valuable resources of Antarctica lie offshore, namely the oil and natural gas fields found in the Ross Sea in 1973. Exploitation of all mineral resources is banned until 2048 by the Protocol on Environmental Protection to the Antarctic Treaty.
Antarctica is the coldest of Earth's continents. It used to be ice-free until about 34 million years ago, when it became covered with ice.[71] The lowest natural air temperature ever recorded on Earth was −89.2 °C (−128.6 °F) at the Russian Vostok Station in Antarctica on 21 July 1983.[72] For comparison, this is 10.7 °C (20 °F) colder than subliming dry ice at one atmosphere of partial pressure, but since CO2 only makes up 0.039% of air, temperatures of less than −140 °C (−220 °F)[73] would be needed to produce dry ice snow in Antarctica. A lower air temperature of −94.7 °C (−138.5 °F) was recorded in 2010 by satellite—however, it may be influenced by ground temperatures and was not recorded at a height of 7 feet (2 m) above the surface as required for the official air temperature records.[74] Antarctica is a frozen desert with little precipitation; the South Pole receives less than 10 mm (0.4 in) per year, on average. Temperatures reach a minimum of between −80 °C (−112 °F) and −89.2 °C (−128.6 °F) in the interior in winter and reach a maximum of between 5 °C (41 °F) and 15 °C (59 °F) near the coast in summer. Northern Antarctica recorded a temperature of 20.75 °C (69.3 °F) on 9 February 2020, the highest recorded temperature in the continent.[75][76] Sunburn is often a health issue as the snow surface reflects almost all of the ultraviolet light falling on it. Given the latitude, long periods of constant darkness or constant sunlight create climates unfamiliar to human beings in much of the rest of the world.[77]
|
114 |
+
|
115 |
+
East Antarctica is colder than its western counterpart because of its higher elevation. Weather fronts rarely penetrate far into the continent, leaving the centre cold and dry. Despite the lack of precipitation over the central portion of the continent, ice there lasts for extended periods. Heavy snowfalls are common on the coastal portion of the continent, where snowfalls of up to 1.22 metres (48 in) in 48 hours have been recorded.
|
116 |
+
|
117 |
+
At the edge of the continent, strong katabatic winds off the polar plateau often blow at storm force. In the interior, wind speeds are typically moderate. During clear days in summer, more solar radiation reaches the surface at the South Pole than at the equator because of the 24 hours of sunlight each day at the Pole.[1]
|
118 |
+
|
119 |
+
Antarctica is colder than the Arctic for three reasons. First, much of the continent is more than 3,000 m (9,800 ft) above sea level, and temperature decreases with elevation in the troposphere. Second, the Arctic Ocean covers the north polar zone: the ocean's relative warmth is transferred through the icepack and prevents temperatures in the Arctic regions from reaching the extremes typical of the land surface of Antarctica. Third, the Earth is at aphelion in July (i.e., the Earth is farthest from the Sun in the Antarctic winter), and the Earth is at perihelion in January (i.e., the Earth is closest to the Sun in the Antarctic summer). The orbital distance contributes to a colder Antarctic winter (and a warmer Antarctic summer) but the first two effects have more impact.[78]
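As a rough illustration of the first effect, assuming a typical tropospheric lapse rate of about 6.5 °C per kilometre (an assumption here; real polar temperature profiles, with their strong surface inversions, can differ substantially), elevation alone accounts for cooling on the order of

$\Delta T \approx 6.5\,^{\circ}\mathrm{C\,km^{-1}} \times 3\,\mathrm{km} \approx 20\,^{\circ}\mathrm{C}$

relative to sea level.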
|
120 |
+
|
121 |
+
The aurora australis, commonly known as the southern lights, is a glow observed in the night sky near the South Pole created by the plasma-filled solar wind that passes by the Earth. Another unique spectacle is diamond dust, a ground-level cloud composed of tiny ice crystals. It generally forms under otherwise clear or nearly clear skies, so people sometimes also refer to it as clear-sky precipitation. A sun dog, a frequent atmospheric optical phenomenon, is a bright "spot" beside the true sun.[77]
|
122 |
+
|
123 |
+
Several governments maintain permanent manned research stations on the continent. The number of people conducting and supporting scientific research and other work on the continent and its nearby islands varies from about 1,000 in winter to about 5,000 in the summer, giving it a population density between 70 and 350 inhabitants per million square kilometres (180 and 900 per million square miles) at these times. Many of the stations are staffed year-round, the winter-over personnel typically arriving from their home countries for a one-year assignment. An Orthodox church—Trinity Church, opened in 2004 at the Russian Bellingshausen Station—is manned year-round by one or two priests, who are similarly rotated every year.[79][80]
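The quoted density range follows from simple division; the sketch below assumes a continental area of about 14.2 million square kilometres, a figure not stated in this passage.

# Rough check of the quoted population density range (illustrative only).
AREA_MILLION_KM2 = 14.2  # assumed area of Antarctica, in millions of km^2
for people in (1_000, 5_000):
    density = people / AREA_MILLION_KM2
    print(f"{people} people -> about {density:.0f} per million km^2")
# Prints about 70 and 352 per million km^2, matching the 70-350 range above.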
|
124 |
+
|
125 |
+
The first semi-permanent inhabitants of regions near Antarctica (areas situated south of the Antarctic Convergence) were British and American sealers who used to spend a year or more on South Georgia, from 1786 onward. During the whaling era, which lasted until 1966, the population of that island varied from over 1,000 in the summer (over 2,000 in some years) to some 200 in the winter. Most of the whalers were Norwegian, with an increasing proportion of Britons. The settlements included Grytviken, Leith Harbour, King Edward Point, Stromness, Husvik, Prince Olav Harbour, Ocean Harbour and Godthul. Managers and other senior officers of the whaling stations often lived together with their families. Among them was the founder of Grytviken, Captain Carl Anton Larsen, a prominent Norwegian whaler and explorer who, along with his family, adopted British citizenship in 1910.[citation needed]
|
126 |
+
|
127 |
+
The first child born in the southern polar region was Norwegian girl Solveig Gunbjørg Jacobsen, born in Grytviken on 8 October 1913, and her birth was registered by the resident British Magistrate of South Georgia. She was a daughter of Fridthjof Jacobsen, the assistant manager of the whaling station, and Klara Olette Jacobsen. Jacobsen arrived on the island in 1904 and became the manager of Grytviken, serving from 1914 to 1921; two of his children were born on the island.[81]
|
128 |
+
|
129 |
+
Emilio Marcos Palma was the first person born south of the 60th parallel south, the first born on the Antarctic mainland, and the only living human to be the first born on any continent.[82] He was born in 1978 at Esperanza Base, on the tip of the Antarctic Peninsula;[83][84] his parents were sent there along with seven other families by the Argentine government to determine if the continent was suitable for family life. In 1984, Juan Pablo Camacho was born at the Frei Montalva Station, becoming the first Chilean born in Antarctica. Several bases are now home to families with children attending schools at the station.[85] As of 2009, eleven children were born in Antarctica (south of the 60th parallel south): eight at the Argentine Esperanza Base[86] and three at the Chilean Frei Montalva Station.[87]
|
130 |
+
|
131 |
+
The terrestrial species that are native to the continent year-round appear to be descendants of ancestors that lived in geothermally warmed environments during the last ice age, when these areas were the only places on the continent not covered by ice.[88]
|
132 |
+
|
133 |
+
Few terrestrial vertebrates live in Antarctica, and those that do are limited to the sub-Antarctic islands.[89] Invertebrate life includes microscopic mites like the Alaskozetes antarcticus, lice, nematodes, tardigrades, rotifers, krill and springtails. The flightless midge Belgica antarctica, up to 6 mm (1⁄4 in) in size, is the largest purely terrestrial animal in Antarctica.[90] Another member of Chironomidae is Parochlus steinenii.[91] The snow petrel is one of only three birds that breed exclusively in Antarctica.[92]
|
134 |
+
|
135 |
+
Some species of marine animals exist and rely, directly or indirectly, on the phytoplankton. Antarctic sea life includes penguins, blue whales, orcas, colossal squids and fur seals. The emperor penguin is the only penguin that breeds during the winter in Antarctica, while the Adélie penguin breeds farther south than any other penguin.[citation needed] The southern rockhopper penguin has distinctive feathers around the eyes, giving the appearance of elaborate eyelashes. King penguins, chinstrap penguins, and gentoo penguins also breed in the Antarctic.[citation needed]
|
136 |
+
|
137 |
+
The Antarctic fur seal was very heavily hunted in the 18th and 19th centuries for its pelt by sealers from the United States and the United Kingdom. The Weddell seal, a "true seal", is named after Sir James Weddell, commander of British sealing expeditions in the Weddell Sea. Antarctic krill, which congregate in large schools, is the keystone species of the ecosystem of the Southern Ocean, and is an important food organism for whales, seals, leopard seals, fur seals, squid, icefish, penguins, albatrosses and many other birds.[93]
|
138 |
+
|
139 |
+
A census of sea life carried out during the International Polar Year and which involved some 500 researchers was released in 2010. The research is part of the global Census of Marine Life and has disclosed some remarkable findings. More than 235 marine organisms live in both polar regions, having bridged the gap of 12,000 km (7,456 mi). Large animals such as some cetaceans and birds make the round trip annually. More surprising are small forms of life such as sea cucumbers and free-swimming snails found in both polar oceans. Various factors may aid in their distribution – fairly uniform temperatures of the deep ocean at the poles and the equator which differ by no more than 5 °C, and the major current systems or marine conveyor belt which transport eggs and larval stages.[94]
|
140 |
+
|
141 |
+
About 1,150 species of fungi have been recorded from Antarctica, of which about 750 are non-lichen-forming and 400 are lichen-forming.[95][96] Some of these species are cryptoendoliths as a result of evolution under extreme conditions, and have significantly contributed to shaping the impressive rock formations of the McMurdo Dry Valleys and surrounding mountain ridges. The apparently simple morphology, scarcely differentiated structures, metabolic systems and enzymes still active at very low temperatures, and reduced life cycles shown by such fungi make them particularly suited to harsh environments such as the McMurdo Dry Valleys. In particular, their thick-walled and strongly melanised cells make them resistant to UV light. Those features can also be observed in algae and cyanobacteria, suggesting that these are adaptations to the conditions prevailing in Antarctica. This has led to speculation that, if life ever occurred on Mars, it might have looked similar to Antarctic fungi such as Cryomyces antarcticus, and Cryomyces minteri.[97] Some of these fungi are also apparently endemic to Antarctica. Endemic Antarctic fungi also include certain dung-inhabiting species which have had to evolve in response to the double challenge of extreme cold while growing on dung, and the need to survive passage through the gut of warm-blooded animals.[98]
|
142 |
+
|
143 |
+
About 298 million years ago Permian forests started to cover the continent, and tundra vegetation survived as late as 15 million years ago,[99] but the climate of present-day Antarctica does not allow extensive vegetation to form. A combination of freezing temperatures, poor soil quality, lack of moisture, and lack of sunlight inhibit plant growth. As a result, the diversity of plant life is very low and limited in distribution. The flora of the continent largely consists of bryophytes. There are about 100 species of mosses and 25 species of liverworts, but only three species of flowering plants, all of which are found in the Antarctic Peninsula: Deschampsia antarctica (Antarctic hair grass), Colobanthus quitensis (Antarctic pearlwort) and the non-native Poa annua (annual bluegrass).[100] Growth is restricted to a few weeks in the summer.[95][101]
|
144 |
+
|
145 |
+
Seven hundred species of algae exist, most of which are phytoplankton. Multicoloured snow algae and diatoms are especially abundant in the coastal regions during the summer.[101] Bacteria have been found living in the cold and dark as deep as 800 m (0.50 mi; 2,600 ft) under the ice.[102]
|
146 |
+
|
147 |
+
The Protocol on Environmental Protection to the Antarctic Treaty (also known as the Environmental Protocol or Madrid Protocol) came into force in 1998, and is the main instrument concerned with conservation and management of biodiversity in Antarctica. The Antarctic Treaty Consultative Meeting is advised on environmental and conservation issues in Antarctica by the Committee for Environmental Protection. A major concern within this committee is the risk to Antarctica from unintentional introduction of non-native species from outside the region.[103]
|
148 |
+
|
149 |
+
The passing of the Antarctic Conservation Act (1978) in the U.S. brought several restrictions to U.S. activity on Antarctica. The introduction of alien plants or animals can bring a criminal penalty, as can the extraction of any indigenous species. The overfishing of krill, which plays a large role in the Antarctic ecosystem, led officials to enact regulations on fishing. The Convention for the Conservation of Antarctic Marine Living Resources (CCAMLR), a treaty that came into force in 1980, requires that regulations managing all Southern Ocean fisheries consider potential effects on the entire Antarctic ecosystem.[1] Despite these new acts, unregulated and illegal fishing, particularly of Patagonian toothfish (marketed as Chilean Sea Bass in the U.S.), remains a serious problem. The illegal fishing of toothfish has been increasing, with estimates of 32,000 tonnes (35,000 short tons) in 2000.[104][105]
|
150 |
+
|
151 |
+
Several countries claim sovereignty in certain regions. While a few of these countries have mutually recognised each other's claims,[106] the validity of these claims is not recognised universally.[1]
|
152 |
+
|
153 |
+
New claims on Antarctica have been suspended since 1959, although in 2015 Norway formally defined Queen Maud Land as including the unclaimed area between it and the South Pole.[107] Antarctica's status is regulated by the 1959 Antarctic Treaty and other related agreements, collectively called the Antarctic Treaty System. Antarctica is defined as all land and ice shelves south of 60° S for the purposes of the Treaty System. The treaty was signed by twelve countries including the Soviet Union (and later Russia), the United Kingdom, Argentina, Chile, Australia, and the United States.[108] It set aside Antarctica as a scientific preserve, established freedom of scientific investigation and environmental protection, and banned military activity on Antarctica. This was the first arms control agreement established during the Cold War.
|
154 |
+
|
155 |
+
In 1983 the Antarctic Treaty Parties began negotiations on a convention to regulate mining in Antarctica.[109] A coalition of international organisations[110] launched a public pressure campaign to prevent any minerals development in the region, led largely by Greenpeace International,[111] which operated its own scientific station—World Park Base—in the Ross Sea region from 1987 until 1991[112] and conducted annual expeditions to document environmental effects of humans on Antarctica.[113] In 1988, the Convention on the Regulation of Antarctic Mineral Resources (CRAMRA) was adopted.[114] The following year, however, Australia and France announced that they would not ratify the convention, rendering it dead for all intents and purposes. They proposed instead that a comprehensive regime to protect the Antarctic environment be negotiated in its place.[115] The Protocol on Environmental Protection to the Antarctic Treaty (the "Madrid Protocol") was negotiated as other countries followed suit and on 14 January 1998 it entered into force.[115][116] The Madrid Protocol bans all mining in Antarctica, designating Antarctica a "natural reserve devoted to peace and science".
|
156 |
+
|
157 |
+
The Antarctic Treaty prohibits any military activity in Antarctica, including the establishment of military bases and fortifications, military manoeuvres, and weapons testing. Military personnel or equipment are permitted only for scientific research or other peaceful purposes.[117] The only documented military land manoeuvre has been the small Operation NINETY by the Argentine military in 1965.[118]
|
158 |
+
|
159 |
+
|
160 |
+
|
161 |
+
|
162 |
+
|
163 |
+
|
164 |
+
|
165 |
+
The Argentine, British and Chilean claims all overlap, and have caused friction. On 18 December 2012, the British Foreign and Commonwealth Office named a previously unnamed area Queen Elizabeth Land in tribute to Queen Elizabeth II's Diamond Jubilee.[119] On 22 December 2012, the UK ambassador to Argentina, John Freeman, was summoned by the Argentine government in protest against the claim.[120] Argentine–UK relations had previously been damaged throughout 2012 due to disputes over the sovereignty of the nearby Falkland Islands, and the 30th anniversary of the Falklands War.
|
166 |
+
|
167 |
+
The areas shown as Australia's and New Zealand's claims were British territory until they were handed over following the countries' independence. Australia currently claims the largest area. The claims of Britain, Australia, New Zealand, France and Norway are all recognised by each other.[121]
|
168 |
+
|
169 |
+
Other countries participating as members of the Antarctic Treaty have a territorial interest in Antarctica, but the provisions of the Treaty do not allow them to make their claims while it is in force.[122][123]
|
170 |
+
|
171 |
+
There is no economic activity in Antarctica at present, except for fishing off the coast and small-scale tourism, both based outside Antarctica.[1]
|
172 |
+
|
173 |
+
Although coal, hydrocarbons, iron ore, platinum, copper, chromium, nickel, gold and other minerals have been found, they are not present in quantities large enough to exploit.[126] The 1991 Protocol on Environmental Protection to the Antarctic Treaty also restricts a struggle for resources. In 1998, a compromise agreement was reached to place an indefinite ban on mining, to be reviewed in 2048, further limiting economic development and exploitation. The primary economic activity is the capture and offshore trading of fish. Antarctic fisheries in 2000–01 reported landing 112,934 tonnes.[127]
|
174 |
+
|
175 |
+
Small-scale "expedition tourism" has existed since 1957 and is currently subject to Antarctic Treaty and Environmental Protocol provisions, but in effect self-regulated by the International Association of Antarctica Tour Operators (IAATO). Not all vessels associated with Antarctic tourism are members of IAATO, but IAATO members account for 95% of the tourist activity. Travel is largely by small or medium ship, focusing on specific scenic locations with accessible concentrations of iconic wildlife. A total of 37,506 tourists visited during the 2006–07 Austral summer with nearly all of them coming from commercial ships; 38,478 were recorded in 2015–16.[128][129][130] As of 2015, there are two Wells Fargo ATMs in Antarctica.[131]
|
176 |
+
|
177 |
+
There has been some concern over the potential adverse environmental and ecosystem effects caused by the influx of visitors. Some environmentalists and scientists have made a call for stricter regulations for ships and a tourism quota.[132] The primary response by Antarctic Treaty Parties has been to develop, through their Committee for Environmental Protection and in partnership with IAATO, "site use guidelines" setting landing limits and closed or restricted zones on the more frequently visited sites. Antarctic sightseeing flights (which did not land) operated out of Australia and New Zealand until the fatal crash of Air New Zealand Flight 901 in 1979 on Mount Erebus, which killed all 257 aboard. Qantas resumed commercial overflights to Antarctica from Australia in the mid-1990s.
|
178 |
+
|
179 |
+
Antarctic fisheries in 1998–99 (1 July – 30 June) reported landing 119,898 tonnes legally.[133]
|
180 |
+
|
181 |
+
About thirty countries maintain about seventy research stations (40 year-round or permanent, and 30 summer-only) in Antarctica, with an approximate population of 4000 in summer and 1000 in winter.[1]
|
182 |
+
|
183 |
+
The ISO 3166-1 alpha-2 "AQ" is assigned to the entire continent regardless of jurisdiction. Different country calling codes and currencies[134] are used for different settlements, depending on the administrating country. The Antarctican dollar, a souvenir item sold in the United States and Canada, is not legal tender.[1][135]
|
184 |
+
|
185 |
+
Each year, scientists from 28 different nations conduct experiments not reproducible in any other place in the world. In the summer more than 4,000 scientists operate research stations; this number decreases to just over 1,000 in the winter.[1] McMurdo Station, which is the largest research station in Antarctica, is capable of housing more than 1,000 scientists, visitors, and tourists.
|
186 |
+
|
187 |
+
Researchers include biologists, geologists, oceanographers, physicists, astronomers, glaciologists, and meteorologists. Geologists tend to study plate tectonics, meteorites from outer space, and resources from the breakup of the supercontinent Gondwana. Glaciologists in Antarctica are concerned with the study of the history and dynamics of floating ice, seasonal snow, glaciers, and ice sheets. Biologists, in addition to examining the wildlife, are interested in how harsh temperatures and the presence of people affect adaptation and survival strategies in a wide variety of organisms. Medical physicians have made discoveries concerning the spreading of viruses and the body's response to extreme seasonal temperatures. Astrophysicists at Amundsen–Scott South Pole Station study the celestial dome and cosmic microwave background radiation. Many astronomical observations are better made from the interior of Antarctica than from most surface locations because of the high elevation, which results in a thin atmosphere; low temperature, which minimises the amount of water vapour in the atmosphere; and absence of light pollution, thus allowing for a view of space clearer than anywhere else on Earth. Antarctic ice serves as both the shield and the detection medium for the largest neutrino telescope in the world, built 2 km (1.2 mi) below Amundsen–Scott station.[136]
|
188 |
+
|
189 |
+
Since the 1970s an important focus of study has been the ozone layer in the atmosphere above Antarctica. In 1985, three British scientists working on data they had gathered at Halley Station on the Brunt Ice Shelf discovered the existence of a hole in this layer. It was eventually determined that the destruction of the ozone was caused by chlorofluorocarbons (CFCs) emitted by human products. Following the ban of CFCs under the Montreal Protocol, which entered into force in 1989, climate projections indicate that the ozone layer will return to 1980 levels between 2050 and 2070.[137]
|
190 |
+
|
191 |
+
In September 2006 NASA satellite data revealed that the Antarctic ozone hole was larger than at any other time on record, at 2,750,000 km2 (1,060,000 sq mi).[138] The impacts of the depleted ozone layer on climate changes occurring in Antarctica are not well understood.[137]
|
192 |
+
|
193 |
+
The Polar Geospatial Center was founded in 2007. It uses geospatial and remote-sensing technology to provide mapping services to American federally funded research teams, and can currently image all of Antarctica at 500 mm (20 in) resolution every 45 days.[139]
|
194 |
+
|
195 |
+
On 6 September 2007 the Belgian-based International Polar Foundation unveiled the Princess Elisabeth station, the world's first zero-emissions polar science station in Antarctica to research climate change. Costing $16.3 million, the prefabricated station, which is part of the International Polar Year, was shipped to Antarctica from Belgium by the end of 2008 to monitor the health of the polar regions. Belgian polar explorer Alain Hubert stated: "This base will be the first of its kind to produce zero emissions, making it a unique model of how energy should be used in the Antarctic." Johan Berte is the leader of the station design team and manager of the project, which conducts research in climatology, glaciology and microbiology.[140]
|
196 |
+
|
197 |
+
In January 2008 British Antarctic Survey (BAS) scientists, led by Hugh Corr and David Vaughan, reported in the journal Nature Geoscience that, based on an airborne radar survey, a volcano had erupted under Antarctica's ice sheet 2,200 years ago. It was the biggest eruption in Antarctica in the last 10,000 years; the volcanic ash was found deposited on the ice surface under the Hudson Mountains, close to Pine Island Glacier.[141]
|
198 |
+
|
199 |
+
A study from 2014 estimated that during the Pleistocene, the East Antarctic Ice Sheet (EAIS) thinned by at least 500 m (1,600 ft), and that thinning since the Last Glacial Maximum for the EAIS area is less than 50 m (160 ft) and probably started after c. 14 ka.[142]
|
200 |
+
|
201 |
+
Meteorites from Antarctica are an important area of study of material formed early in the solar system; most are thought to come from asteroids, but some may have originated on larger planets. The first meteorite was found in 1912, and named the Adelie Land meteorite. In 1969, a Japanese expedition discovered nine meteorites. Most of these meteorites have fallen onto the ice sheet in the last million years. Motion of the ice sheet tends to concentrate the meteorites at blocking locations such as mountain ranges, with wind erosion bringing them to the surface after centuries beneath accumulated snowfall. Compared with meteorites collected in more temperate regions on Earth, the Antarctic meteorites are well-preserved.[143]
|
202 |
+
|
203 |
+
This large collection of meteorites allows a better understanding of the abundance of meteorite types in the solar system and how meteorites relate to asteroids and comets. New types of meteorites and rare meteorites have been found. Among these are pieces blasted off the Moon, and probably Mars, by impacts. These specimens, particularly ALH84001 discovered by ANSMET, are at the centre of the controversy about possible evidence of microbial life on Mars. Because meteorites in space absorb and record cosmic radiation, the time elapsed since the meteorite hit the Earth can be determined from laboratory studies. The elapsed time since fall, or terrestrial residence age, of a meteorite represents more information that might be useful in environmental studies of Antarctic ice sheets.[143]
|
204 |
+
|
205 |
+
In 2006 a team of researchers from Ohio State University used gravity measurements by NASA's GRACE satellites to discover the 500-kilometre-wide (300 mi) Wilkes Land crater, which probably formed about 250 million years ago.[144]
|
206 |
+
|
207 |
+
In January 2013 an 18 kg (40 lb) meteorite was discovered frozen in ice on the Nansen ice field by a Search for Antarctic Meteorites, Belgian Approach (SAMBA) mission.[145]
|
208 |
+
|
209 |
+
In January 2015, reports emerged of a 2-kilometre (1.2 mi) circular structure, supposedly a meteorite crater, on the surface snow of King Baudouin Ice Shelf. Satellite images taken 25 years earlier appear to show the same structure.
|
210 |
+
|
211 |
+
Due to its location at the South Pole, Antarctica receives relatively little solar radiation except during the southern summer. This means that it is a very cold continent where water is mostly in the form of ice. Precipitation is low (most of Antarctica is a desert) and almost always in the form of snow, which accumulates and forms a giant ice sheet which covers the land. Parts of this ice sheet form moving glaciers known as ice streams, which flow towards the edges of the continent. Next to the continental shore are many ice shelves. These are floating extensions of outflowing glaciers from the continental ice mass. Offshore, temperatures are also low enough that ice is formed from seawater through most of the year. It is important to understand the various types of Antarctic ice to understand possible effects on sea levels and the implications of global warming.
|
212 |
+
|
213 |
+
Sea ice extent expands annually in the Antarctic winter and most of this ice melts in the summer. This ice is formed from the ocean water and floats in the same water and thus does not contribute to rise in sea level. The extent of sea ice around Antarctica (in terms of square kilometres of coverage) has remained roughly constant in recent decades, although the amount of variation it has experienced in its thickness is unclear.[146][147]
|
214 |
+
|
215 |
+
Melting of floating ice shelves (ice that originated on the land) does not in itself contribute much to sea-level rise (since the ice displaces only its own mass of water). However, it is the outflow of the ice from the land to form the ice shelf which causes a rise in global sea level. This effect is offset by snow falling back onto the continent. Recent decades have witnessed several dramatic collapses of large ice shelves around the coast of Antarctica, especially along the Antarctic Peninsula. Concerns have been raised that disruption of ice shelves may result in increased glacial outflow from the continental ice mass.[148]
|
216 |
+
|
217 |
+
On the continent itself, the large volume of ice present stores around 70% of the world's fresh water.[51] This ice sheet is constantly gaining ice from snowfall and losing ice through outflow to the sea.
|
218 |
+
|
219 |
+
Shepherd et al. (2012) found that different satellite methods for measuring ice mass and change were in good agreement, and that combining methods leads to more certainty, with East Antarctica, West Antarctica, and the Antarctic Peninsula changing in mass by +14 ± 43, −65 ± 26, and −20 ± 14 gigatonnes (Gt) per year.[149] The same group's 2018 systematic review study estimated that ice loss across the entire continent was 43 gigatonnes per year on average during the period from 1992 to 2002 but has accelerated to an average of 220 gigatonnes per year during the five years from 2012 to 2017.[150] NASA's Climate Change website indicates a compatible overall trend of greater than 100 gigatonnes of ice loss per year since 2002.[151]
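To relate these mass-balance figures to sea level, a common back-of-the-envelope conversion spreads the meltwater over the ocean surface; the ocean area of about 3.61e8 km2 and the equivalence of 1 Gt of ice to roughly 1 km3 of meltwater are assumptions here, not values from the studies cited above.

# Illustrative conversion from ice-mass loss to global mean sea-level rise.
OCEAN_AREA_KM2 = 3.61e8  # assumed global ocean surface area, km^2

def sea_level_rise_mm_per_year(gigatonnes_per_year: float) -> float:
    water_km3 = gigatonnes_per_year          # ~1 km^3 of meltwater per Gt of ice
    rise_km = water_km3 / OCEAN_AREA_KM2     # spread uniformly over the ocean
    return rise_km * 1e6                     # km -> mm

for rate in (43, 220):
    print(f"{rate} Gt/yr -> {sea_level_rise_mm_per_year(rate):.2f} mm/yr")
# About 0.12 and 0.61 mm/yr for the 1992-2002 and 2012-2017 averages quoted above.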
|
220 |
+
|
221 |
+
A single 2015 study by H. Jay Zwally et al. found instead that the net change in ice mass is slightly positive at approximately 82 gigatonnes per year (with significant regional variation) which would result in Antarctic activity reducing global sea-level rise by 0.23 mm per year.[152] However, one critic, Eric Rignot of NASA's Jet Propulsion Laboratory, states that this outlying study's findings "are at odds with all other independent methods: re-analysis, gravity measurements, mass budget method, and other groups using the same data" and appears to arrive at more precise values than current technology and mathematical approaches would permit.[153]
|
222 |
+
|
223 |
+
A satellite record revealed that the overall increase in Antarctic sea ice extent reversed in 2014, with rapid rates of decrease in 2014–2017 reducing Antarctic sea ice extent to its lowest values in the 40-year record.[154]
|
224 |
+
|
225 |
+
East Antarctica is a cold region with a ground base above sea level and occupies most of the continent. This area is dominated by small accumulations of snowfall which become ice and eventually flow seaward as glaciers. The mass balance of the East Antarctic Ice Sheet as a whole is thought to be slightly positive (lowering sea level) or near to balance.[155][156][157] However, increased ice outflow has been suggested in some regions.[156][158]
|
226 |
+
|
227 |
+
Some of Antarctica has been warming up; particularly strong warming has been noted on the Antarctic Peninsula. A study by Eric Steig published in 2009 noted for the first time that the continent-wide average surface temperature trend of Antarctica is slightly positive at >0.05 °C (0.09 °F) per decade from 1957 to 2006. This study also noted that West Antarctica has warmed by more than 0.1 °C (0.2 °F) per decade in the last 50 years, and this warming is strongest in winter and spring. This is partly offset by autumn cooling in East Antarctica.[159] There is evidence from one study that Antarctica is warming as a result of human carbon dioxide emissions,[160] but this remains ambiguous.[161] The amount of surface warming in West Antarctica, while large, has not led to appreciable melting at the surface, and is not directly affecting the West Antarctic Ice Sheet's contribution to sea level. Instead the recent increases in glacier outflow are believed to be due to an inflow of warm water from the deep ocean, just off the continental shelf.[162][163] The net contribution to sea level from the Antarctic Peninsula is more likely to be a direct result of the much greater atmospheric warming there.[164]
|
228 |
+
|
229 |
+
In 2002 the Antarctic Peninsula's Larsen-B ice shelf collapsed.[165] Between 28 February and 8 March 2008, about 570 km2 (220 sq mi) of ice from the Wilkins Ice Shelf on the southwest part of the peninsula collapsed, putting the remaining 15,000 km2 (5,800 sq mi) of the ice shelf at risk. The ice was being held back by a "thread" of ice about 6 km (4 mi) wide,[166][167] prior to its collapse on 5 April 2009.[168][169] According to NASA, the most widespread Antarctic surface melting of the past 30 years occurred in 2005, when an area of ice comparable in size to California briefly melted and refroze; this may have resulted from temperatures rising to as high as 5 °C (41 °F).[170]
|
230 |
+
|
231 |
+
A study published in Nature Geoscience in 2013 (online in December 2012) identified central West Antarctica as one of the fastest-warming regions on Earth. The researchers present a complete temperature record from Antarctica's Byrd Station and assert that it "reveals a linear increase in annual temperature between 1958 and 2010 by 2.4±1.2 °C".[171]
|
232 |
+
|
233 |
+
In February 2020, the continent recorded a record high temperature of 18.3 °C (64.9 °F), almost a degree higher than the previous record of 17.5 °C (63.5 °F) in March 2015.[172]
|
234 |
+
|
235 |
+
There is a large area of low ozone concentration or "ozone hole" over Antarctica. This hole covers almost the whole continent and was at its largest in September 2008; that year's hole, the longest-lasting on record, persisted until the end of December.[173] The hole was detected by scientists in 1985[174] and has tended to increase over the years of observation. The ozone hole is attributed to the emission of chlorofluorocarbons or CFCs into the atmosphere, which decompose the ozone into other gases.[175] In 2019, the ozone hole was at its smallest in the previous thirty years, due to the warmer polar stratosphere weakening the polar vortex. This reduced the formation of the 'polar stratospheric clouds' that enable the chemistry that leads to rapid ozone loss.[176]
|
236 |
+
|
237 |
+
Some scientific studies suggest that ozone depletion may have a dominant role in governing climatic change in Antarctica (and a wider area of the Southern Hemisphere).[174] Ozone absorbs large amounts of ultraviolet radiation in the stratosphere. Ozone depletion over Antarctica can cause a cooling of around 6 °C in the local stratosphere. This cooling has the effect of intensifying the westerly winds which flow around the continent (the polar vortex) and thus prevents outflow of the cold air near the South Pole. As a result, the continental mass of the East Antarctic ice sheet is held at lower temperatures, and the peripheral areas of Antarctica, especially the Antarctic Peninsula, are subject to higher temperatures, which promote accelerated melting.[174] Models also suggest that the ozone depletion/enhanced polar vortex effect also accounts for the recent increase in sea ice just offshore of the continent.[177]
|
238 |
+
|
239 |
+
Africa
|
240 |
+
|
241 |
+
Antarctica
|
242 |
+
|
243 |
+
Asia
|
244 |
+
|
245 |
+
Australia
|
246 |
+
|
247 |
+
Europe
|
248 |
+
|
249 |
+
North America
|
250 |
+
|
251 |
+
South America
|
252 |
+
|
253 |
+
Afro-Eurasia
|
254 |
+
|
255 |
+
America
|
256 |
+
|
257 |
+
Eurasia
|
258 |
+
|
259 |
+
Oceania
|
260 |
+
|
261 |
+
Coordinates: 90°S 0°E / 90°S 0°E / -90; 0
|
en/2710.html.txt
ADDED
@@ -0,0 +1,149 @@
1 |
+
Impressionism is a 19th-century art movement characterized by relatively small, thin, yet visible brush strokes, open composition, emphasis on accurate depiction of light in its changing qualities (often accentuating the effects of the passage of time), ordinary subject matter, inclusion of movement as a crucial element of human perception and experience, and unusual visual angles. Impressionism originated with a group of Paris-based artists whose independent exhibitions brought them to prominence during the 1870s and 1880s.
|
2 |
+
|
3 |
+
The Impressionists faced harsh opposition from the conventional art community in France. The name of the style derives from the title of a Claude Monet work, Impression, soleil levant (Impression, Sunrise), which provoked the critic Louis Leroy to coin the term in a satirical review published in the Parisian newspaper Le Charivari.
|
4 |
+
|
5 |
+
The development of Impressionism in the visual arts was soon followed by analogous styles in other media that became known as impressionist music and impressionist literature.
|
6 |
+
|
7 |
+
Radicals in their time, early Impressionists violated the rules of academic painting. They constructed their pictures from freely brushed colours that took precedence over lines and contours, following the example of painters such as Eugène Delacroix and J. M. W. Turner. They also painted realistic scenes of modern life, and often painted outdoors. Previously, still lifes and portraits as well as landscapes were usually painted in a studio.[1] The Impressionists found that they could capture the momentary and transient effects of sunlight by painting outdoors or en plein air. They portrayed overall visual effects instead of details, and used short "broken" brush strokes of mixed and pure unmixed colour—not blended smoothly or shaded, as was customary—to achieve an effect of intense colour vibration.
|
8 |
+
|
9 |
+
Impressionism emerged in France at the same time that a number of other painters, including the Italian artists known as the Macchiaioli, and Winslow Homer in the United States, were also exploring plein-air painting. The Impressionists, however, developed new techniques specific to the style. Encompassing what its adherents argued was a different way of seeing, it is an art of immediacy and movement, of candid poses and compositions, of the play of light expressed in a bright and varied use of colour.
|
10 |
+
|
11 |
+
The public, at first hostile, gradually came to believe that the Impressionists had captured a fresh and original vision, even if the art critics and art establishment disapproved of the new style. By recreating the sensation in the eye that views the subject, rather than delineating the details of the subject, and by creating a welter of techniques and forms, Impressionism is a precursor of various painting styles, including Neo-Impressionism, Post-Impressionism, Fauvism, and Cubism.
|
12 |
+
|
13 |
+
In the middle of the 19th century—a time of change, as Emperor Napoleon III rebuilt Paris and waged war—the Académie des Beaux-Arts dominated French art. The Académie was the preserver of traditional French painting standards of content and style. Historical subjects, religious themes, and portraits were valued; landscape and still life were not. The Académie preferred carefully finished images that looked realistic when examined closely. Paintings in this style were made up of precise brush strokes carefully blended to hide the artist's hand in the work.[3] Colour was restrained and often toned down further by the application of a golden varnish.[4]
|
14 |
+
|
15 |
+
The Académie had an annual, juried art show, the Salon de Paris, and artists whose work was displayed in the show won prizes, garnered commissions, and enhanced their prestige. The standards of the juries represented the values of the Académie, represented by the works of such artists as Jean-Léon Gérôme and Alexandre Cabanel.
|
16 |
+
|
17 |
+
In the early 1860s, four young painters—Claude Monet, Pierre-Auguste Renoir, Alfred Sisley, and Frédéric Bazille—met while studying under the academic artist Charles Gleyre. They discovered that they shared an interest in painting landscape and contemporary life rather than historical or mythological scenes. Following a practice that had become increasingly popular by mid-century, they often ventured into the countryside together to paint in the open air,[5] but not for the purpose of making sketches to be developed into carefully finished works in the studio, as was the usual custom.[6] By painting in sunlight directly from nature, and making bold use of the vivid synthetic pigments that had become available since the beginning of the century, they began to develop a lighter and brighter manner of painting that extended further the Realism of Gustave Courbet and the Barbizon school. A favourite meeting place for the artists was the Café Guerbois on Avenue de Clichy in Paris, where the discussions were often led by Édouard Manet, whom the younger artists greatly admired. They were soon joined by Camille Pissarro, Paul Cézanne, and Armand Guillaumin.[7]
|
18 |
+
|
19 |
+
During the 1860s, the Salon jury routinely rejected about half of the works submitted by Monet and his friends in favour of works by artists faithful to the approved style.[8] In 1863, the Salon jury rejected Manet's The Luncheon on the Grass (Le déjeuner sur l'herbe) primarily because it depicted a nude woman with two clothed men at a picnic. While the Salon jury routinely accepted nudes in historical and allegorical paintings, they condemned Manet for placing a realistic nude in a contemporary setting.[9] The jury's severely worded rejection of Manet's painting appalled his admirers, and the unusually large number of rejected works that year perturbed many French artists.
|
20 |
+
|
21 |
+
After Emperor Napoleon III saw the rejected works of 1863, he decreed that the public be allowed to judge the work themselves, and the Salon des Refusés (Salon of the Refused) was organized. While many viewers came only to laugh, the Salon des Refusés drew attention to the existence of a new tendency in art and attracted more visitors than the regular Salon.[10]
|
22 |
+
|
23 |
+
Artists' petitions requesting a new Salon des Refusés in 1867, and again in 1872, were denied. In December 1873, Monet, Renoir, Pissarro, Sisley, Cézanne, Berthe Morisot, Edgar Degas and several other artists founded the Société Anonyme Coopérative des Artistes Peintres, Sculpteurs, Graveurs ("Cooperative and Anonymous Association of Painters, Sculptors, and Engravers") to exhibit their artworks independently.[11] Members of the association were expected to forswear participation in the Salon.[12] The organizers invited a number of other progressive artists to join them in their inaugural exhibition, including the older Eugène Boudin, whose example had first persuaded Monet to adopt plein air painting years before.[13] Another painter who greatly influenced Monet and his friends, Johan Jongkind, declined to participate, as did Édouard Manet. In total, thirty artists participated in their first exhibition, held in April 1874 at the studio of the photographer Nadar.
|
24 |
+
|
25 |
+
The critical response was mixed. Monet and Cézanne received the harshest attacks. Critic and humorist Louis Leroy wrote a scathing review in the newspaper Le Charivari in which, making wordplay with the title of Claude Monet's Impression, Sunrise (Impression, soleil levant), he gave the artists the name by which they became known. Derisively titling his article The Exhibition of the Impressionists, Leroy declared that Monet's painting was, at most, a sketch, and could hardly be termed a finished work.
|
26 |
+
|
27 |
+
He wrote, in the form of a dialogue between viewers,
|
28 |
+
|
29 |
+
The term Impressionist quickly gained favour with the public. It was also accepted by the artists themselves, even though they were a diverse group in style and temperament, unified primarily by their spirit of independence and rebellion. They exhibited together—albeit with shifting membership—eight times between 1874 and 1886. The Impressionists' style, with its loose, spontaneous brushstrokes, would soon become synonymous with modern life.[4]
|
30 |
+
|
31 |
+
Monet, Sisley, Morisot, and Pissarro may be considered the "purest" Impressionists, in their consistent pursuit of an art of spontaneity, sunlight, and colour. Degas rejected much of this, as he believed in the primacy of drawing over colour and belittled the practice of painting outdoors.[15] Renoir turned away from Impressionism for a time during the 1880s, and never entirely regained his commitment to its ideas. Édouard Manet, although regarded by the Impressionists as their leader,[16] never abandoned his liberal use of black as a colour (while Impressionists avoided its use and preferred to obtain darker colours by mixing), and never participated in the Impressionist exhibitions. He continued to submit his works to the Salon, where his painting Spanish Singer had won a 2nd class medal in 1861, and he urged the others to do likewise, arguing that "the Salon is the real field of battle" where a reputation could be made.[17]
|
32 |
+
|
33 |
+
Among the artists of the core group (minus Bazille, who had died in the Franco-Prussian War in 1870), defections occurred as Cézanne, followed later by Renoir, Sisley, and Monet, abstained from the group exhibitions so they could submit their works to the Salon. Disagreements arose from issues such as Guillaumin's membership in the group, championed by Pissarro and Cézanne against opposition from Monet and Degas, who thought him unworthy.[18] Degas invited Mary Cassatt to display her work in the 1879 exhibition, but also insisted on the inclusion of Jean-François Raffaëlli, Ludovic Lepic, and other realists who did not represent Impressionist practices, causing Monet in 1880 to accuse the Impressionists of "opening doors to first-come daubers".[19] The group divided over invitations to Paul Signac and Georges Seurat to exhibit with them in 1886. Pissarro was the only artist to show at all eight Impressionist exhibitions.
|
34 |
+
|
35 |
+
The individual artists achieved few financial rewards from the Impressionist exhibitions, but their art gradually won a degree of public acceptance and support. Their dealer, Durand-Ruel, played a major role in this as he kept their work before the public and arranged shows for them in London and New York. Although Sisley died in poverty in 1899, Renoir had a great Salon success in 1879.[20] Monet became secure financially during the early 1880s and so did Pissarro by the early 1890s. By this time the methods of Impressionist painting, in a diluted form, had become commonplace in Salon art.[21]
|
36 |
+
|
37 |
+
French painters who prepared the way for Impressionism include the Romantic colourist Eugène Delacroix, the leader of the realists Gustave Courbet, and painters of the Barbizon school such as Théodore Rousseau. The Impressionists learned much from the work of Johan Barthold Jongkind, Jean-Baptiste-Camille Corot and Eugène Boudin, who painted from nature in a direct and spontaneous style that prefigured Impressionism, and who befriended and advised the younger artists.
|
38 |
+
|
39 |
+
A number of identifiable techniques and working habits contributed to the innovative style of the Impressionists. Although these methods had been used by previous artists—and are often conspicuous in the work of artists such as Frans Hals, Diego Velázquez, Peter Paul Rubens, John Constable, and J. M. W. Turner—the Impressionists were the first to use them all together, and with such consistency. These techniques include:
|
40 |
+
|
41 |
+
New technology played a role in the development of the style. Impressionists took advantage of the mid-century introduction of premixed paints in tin tubes (resembling modern toothpaste tubes), which allowed artists to work more spontaneously, both outdoors and indoors.[22] Previously, painters made their own paints individually, by grinding and mixing dry pigment powders with linseed oil, which were then stored in animal bladders.[23]
|
42 |
+
|
43 |
+
Many vivid synthetic pigments became commercially available to artists for the first time during the 19th century. These included cobalt blue, viridian, cadmium yellow, and synthetic ultramarine blue, all of which were in use by the 1840s, before Impressionism.[24] The Impressionists' manner of painting made bold use of these pigments, and of even newer colours such as cerulean blue,[4] which became commercially available to artists in the 1860s.[24]
|
44 |
+
|
45 |
+
The Impressionists' progress toward a brighter style of painting was gradual. During the 1860s, Monet and Renoir sometimes painted on canvases prepared with the traditional red-brown or grey ground.[25] By the 1870s, Monet, Renoir, and Pissarro usually chose to paint on grounds of a lighter grey or beige colour, which functioned as a middle tone in the finished painting.[25] By the 1880s, some of the Impressionists had come to prefer white or slightly off-white grounds, and no longer allowed the ground colour a significant role in the finished painting.[26]
|
46 |
+
|
47 |
+
Prior to the Impressionists, other painters, notably such 17th-century Dutch painters as Jan Steen, had emphasized common subjects, but their methods of composition were traditional. They arranged their compositions so that the main subject commanded the viewer's attention. J. M. W. Turner, while an artist of the Romantic era, anticipated the style of impressionism with his artwork.[27] The Impressionists relaxed the boundary between subject and background so that the effect of an Impressionist painting often resembles a snapshot, a part of a larger reality captured as if by chance.[28] Photography was gaining popularity, and as cameras became more portable, photographs became more candid. Photography inspired Impressionists to represent momentary action, not only in the fleeting lights of a landscape, but in the day-to-day lives of people.[29][30]
|
48 |
+
|
49 |
+
The development of Impressionism can be considered partly as a reaction by artists to the challenge presented by photography, which seemed to devalue the artist's skill in reproducing reality. Both portrait and landscape paintings were deemed somewhat deficient and lacking in truth as photography "produced lifelike images much more efficiently and reliably".[31]
|
50 |
+
|
51 |
+
In spite of this, photography actually inspired artists to pursue other means of creative expression, and rather than compete with photography to emulate reality, artists focused "on the one thing they could inevitably do better than the photograph—by further developing into an art form its very subjectivity in the conception of the image, the very subjectivity that photography eliminated".[31] The Impressionists sought to express their perceptions of nature, rather than create exact representations. This allowed artists to depict subjectively what they saw with their "tacit imperatives of taste and conscience".[32] Photography encouraged painters to exploit aspects of the painting medium, like colour, which photography then lacked: "The Impressionists were the first to consciously offer a subjective alternative to the photograph".[31]
|
52 |
+
|
53 |
+
Another major influence was Japanese ukiyo-e art prints (Japonism). The art of these prints contributed significantly to the "snapshot" angles and unconventional compositions that became characteristic of Impressionism. An example is Monet's Jardin à Sainte-Adresse, 1867, with its bold blocks of colour and composition on a strong diagonal slant showing the influence of Japanese prints.[34]
|
54 |
+
|
55 |
+
Edgar Degas was both an avid photographer and a collector of Japanese prints.[35] His The Dance Class (La classe de danse) of 1874 shows both influences in its asymmetrical composition. The dancers are seemingly caught off guard in various awkward poses, leaving an expanse of empty floor space in the lower right quadrant. He also captured his dancers in sculpture, such as the Little Dancer of Fourteen Years.
|
56 |
+
|
57 |
+
Impressionists, in varying degrees, were looking for ways to depict visual experience and contemporary subjects.[36] Women Impressionists were interested in these same ideals but had many social and career limitations compared to male Impressionists. In particular, they were excluded from the imagery of the bourgeois social sphere of the boulevard, cafe, and dance hall.[37] As well as imagery, women were excluded from the formative discussions that resulted in meetings in those places; that was where male Impressionists were able to form and share ideas about Impressionism.[37] In the academic realm, women were believed to be incapable of handling complex subjects which led teachers to restrict what they taught female students.[38] It was also considered unladylike to excel in art since women's true talents were then believed to center on homemaking and mothering.[38]
|
58 |
+
|
59 |
+
Yet several women were able to find success during their lifetime, even though their careers were affected by personal circumstances – Bracquemond, for example, had a husband who was resentful of her work which caused her to give up painting.[39] The four most well known, namely, Mary Cassatt, Eva Gonzalès, Marie Bracquemond, and Berthe Morisot, are, and were, often referred to as the 'Women Impressionists'. Their participation in the series of eight Impressionist exhibitions that took place in Paris from 1874 to 1886 varied: Morisot participated in seven, Cassatt in four, Bracquemond in three, and Gonzalès did not participate.[39][40]
|
60 |
+
|
61 |
+
|
62 |
+
|
63 |
+
The critics of the time lumped these four together without regard to their personal styles, techniques, or subject matter.[41] Critics viewing their works at the exhibitions often attempted to acknowledge the women artists' talents but circumscribed them within a limited notion of femininity.[42] Arguing for the suitability of Impressionist technique to women's manner of perception, Parisian critic S.C. de Soissons wrote:
|
64 |
+
|
65 |
+
One can understand that women have no originality of thought, and that literature and music have no feminine character; but surely women know how to observe, and what they see is quite different from that which men see, and the art which they put in their gestures, in their toilet, in the decoration of their environment is sufficient to give us the idea of an instinctive, of a peculiar genius which resides in each one of them.[43]
|
66 |
+
|
67 |
+
While Impressionism legitimized the domestic social life as subject matter, of which women had intimate knowledge, it also tended to limit them to that subject matter. Portrayals of often-identifiable sitters in domestic settings (which could offer commissions) were dominant in the exhibitions.[44] The subjects of the paintings were often women interacting with their environment by either their gaze or movement. Cassatt, in particular, was aware of her placement of subjects: she kept her predominantly female figures from objectification and cliche; when they are not reading, they converse, sew, drink tea, and when they are inactive, they seem lost in thought.[45]
|
68 |
+
|
69 |
+
The women Impressionists, like their male counterparts, were striving for "truth," for new ways of seeing and new painting techniques; each artist had an individual painting style.[46] Women Impressionists (particularly Morisot and Cassatt) were conscious of the balance of power between women and objects in their paintings – the bourgeois women depicted are not defined by decorative objects, but instead, interact with and dominate the things with which they live.[47] There are many similarities in their depictions of women who seem both at ease and subtly confined.[48] Gonzalès' Box at the Italian Opera depicts a woman staring into the distance, at ease in a social sphere but confined by the box and the man standing next to her. Cassatt's painting Young Girl at a Window is brighter in color but remains constrained by the canvas edge as she looks out the window.
|
70 |
+
|
71 |
+
Despite their success in building careers, and despite Impressionism's demise being attributed to its allegedly feminine characteristics (its sensuality, dependence on sensation, physicality, and fluidity), the four women artists (and other, lesser-known women Impressionists) were largely omitted from art historical textbooks covering Impressionist artists until Tamar Garb's Women Impressionists was published in 1986.[49] For example, Impressionism by Jean Leymarie, published in 1955, included no information on any women Impressionists.
|
72 |
+
|
73 |
+
The central figures in the development of Impressionism in France,[50][51] listed alphabetically, were:
Frédéric Bazille, Paysage au bord du Lez, 1870, Minneapolis Institute of Art
Alfred Sisley, Bridge at Villeneuve-la-Garenne, 1872, Metropolitan Museum of Art
Berthe Morisot, The Cradle, 1872, Musée d'Orsay
Armand Guillaumin, Sunset at Ivry (Soleil couchant à Ivry), 1873, Musée d'Orsay
Édouard Manet, Boating, 1874, Metropolitan Museum of Art
Alfred Sisley, La Seine au Point du jour, 1877, Museum of modern art André Malraux - MuMa, Le Havre
Édouard Manet, The Plum, 1878, National Gallery of Art, Washington, D.C.
Édouard Manet, A Bar at the Folies-Bergère (Un Bar aux Folies-Bergère), 1882, Courtauld Institute of Art
Edgar Degas, After the Bath, Woman Drying Herself, c. 1884–1886 (reworked between 1890 and 1900), MuMa, Le Havre
Edgar Degas, L'Absinthe, 1876, Musée d'Orsay, Paris
Edgar Degas, Dancer with a Bouquet of Flowers (Star of the Ballet), 1878, Getty Center, Los Angeles
Edgar Degas, Woman in the Bath, 1886, Hill–Stead Museum, Farmington, Connecticut
Edgar Degas, Dancers at The Bar, 1888, The Phillips Collection, Washington, D.C.
Gustave Caillebotte, Paris Street; Rainy Day, 1877, Art Institute of Chicago
Pierre-Auguste Renoir, La Parisienne, 1874, National Museum Cardiff
Pierre-Auguste Renoir, Portrait of Irène Cahen d'Anvers (La Petite Irène), 1880, Foundation E.G. Bührle, Zürich
Pierre-Auguste Renoir, Two Sisters (On the Terrace), 1881, Art Institute of Chicago
Pierre-Auguste Renoir, Girl with a Hoop, 1885, National Gallery of Art, Washington, D.C.
Claude Monet, The Cliff at Étretat after the Storm, 1885, Clark Art Institute, Williamstown, Massachusetts
Mary Cassatt, The Child's Bath (The Bath), 1893, oil on canvas, Art Institute of Chicago
Berthe Morisot, Portrait of Mme Boursier and Her Daughter, c. 1873, Brooklyn Museum
Claude Monet, Le Grand Canal, 1908, Museum of Fine Arts, Boston
The Impressionists
Among the close associates of the Impressionists were several painters who adopted their methods to some degree. These include Jean-Louis Forain (who participated in Impressionist exhibitions in 1879, 1880, 1881 and 1886)[54] and Giuseppe De Nittis, an Italian artist living in Paris who participated in the first Impressionist exhibit at the invitation of Degas, although the other Impressionists disparaged his work.[55] Federico Zandomeneghi was another Italian friend of Degas who showed with the Impressionists. Eva Gonzalès was a follower of Manet who did not exhibit with the group. James Abbott McNeill Whistler was an American-born painter who played a part in Impressionism although he did not join the group and preferred grayed colours. Walter Sickert, an English artist, was initially a follower of Whistler, and later an important disciple of Degas; he did not exhibit with the Impressionists. In 1904 the artist and writer Wynford Dewhurst wrote the first important study of the French painters published in English, Impressionist Painting: its genesis and development, which did much to popularize Impressionism in Great Britain.
By the early 1880s, Impressionist methods were affecting, at least superficially, the art of the Salon. Fashionable painters such as Jean Béraud and Henri Gervex found critical and financial success by brightening their palettes while retaining the smooth finish expected of Salon art.[56] Works by these artists are sometimes casually referred to as Impressionism, despite their remoteness from Impressionist practice.
The influence of the French Impressionists lasted long after most of them had died. Artists like J.D. Kirszenbaum were borrowing Impressionist techniques throughout the twentieth century.
As the influence of Impressionism spread beyond France, artists, too numerous to list, became identified as practitioners of the new style. Some of the more important examples are:
The sculptor Auguste Rodin is sometimes called an Impressionist for the way he used roughly modeled surfaces to suggest transient light effects.[57]
Pictorialist photographers whose work is characterized by soft focus and atmospheric effects have also been called Impressionists.
French Impressionist Cinema is a term applied to a loosely defined group of films and filmmakers in France from 1919–1929, although these years are debatable. French Impressionist filmmakers include Abel Gance, Jean Epstein, Germaine Dulac, Marcel L’Herbier, Louis Delluc, and Dmitry Kirsanoff.
Musical Impressionism is the name given to a movement in European classical music that arose in the late 19th century and continued into the middle of the 20th century. Originating in France, musical Impressionism is characterized by suggestion and atmosphere, and eschews the emotional excesses of the Romantic era. Impressionist composers favoured short forms such as the nocturne, arabesque, and prelude, and often explored uncommon scales such as the whole tone scale. Perhaps the most notable innovations of Impressionist composers were the introduction of major 7th chords and the extension of chord structures in 3rds to five- and six-part harmonies.
The influence of visual Impressionism on its musical counterpart is debatable. Claude Debussy and Maurice Ravel are generally considered the greatest Impressionist composers, but Debussy disavowed the term, calling it the invention of critics. Erik Satie was also considered in this category, though his approach was regarded as less serious, more musical novelty in nature. Paul Dukas is another French composer sometimes considered an Impressionist, but his style is perhaps more closely aligned to the late Romanticists. Musical Impressionism beyond France includes the work of such composers as Ottorino Respighi (Italy), Ralph Vaughan Williams, Cyril Scott, and John Ireland (England), Manuel De Falla and Isaac Albeniz (Spain), and Charles Griffes (America).
The term Impressionism has also been used to describe works of literature in which a few select details suffice to convey the sensory impressions of an incident or scene. Impressionist literature is closely related to Symbolism, with its major exemplars being Baudelaire, Mallarmé, Rimbaud, and Verlaine. Authors such as Virginia Woolf, D.H. Lawrence, and Joseph Conrad have written works that are Impressionistic in the way that they describe, rather than interpret, the impressions, sensations and emotions that constitute a character's mental life.
During the 1880s several artists began to develop different precepts for the use of colour, pattern, form, and line, derived from the Impressionist example: Vincent van Gogh, Paul Gauguin, Georges Seurat, and Henri de Toulouse-Lautrec. These artists were slightly younger than the Impressionists, and their work is known as post-Impressionism. Some of the original Impressionist artists also ventured into this new territory; Camille Pissarro briefly painted in a pointillist manner, and even Monet abandoned strict plein air painting. Paul Cézanne, who participated in the first and third Impressionist exhibitions, developed a highly individual vision emphasising pictorial structure, and he is more often called a post-Impressionist. Although these cases illustrate the difficulty of assigning labels, the work of the original Impressionist painters may, by definition, be categorised as Impressionism.
Georges Seurat, A Sunday Afternoon on the Island of La Grande Jatte, 1884–1886, The Art Institute of Chicago
Vincent van Gogh, Cypresses, 1889, Metropolitan Museum of Art
Paul Gauguin, The Midday Nap, 1894, Metropolitan Museum of Art
Paul Cézanne, The Card Players, 1894–1895, Musée d'Orsay, Paris
en/2711.html.txt
ADDED
@@ -0,0 +1,110 @@
In computing, a printer is a peripheral device which makes a persistent representation of graphics or text, usually on paper.[1] While most output is human-readable, bar code printers are an example of an expanded use for printers.[2] The different types of printers include 3D printer, inkjet printer, laser printer, thermal printer, etc.[3]
The first computer printer designed was a mechanically driven apparatus by Charles Babbage for his difference engine in the 19th century; however, his mechanical printer design was not built until 2000.[4]
The first electronic printer was the EP-101, invented by Japanese company Epson and released in 1968.[5][6]
The first commercial printers generally used mechanisms from electric typewriters and Teletype machines. The demand for higher speed led to the development of new systems specifically for computer use. In the 1980s there were daisy wheel systems similar to typewriters, line printers that produced similar output but at much higher speed, and dot matrix systems that could mix text and graphics but produced relatively low-quality output. The plotter was used for those requiring high quality line art like blueprints.
The introduction of the low-cost laser printer in 1984 with the first HP LaserJet,[7] and the addition of PostScript in next year's Apple LaserWriter, set off a revolution in printing known as desktop publishing.[8] Laser printers using PostScript mixed text and graphics, like dot-matrix printers, but at quality levels formerly available only from commercial typesetting systems. By 1990, most simple printing tasks like fliers and brochures were now created on personal computers and then laser printed; expensive offset printing systems were being dumped as scrap. The HP Deskjet of 1988 offered the same advantages as a laser printer in terms of flexibility, but produced somewhat lower quality output (depending on the paper) from much less expensive mechanisms. Inkjet systems rapidly displaced dot matrix and daisy wheel printers from the market. By the 2000s high-quality printers of this sort had fallen under the $100 price point and became commonplace.
The rapid uptake of internet email through the 1990s and into the 2000s largely displaced the need for printing as a means of moving documents, and a wide variety of reliable storage systems means that a "physical backup" is of little benefit today. Even the desire for printed output for "offline reading" while on mass transit or aircraft has been displaced by e-book readers and tablet computers. Today, traditional printers are being used more for special purposes, like printing photographs or artwork, and are no longer a must-have peripheral.[opinion]
Starting around 2010, 3D printing became an area of intense interest, allowing the creation of physical objects with the same sort of effort as an early laser printer required to produce a brochure. These devices are in their earliest stages of development and have not yet become commonplace.[citation needed]
Personal printers are primarily designed to support individual users, and may be connected to only a single computer. These printers are designed for low-volume, short-turnaround print jobs, requiring minimal setup time to produce a hard copy of a given document. However, they are generally slow devices ranging from 6 to around 25 pages per minute (ppm),
and the cost per page is relatively high. However, this is offset by the on-demand convenience. Some printers can print documents stored on memory cards or from digital cameras and scanners.
Networked or shared printers are "designed for high-volume, high-speed printing". They are usually shared by many users on a network and can print at speeds of 45 to around 100 ppm.[9] The Xerox 9700 could achieve 120 ppm.
A virtual printer is a piece of computer software whose user interface and API resembles that of a printer driver, but which is not connected with a physical computer printer. A virtual printer can be used to create a file which is an image of the data which would be printed, for archival purposes or as input to another program, for example to create a PDF or to transmit to another system or user.
A barcode printer is a computer peripheral for printing barcode labels or tags that can be attached to, or printed directly on, physical objects. Barcode printers are commonly used to label cartons before shipment, or to label retail items with UPCs or EANs.
A 3D printer is a device for making a three-dimensional object from a 3D model or other electronic data source through additive processes in which successive layers of material (including plastics, metals, food, cement, wood, and other materials) are laid down under computer control. It is called a printer by analogy with an inkjet printer which produces a two-dimensional document by a similar process of depositing a layer of ink on paper.
The choice of print technology has a great effect on the cost of the printer and cost of operation, speed, quality and permanence of documents, and noise. Some printer technologies[10] do not work with certain types of physical media, such as carbon paper or transparencies.
A second aspect of printer technology that is often forgotten is resistance to alteration: liquid ink, such as from an inkjet head or fabric ribbon, becomes absorbed by the paper fibers, so documents printed with liquid ink are more difficult to alter than documents printed with toner or solid inks, which do not penetrate below the paper surface.
Cheques can be printed with liquid ink or on special cheque paper with toner anchorage so that alterations may be detected.[11] The machine-readable lower portion of a cheque must be printed using MICR toner or ink. Banks and other clearing houses employ automation equipment that relies on the magnetic flux from these specially printed characters to function properly.
The following printing technologies are routinely found in modern printers:
A laser printer rapidly produces high quality text and graphics. As with digital photocopiers and multifunction printers (MFPs), laser printers employ a xerographic printing process but differ from analog photocopiers in that the image is produced by the direct scanning of a laser beam across the printer's photoreceptor.
Another toner-based printer is the LED printer which uses an array of LEDs instead of a laser to cause toner adhesion to the print drum.
Inkjet printers operate by propelling variably sized droplets of liquid ink onto almost any sized page. They are the most common type of computer printer used by consumers.
Solid ink printers, also known as phase-change printers, are a type of thermal transfer printer. They use solid sticks of CMYK-coloured ink, similar in consistency to candle wax, which are melted and fed into a piezo crystal operated print-head. The printhead sprays the ink on a rotating, oil coated drum. The paper then passes over the print drum, at which time the image is immediately transferred, or transfixed, to the page. Solid ink printers are most commonly used as colour office printers and are excellent at printing on transparencies and other non-porous media. Solid ink printers can produce excellent results. Acquisition and operating costs are similar to laser printers. Drawbacks of the technology include high energy consumption and long warm-up times from a cold state. Also, some users complain that the resulting prints are difficult to write on, as the wax tends to repel inks from pens, and are difficult to feed through automatic document feeders, but these traits have been significantly reduced in later models. In addition, this type of printer is only available from one manufacturer, Xerox, manufactured as part of their Xerox Phaser office printer line. Previously, solid ink printers were manufactured by Tektronix, but Tek sold the printing business to Xerox in 2001.
A dye-sublimation printer (or dye-sub printer) is a printer that employs a printing process that uses heat to transfer dye to a medium such as a plastic card, paper, or canvas. The process is usually to lay one colour at a time using a ribbon that has colour panels. Dye-sub printers are intended primarily for high-quality colour applications, including colour photography; and are less well-suited for text. While once the province of high-end print shops, dye-sublimation printers are now increasingly used as dedicated consumer photo printers.
Thermal printers work by selectively heating regions of the special heat-sensitive paper. Monochrome thermal printers are used in cash registers, ATMs, gasoline dispensers and some older inexpensive fax machines. Colours can be achieved with special papers and different temperatures and heating rates for different colours; these coloured sheets are not required in black-and-white output. One example is Zink (a portmanteau of "zero ink").
The following technologies are either obsolete or limited to special applications, though most were, at one time, in widespread use.
Impact printers rely on a forcible impact to transfer ink to the media. The impact printer uses a print head that either hits the surface of the ink ribbon, pressing the ink ribbon against the paper (similar to the action of a typewriter), or, less commonly, hits the back of the paper, pressing the paper against the ink ribbon (the IBM 1403 for example). All but the dot matrix printer rely on the use of fully formed characters, letterforms that represent each of the characters that the printer was capable of printing. In addition, most of these printers were limited to monochrome, or sometimes two-color, printing in a single typeface at one time, although bolding and underlining of text could be done by "overstriking", that is, printing two or more impressions either in the same character position or slightly offset. Impact printers varieties include typewriter-derived printers, teletypewriter-derived printers, daisywheel printers, dot matrix printers, and line printers. Dot-matrix printers remain in common use in businesses where multi-part forms are printed. An overview of impact printing[12] contains a detailed description of many of the technologies used.
Several different computer printers were simply computer-controllable versions of existing electric typewriters. The Friden Flexowriter and IBM Selectric-based printers were the most-common examples. The Flexowriter printed with a conventional typebar mechanism while the Selectric used IBM's well-known "golf ball" printing mechanism. In either case, the letter form then struck a ribbon which was pressed against the paper, printing one character at a time. The maximum speed of the Selectric printer (the faster of the two) was 15.5 characters per second.
The common teleprinter could easily be interfaced with the computer and became very popular except for those computers manufactured by IBM. Some models used a "typebox" that was positioned, in the X- and Y-axes, by a mechanism, and the selected letter form was struck by a hammer. Others used a type cylinder in a similar way as the Selectric typewriters used their type ball. In either case, the letter form then struck a ribbon to print the letterform. Most teleprinters operated at ten characters per second although a few achieved 15 CPS.
Daisy wheel printers operate in much the same fashion as a typewriter. A hammer strikes a wheel with petals, the "daisy wheel", each petal containing a letter form at its tip. The letter form strikes a ribbon of ink, depositing the ink on the page and thus printing a character. By rotating the daisy wheel, different characters are selected for printing. These printers were also referred to as letter-quality printers because they could produce a text which was as clear and crisp as a typewriter. The fastest letter-quality printers printed at 30 characters per second.
The term dot matrix printer is used for impact printers that use a matrix of small pins to transfer ink to the page.[13] The advantage of dot matrix over other impact printers is that they can produce graphical images in addition to text; however the text is generally of poorer quality than impact printers that use letterforms (type).
Dot-matrix printers can be broadly divided into two major classes:
Dot matrix printers can either be character-based or line-based (that is, a single horizontal series of pixels across the page), referring to the configuration of the print head.
In the 1970s and '80s, dot matrix printers were one of the more common types of printers used for general use, such as for home and small office use. Such printers normally had either 9 or 24 pins on the print head (early 7 pin printers also existed, which did not print descenders). There was a period during the early home computer era when a range of printers were manufactured under many brands such as the Commodore VIC-1525 using the Seikosha Uni-Hammer system. This used a single solenoid with an oblique striker that would be actuated 7 times for each column of 7 vertical pixels while the head was moving at a constant speed. The angle of the striker would align the dots vertically even though the head had moved one dot spacing in the time. The vertical dot position was controlled by a synchronized longitudinally ribbed platen behind the paper that rotated rapidly with a rib moving vertically seven dot spacings in the time it took to print one pixel column.[14] 24-pin print heads were able to print at a higher quality and started to offer additional type styles and were marketed as Near Letter Quality by some vendors. Once the price of inkjet printers dropped to the point where they were competitive with dot matrix printers, dot matrix printers began to fall out of favour for general use.
Some dot matrix printers, such as the NEC P6300, can be upgraded to print in colour. This is achieved through the use of a four-colour ribbon mounted on a mechanism (provided in an upgrade kit that replaces the standard black ribbon mechanism after installation) that raises and lowers the ribbons as needed. Colour graphics are generally printed in four passes at standard resolution, thus slowing down printing considerably. As a result, colour graphics can take up to four times longer to print than standard monochrome graphics, or up to 8-16 times as long at high resolution mode.
Dot matrix printers are still commonly used in low-cost, low-quality applications such as cash registers, or in demanding, very high volume applications like invoice printing. Impact printing, unlike laser printing, allows the pressure of the print head to be applied to a stack of two or more forms to print multi-part documents such as sales invoices and credit card receipts using continuous stationery with carbonless copy paper. It also has security advantages as ink impressed into a paper matrix by force is harder to erase invisibly. Dot-matrix printers were being superseded even as receipt printers after the end of the twentieth century.
Line printers print an entire line of text at a time. Four principal designs exist.
In each case, to print a line, precisely timed hammers strike against the back of the paper at the exact moment that the correct character to be printed is passing in front of the paper. The paper presses forward against a ribbon, which then presses against the character form, and the impression of the character form is printed onto the paper. Each system could have slight timing issues, which could cause minor misalignment of the resulting printed characters. For drum or typebar printers, this appeared as vertical misalignment, with characters printed slightly above or below the rest of the line. In chain or bar printers, the misalignment was horizontal, with printed characters crowded closer together or farther apart. This was much less noticeable to human vision than vertical misalignment, where characters seemed to bounce up and down in the line, so chain and bar printers were considered to produce higher quality print.
Line printers are the fastest of all impact printers and are used for bulk printing in large computer centres. A line printer can print at 1,100 lines per minute or faster, frequently printing pages more rapidly than many current laser printers. On the other hand, the mechanical components of line printers operate with tight tolerances and require regular preventive maintenance (PM) to produce top-quality print. They are virtually never used with personal computers and have now been replaced by high-speed laser printers. The legacy of line printers lives on in many computer operating systems, which use the abbreviations "lp", "LPR", or "LPT" to refer to printers.
Liquid ink electrostatic printers use a chemical coated paper, which is charged by the print head according to the image of the document.[19] The paper is passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image. This process was developed from the process of electrostatic copying.[20] Color reproduction is very accurate, and because there is no heating the scale distortion is less than ±0.1%. (All laser printers have an accuracy of ±1%.)
Worldwide, most survey offices used this printer before color inkjet plotters became popular. Liquid ink electrostatic printers were mostly available in 36 to 54 inches (910 to 1,370 mm) widths, and some offered 6-color printing. They were also used to print large billboards. The technology was first introduced by Versatec, which was later bought by Xerox. 3M also used to make these printers.[21]
Pen-based plotters were an alternate printing technology once common in engineering and architectural firms. Pen-based plotters rely on contact with the paper (but not impact, per se) and special purpose pens that are mechanically run over the paper to create text and images. Since the pens output continuous lines, they were able to produce technical drawings of higher resolution than was achievable with dot-matrix technology.[22] Some plotters used roll-fed paper, and therefore had a minimal restriction on the size of the output in one dimension. These plotters were capable of producing quite sizable drawings.
A number of other sorts of printers are important for historical reasons, or for special purpose uses.
Most printers other than line printers accept control characters or unique character sequences to control various printer functions. These may range from shifting from lower to upper case or from black to red ribbon on typewriter printers to switching fonts and changing character sizes and colors on raster printers. Early printer controls were not standardized, with each manufacturer's equipment having its own set. The IBM Personal Printer Data Stream (PPDS) became a commonly used command set for dot-matrix printers.
Today, most printers accept one or more page description languages (PDLs). Laser printers with greater processing power frequently offer support for variants of Hewlett-Packard's Printer Command Language (PCL), PostScript or XML Paper Specification. Most inkjet devices support manufacturer-proprietary PDLs such as ESC/P. The diversity of mobile platforms has led to various standardization efforts around device PDLs, such as the Printer Working Group's (PWG) PWG Raster.
The speed of early printers was measured in units of characters per minute (cpm) for character printers, or lines per minute (lpm) for line printers. Modern printers are measured in pages per minute (ppm). These measures are used primarily as a marketing tool, and are not as well standardised as toner yields. Usually pages per minute refers to sparse monochrome office documents, rather than dense pictures which usually print much more slowly, especially colour images. Speeds in ppm usually apply to A4 paper in most countries in the world, and letter paper size, about 6% shorter, in North America.
The data received by a printer may be a string of characters, a bitmapped image, a vector image, or a computer program written in a page description language.
Some printers can process all four types of data, others not.
Today it is possible to print everything (even plain text) by sending ready bitmapped images to the printer. This allows better control over formatting, especially among machines from different vendors. Many printer drivers do not use the text mode at all, even if the printer is capable of it.[citation needed]
A monochrome printer can only produce an image consisting of one colour, usually black. A monochrome printer may also be able to produce various tones of that color, such as a grey-scale. A colour printer can produce images of multiple colours. A photo printer is a colour printer that can produce images that mimic the colour range (gamut) and resolution of prints made from photographic film. Many can be used on a standalone basis without a computer, using a memory card or USB connector.
The page yield is the number of pages that can be printed from a toner cartridge or ink cartridge before it needs to be refilled or replaced.
The actual number of pages yielded by a specific cartridge depends on a number of factors.[23]
For a fair comparison, many laser printer manufacturers use the ISO/IEC 19752 process to measure the toner cartridge yield.[24][25]
In order to fairly compare operating expenses of printers with a relatively small ink cartridge to printers with a larger, more expensive toner cartridge that typically holds more toner and so prints more pages before the cartridge needs to be replaced, many people prefer to estimate operating expenses in terms of cost per page (CPP).[24]
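As an illustration of how such a comparison can be made, the short Python sketch below folds together the amortized purchase price of the printer and the consumable cost per page; every price, page yield and lifetime figure in it is an invented assumption for the example, not a specification of any real printer.

# Hypothetical cost-per-page (CPP) comparison; all numbers are illustrative assumptions.
def cost_per_page(printer_price, lifetime_pages, cartridge_price, cartridge_yield):
    """Amortized hardware cost per page plus consumable cost per page."""
    return printer_price / lifetime_pages + cartridge_price / cartridge_yield

# "Cheap printer - expensive ink" versus "expensive printer - cheap ink"
cheap_printer = cost_per_page(60.0, 20_000, 30.0, 250)       # small ink cartridge
pricey_printer = cost_per_page(250.0, 100_000, 80.0, 2_500)  # large toner cartridge
print(f"cheap printer: {cheap_printer:.3f} per page; pricey printer: {pricey_printer:.3f} per page")

On these made-up figures the cheaper machine works out to about 0.12 per page and the dearer one to about 0.035 per page, illustrating the "cheap printer – expensive ink" versus "expensive printer – cheap ink" trade-off discussed below.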
Retailers often apply the "razor and blades" business model: a company may sell a printer at cost, and make profits on the ink cartridge, paper, or some other replacement part. This has caused legal disputes regarding the right of companies other than the printer manufacturer to sell compatible ink cartridges. To protect their business model, several manufacturers invest heavily in developing new cartridge technology and patenting it.
Other manufacturers, in reaction to the challenges from using this business model, choose to make more money on printers and less on the ink, promoting the latter through their advertising campaigns. Finally, this generates two clearly different proposals: "cheap printer – expensive ink" or "expensive printer – cheap ink". Ultimately, the consumer decision depends on their reference interest rate or their time preference. From an economics viewpoint, there is a clear trade-off between cost per copy and cost of the printer.
Printer steganography is a type of steganography – "hiding data within data"[26] – produced by color printers, including Brother, Canon, Dell, Epson, HP, IBM, Konica Minolta, Kyocera, Lanier, Lexmark, Ricoh, Toshiba and Xerox[27] brand color laser printers, where tiny yellow dots are added to each page. The dots are barely visible and contain encoded printer serial numbers, as well as date and time stamps.
More than half of all printers sold at U.S. retail in 2010 were wireless-capable, but nearly three-quarters of consumers who have access to those printers weren't taking advantage of the increased access to print from multiple devices according to the new Wireless Printing Study.[28]
en/2712.html.txt
ADDED
@@ -0,0 +1,117 @@
A printing press is a mechanical device for applying pressure to an inked surface resting upon a print medium (such as paper or cloth), thereby transferring the ink. It marked a dramatic improvement on earlier printing methods in which the cloth, paper or other medium was brushed or rubbed repeatedly to achieve the transfer of ink, and accelerated the process. Typically used for texts, the invention and global spread of the printing press was one of the most influential events in the second millennium.[1][2]
In Germany, around 1440, goldsmith Johannes Gutenberg invented the printing press, which started the Printing Revolution. Modelled on the design of existing screw presses, a single Renaissance printing press could produce up to 3,600 pages per workday,[3] compared to forty by hand-printing and a few by hand-copying.[4] Gutenberg's newly devised hand mould made possible the precise and rapid creation of metal movable type in large quantities. His two inventions, the hand mould and the printing press, together drastically reduced the cost of printing books and other documents in Europe, particularly for shorter print runs.
From Mainz the printing press spread within several decades to over two hundred cities in a dozen European countries.[5] By 1500, printing presses in operation throughout Western Europe had already produced more than twenty million volumes.[5] In the 16th century, with presses spreading further afield, their output rose tenfold to an estimated 150 to 200 million copies.[5] The operation of a press became synonymous with the enterprise of printing, and lent its name to a new medium of expression and communication, "the press".[6]
In Renaissance Europe, the arrival of mechanical movable type printing introduced the era of mass communication, which permanently altered the structure of society. The relatively unrestricted circulation of information and (revolutionary) ideas transcended borders, captured the masses in the Reformation and threatened the power of political and religious authorities. The sharp increase in literacy broke the monopoly of the literate elite on education and learning and bolstered the emerging middle class. Across Europe, the increasing cultural self-awareness of its peoples led to the rise of proto-nationalism, and accelerated by the development of European vernacular languages, to the detriment of Latin's status as lingua franca.[7] In the 19th century, the replacement of the hand-operated Gutenberg-style press by steam-powered rotary presses allowed printing on an industrial scale.[8]
The rapid economic and socio-cultural development of late medieval society in Europe created favorable intellectual and technological conditions for Gutenberg's improved version of the printing press: the entrepreneurial spirit of emerging capitalism increasingly made its impact on medieval modes of production, fostering economic thinking and improving the efficiency of traditional work-processes. The sharp rise of medieval learning and literacy amongst the middle class led to an increased demand for books which the time-consuming hand-copying method fell far short of accommodating.[9]
Technologies preceding the press that led to the press's invention included: manufacturing of paper, development of ink, woodblock printing, and distribution of eyeglasses.[10] At the same time, a number of medieval products and technological processes had reached a level of maturity which allowed their potential use for printing purposes. Gutenberg took up these far-flung strands, combined them into one complete and functioning system, and perfected the printing process through all its stages by adding a number of inventions and innovations of his own:
The screw press, which allowed direct pressure to be applied on a flat plane, was already of great antiquity in Gutenberg's time and was used for a wide range of tasks.[11] Introduced in the 1st century AD by the Romans, it was commonly employed in agricultural production for pressing wine grapes and olives (for olive oil), both of which formed an integral part of the Mediterranean and medieval diet.[12] The device was also used from very early on in urban contexts as a cloth press for printing patterns.[13] Gutenberg may have also been inspired by the paper presses which had spread through the German lands since the late 14th century and which worked on the same mechanical principles.[14]
During the Islamic Golden Age, Arab Muslims were printing texts, including passages from the Qur’an, embracing the Chinese craft of paper making, developed it and adopted it widely in the Muslim world, which led to a major increase in the production of manuscript texts. In Egypt during the Fatimid era, the printing technique was adopted reproducing texts on paper strips and supplying them in various copies to meet the demand.[15]
Gutenberg adopted the basic design, thereby mechanizing the printing process.[16] Printing, however, put a demand on the machine quite different from pressing. Gutenberg adapted the construction so that the pressing power exerted by the platen on the paper was now applied both evenly and with the required sudden elasticity. To speed up the printing process, he introduced a movable undertable with a plane surface on which the sheets could be swiftly changed.[17]
The concept of movable type existed prior to 15th century Europe; sporadic evidence that the typographical principle, the idea of creating a text by reusing individual characters, was known had been cropping up since the 12th century and possibly before (the oldest known application dating back as far as the Phaistos disc). The known examples range from movable type printing in China during the Song dynasty, in Korea during the Goryeo Dynasty, where metal movable-type printing technology was developed in 1234,[18][19] to Germany (Prüfening inscription) and England (letter tiles) and Italy (Altarpiece of Pellegrino II).[20] However, the various techniques employed (imprinting, punching and assembling individual letters) did not have the refinement and efficiency needed to become widely accepted. Tsuen-Hsuin and Needham, and Briggs and Burke suggest that the movable type printing in China and Korea was rarely employed.[18][19] Ibrahim Muteferrika of the Ottoman Empire ran a printing press with movable Arabic type.[21]
Gutenberg greatly improved the process by treating typesetting and printing as two separate work steps. A goldsmith by profession, he created his type pieces from a lead-based alloy which suited printing purposes so well that it is still used today.[22] The mass production of metal letters was achieved by his key invention of a special hand mould, the matrix.[23] The Latin alphabet proved to be an enormous advantage in the process because, in contrast to logographic writing systems, it allowed the type-setter to represent any text with a theoretical minimum of only around two dozen different letters.[24]
Another factor conducive to printing arose from the book existing in the format of the codex, which had originated in the Roman period.[25] Considered the most important advance in the history of the book prior to printing itself, the codex had completely replaced the ancient scroll at the onset of the Middle Ages (AD 500).[26] The codex holds considerable practical advantages over the scroll format; it is more convenient to read (by turning pages), more compact, and less costly, and both recto and verso sides could be used for writing or printing, unlike the scroll.[27]
A fourth development was the early success of medieval papermakers at mechanizing paper manufacture. The introduction of water-powered paper mills, the first certain evidence of which dates to 1282,[28] allowed for a massive expansion of production and replaced the laborious handcraft characteristic of both Chinese[29] and Muslim papermaking.[30] Papermaking centres began to multiply in the late 13th century in Italy, reducing the price of paper to one sixth of parchment and then falling further; papermaking centers reached Germany a century later.[31]
Despite this it appears that the final breakthrough of paper depended just as much on the rapid spread of movable-type printing.[32] It is notable that codices of parchment, which in terms of quality is superior to any other writing material,[33] still had a substantial share in Gutenberg's edition of the 42-line Bible.[34] After much experimentation, Gutenberg managed to overcome the difficulties which traditional water-based inks caused by soaking the paper, and found the formula for an oil-based ink suitable for high-quality printing with metal type.[35]
A printing press, in its classical form, is a standing mechanism ranging from 5 to 7 feet (1.5 to 2.1 m) long, 3 feet (0.91 m) wide, and 7 feet (2.1 m) tall. The small individual metal letters known as type would be set up by a compositor into the desired lines of text. Several lines of text would be arranged at once and placed in a wooden frame known as a galley. Once the correct number of pages were composed, the galleys would be laid face up in a frame known as a forme,[36] which itself is placed onto a flat stone, 'bed,' or 'coffin.' The text is inked using two balls, pads mounted on handles. The balls were made of dog-skin leather, because it has no pores,[37] and stuffed with sheep's wool; they were inked and the ink applied evenly to the type. One damp piece of paper was then taken from a heap of paper and placed on the tympan; the paper was damp as this lets the type 'bite' into the paper better. Small pins hold the paper in place. The paper is then held between a frisket and tympan (two frames covered with paper or parchment).
These are folded down, so that the paper lies on the surface of the inked type. The bed is rolled under the platen using a windlass mechanism turned by a small rotating handle called the 'rounce', and the impression is made with a screw that transmits pressure through the platen. The screw is turned by a long handle attached to it, known as the bar or 'Devil's Tail.' In a well-set-up press, the springiness of the paper, frisket, and tympan caused the bar to spring back and raise the platen; the windlass was turned again to move the bed back to its original position, the tympan and frisket were raised and opened, and the printed sheet removed. Such presses were always worked by hand. After around 1800, iron presses were developed, some of which could be operated by steam power.
The function of the press in the image on the left was described by William Skeen in 1872,
this sketch represents a press in its completed form, with tympans attached to the end of the carriage, and with the frisket above the tympans. The tympans, inner and outer, are thin iron frames, one fitting into the other, on each of which is stretched a skin of parchment or a breadth of fine cloth. A woollen blanket or two with a few sheets of paper are placed between these, the whole thus forming a thin elastic pad, on which the sheet to be printed is laid. The frisket is a slender frame-work, covered with coarse paper, on which an impression is first taken; the whole of the printed part is then cut out, leaving apertures exactly corresponding with the pages of type on the carriage of the press. The frisket when folded on to the tympans, and both turned down over the forme of types and run in under the platten, preserves the sheet from contact with any thing but the inked surface of the types, when the pull, which brings down the screw and forces the platten to produce the impression, is made by the pressman who works the lever,—to whom is facetiously given the title of “the practitioner at the bar.”.[38]
Johannes Gutenberg's work on the printing press began in approximately 1436 when he partnered with Andreas Dritzehn—a man who had previously instructed in gem-cutting—and Andreas Heilmann, owner of a paper mill.[39] However, it was not until a 1439 lawsuit against Gutenberg that an official record existed; witnesses' testimony discussed Gutenberg's types, an inventory of metals (including lead), and his type molds.[39]
Having previously worked as a professional goldsmith, Gutenberg made skillful use of the knowledge of metals he had learned as a craftsman. He was the first to make type from an alloy of lead, tin, and antimony, which was critical for producing durable type that produced high-quality printed books and proved to be much better suited for printing than all other known materials. To create these lead types, Gutenberg used what is considered one of his most ingenious inventions,[39] a special matrix enabling the quick and precise molding of new type blocks from a uniform template. His type case is estimated to have contained around 290 separate letter boxes, most of which were required for special characters, ligatures, punctuation marks, and so forth.[40]
Gutenberg is also credited with the introduction of an oil-based ink which was more durable than the previously used water-based inks. As printing material he used both paper and vellum (high-quality parchment). In the Gutenberg Bible, Gutenberg made a trial of colour printing for a few of the page headings, present only in some copies.[41] A later work, the Mainz Psalter of 1453, presumably designed by Gutenberg but published under the imprint of his successors Johann Fust and Peter Schöffer, had elaborate red and blue printed initials.[42]
The Printing Revolution occurred when the spread of the printing press facilitated the wide circulation of information and ideas, acting as an "agent of change" through the societies that it reached. (Eisenstein (1980))
The invention of mechanical movable type printing led to a huge increase of printing activities across Europe within only a few decades. From a single print shop in Mainz, Germany, printing had spread to no less than around 270 cities in Central, Western and Eastern Europe by the end of the 15th century.[44] As early as 1480, there were printers active in 110 different places in Germany, Italy, France, Spain, the Netherlands, Belgium, Switzerland, England, Bohemia and Poland.[5] From that time on, it is assumed that "the printed book was in universal use in Europe".[5]
In Italy, a center of early printing, print shops had been established in 77 cities and towns by 1500. At the end of the following century, 151 locations in Italy had seen at one time printing activities, with a total of nearly three thousand printers known to be active. Despite this proliferation, printing centres soon emerged; thus, one third of the Italian printers published in Venice.[45]
By 1500, the printing presses in operation throughout Western Europe had already produced more than twenty million copies.[5] In the following century, their output rose tenfold to an estimated 150 to 200 million copies.[5]
European printing presses of around 1600 were capable of producing between 1,500[46] and 3,600 impressions per workday.[3] By comparison, Far Eastern printing, where the back of the paper was manually rubbed to the page,[47] did not exceed an output of forty pages per day.[4]
Of Erasmus's work, at least 750,000 copies were sold during his lifetime alone (1469–1536).[48] In the early days of the Reformation, the revolutionary potential of bulk printing took princes and papacy alike by surprise. In the period from 1518 to 1524, the publication of books in Germany alone skyrocketed sevenfold; between 1518 and 1520, Luther's tracts were distributed in 300,000 printed copies.[49]
The rapidity of typographical text production, as well as the sharp fall in unit costs, led to the issuing of the first newspapers (see Relation) which opened up an entirely new field for conveying up-to-date information to the public.[50]
Incunabula are surviving pre-16th-century print works which are collected by many of the libraries in Europe and North America.[51]
The printing press was also a factor in the establishment of a community of scientists who could easily communicate their discoveries through the establishment of widely disseminated scholarly journals, helping to bring on the scientific revolution.[citation needed] Because of the printing press, authorship became more meaningful and profitable. It was suddenly important who had said or written what, and what the precise formulation and time of composition was. This allowed the exact citing of references, producing the rule, "One Author, one work (title), one piece of information" (Giesecke, 1989; 325). Before, the author was less important, since a copy of Aristotle made in Paris would not be exactly identical to one made in Bologna. For many works prior to the printing press, the name of the author has been entirely lost.[citation needed]
Because the printing process ensured that the same information fell on the same pages, page numbering, tables of contents, and indices became common, though they previously had not been unknown.[citation needed] The process of reading also changed, gradually moving over several centuries from oral readings to silent, private reading.[citation needed] Over the next 200 years, the wider availability of printed materials led to a dramatic rise in the adult literacy rate throughout Europe.[52]
The printing press was an important step towards the democratization of knowledge.[53][54] Within 50 or 60 years of the invention of the printing press, the entire classical canon had been reprinted and widely promulgated throughout Europe (Eisenstein, 1969; 52). More people had access to knowledge both new and old, more people could discuss these works. Book production became more commercialised, and the first copyright laws were passed.[55] On the other hand, the printing press was criticized for allowing the dissemination of information which may have been incorrect.[56][57]
A second outgrowth of this popularization of knowledge was the decline of Latin as the language of most published works, to be replaced by the vernacular language of each area, increasing the variety of published works. The printed word also helped to unify and standardize the spelling and syntax of these vernaculars, in effect 'decreasing' their variability. This rise in importance of national languages as opposed to pan-European Latin is cited[who?] as one of the causes of the rise of nationalism in Europe.
A third consequence of popularization of printing was on the economy. The printing press was associated with higher levels of city growth.[58] The publication of trade related manuals and books teaching techniques like double-entry bookkeeping increased the reliability of trade and led to the decline of merchant guilds and the rise of individual traders.[59]
At the dawn of the Industrial Revolution, the mechanics of the hand-operated Gutenberg-style press were still essentially unchanged, although new materials in its construction, amongst other innovations, had gradually improved its printing efficiency. By 1800, Lord Stanhope had built a press completely from cast iron which reduced the force required by 90%, while doubling the size of the printed area.[60] With a capacity of 480 pages per hour, the Stanhope press doubled the output of the old style press.[61] Nonetheless, the limitations inherent to the traditional method of printing became obvious.
Two ideas altered the design of the printing press radically: First, the use of steam power for running the machinery, and second the replacement of the printing flatbed with the rotary motion of cylinders. Both elements were for the first time successfully implemented by the German printer Friedrich Koenig in a series of press designs devised between 1802 and 1818.[62] Having moved to London in 1804, Koenig soon met Thomas Bensley and secured financial support for his project in 1807.[60] Patented in 1810, Koenig had designed a steam press "much like a hand press connected to a steam engine."[60] The first production trial of this model occurred in April 1811. He produced his machine with assistance from German engineer Andreas Friedrich Bauer.
Koenig and Bauer sold two of their first models to The Times in London in 1814, capable of 1,100 impressions per hour. The first edition so printed was on 28 November 1814. They went on to perfect the early model so that it could print on both sides of a sheet at once. This began the long process of making newspapers available to a mass audience (which in turn helped spread literacy), and from the 1820s changed the nature of book production, forcing a greater standardization in titles and other metadata. Their company Koenig & Bauer AG is still one of the world's largest manufacturers of printing presses today.
The steam-powered rotary printing press, invented in 1843 in the United States by Richard M. Hoe,[63] ultimately allowed millions of copies of a page in a single day. Mass production of printed works flourished after the transition to rolled paper, as continuous feed allowed the presses to run at a much faster pace. Hoe's original design operated at up to 2,000 revolutions per hour where each revolution deposited 4 page images giving the press a throughput of 8,000 pages per hour.[64] By 1891 The New York World and Philadelphia Item were operating presses producing either 90,000 4 page sheets per hour or 48,000 8 page sheets. [65]
Also, in the middle of the 19th century, there was a separate development of jobbing presses, small presses capable of printing small-format pieces such as billheads, letterheads, business cards, and envelopes. Jobbing presses were capable of quick set-up (average setup time for a small job was under 15 minutes) and quick production (even on treadle-powered jobbing presses it was considered normal to get 1,000 impressions per hour [iph] with one pressman, with speeds of 1,500 iph often attained on simple envelope work).[citation needed] Job printing emerged as a reasonably cost-effective duplicating solution for commerce at this time.
The table lists the maximum number of pages which the various press designs could print per hour.
Model of the Common Press, used from 1650 to 1850
Printing press from 1811
Stanhope press from 1842
Imprenta Press V John Sherwin from 1860
Reliance Printing Press from the 1890s
From old price tables it can be deduced that the capacity of a printing press around 1600, assuming a fifteen-hour workday, was between 3,200 and 3,600 impressions per day.
This method almost doubled the printing speed and produced more than 40 copies a day. Printing technology reached its peak at this point.
At the same time, then, as the printing press in the physical, technological sense was invented, 'the press' in the extended sense of the word also entered the historical stage. The phenomenon of publishing was born.
Gutenberg's invention took full advantage of the degree of abstraction in representing language forms that was offered by the alphabet and by the Western forms of script that were current in the fifteenth century.
The most momentous development in the history of the book until the invention of printing was the replacement of the roll by the codex; this we may define as a collection of sheets of any material, folded double and fastened together at the back or spine, and usually protected by covers. (p. 1)
In the West, the only inhibiting expense in the production of writings for an increasingly literate market was the manual labor of the scribe himself. With his mechanization by movable-type printing in the 1440s, the manufacture of paper, until then relatively confined, began to spread very widely. The Paper Revolution of the thirteenth century thus entered a new era.
Despite all that has been said above, even the strongest supporters of papyrus would not deny that parchment of good quality is the finest writing material ever devised by man. It is immensely strong, remains flexible indefinitely under normal conditions, does not deteriorate with age, and possesses a smooth, even surface which is both pleasant to the eye and provides unlimited scope for the finest writing and illumination.
The outstanding difference between the two ends of the Old World was the absence of screw-presses from China, but this is only another manifestation of the fact that this basic mechanism was foreign to that culture.
In East Asia, both woodblock and movable type printing were manual reproduction techniques, that is hand printing.
Chinese paper was suitable only for calligraphy or block-printing; there were no screw-based presses in the east, because they were not wine-drinkers, didn’t have olives, and used other means to dry their paper.
The second necessary element was the concept of the printing press itself, an idea that had never been conceived in the Far East.
On the effects of the printing press
Technology of printing
en/2713.html.txt
ADDED
@@ -0,0 +1,184 @@
The Inca Empire (Quechua: Tawantinsuyu, lit. "The Four Regions"[4]), also known as the Incan Empire and the Inka Empire, was the largest empire in pre-Columbian America.[5] The administrative, political and military center of the empire was located in the city of Cusco. The Inca civilization arose from the Peruvian highlands sometime in the early 13th century. Its last stronghold was conquered by the Spanish in 1572.
From 1438 to 1533, the Incas incorporated a large portion of western South America, centered on the Andean Mountains, using conquest and peaceful assimilation, among other methods. At its largest, the empire joined Peru, western Ecuador, western and south central Bolivia, northwest Argentina, a large portion of what is today Chile, and the southwesternmost tip of Colombia into a state comparable to the historical empires of Eurasia. Its official language was Quechua.[6] Many local forms of worship persisted in the empire, most of them concerning local sacred Huacas, but the Inca leadership encouraged the sun worship of Inti – their sun god – and imposed its sovereignty above other cults such as that of Pachamama.[7] The Incas considered their king, the Sapa Inca, to be the "son of the sun."[8]
The Inca Empire was unusual in that it lacked many features associated with civilization in the Old World. Anthropologist Gordon McEwan wrote that:[9]
The Incas lacked the use of wheeled vehicles. They lacked animals to ride and draft animals that could pull wagons and plows... [They] lacked the knowledge of iron and steel... Above all, they lacked a system of writing... Despite these supposed handicaps, the Incas were still able to construct one of the greatest imperial states in human history.
Notable features of the Inca Empire include its monumental architecture, especially stonework, extensive road network reaching all corners of the empire, finely-woven textiles, use of knotted strings (quipu) for record keeping and communication, agricultural innovations in a difficult environment, and the organization and management fostered or imposed on its people and their labor.
The Incan economy has been described in contradictory ways by scholars:[10]
... feudal, slave, socialist (here one may choose between socialist paradise or socialist tyranny)
The Inca Empire functioned largely without money and without markets. Instead, exchange of goods and services was based on reciprocity between individuals and among individuals, groups, and Inca rulers. "Taxes" consisted of a labour obligation of a person to the Empire. The Inca rulers (who theoretically owned all the means of production) reciprocated by granting access to land and goods and providing food and drink in celebratory feasts for their subjects.[11]
The Inca referred to their empire as Tawantinsuyu,[4] "the four suyu". In Quechua, tawa is four and -ntin is a suffix naming a group, so that a tawantin is a quartet, a group of four things taken together, in this case the four suyu ("regions" or "provinces") whose corners met at the capital. The four suyu were: Chinchaysuyu (north), Antisuyu (east; the Amazon jungle), Qullasuyu (south) and Kuntisuyu (west). The name Tawantinsuyu was, therefore, a descriptive term indicating a union of provinces. The Spanish transliterated the name as Tahuatinsuyo or Tahuatinsuyu.
The term Inka means "ruler" or "lord" in Quechua and was used to refer to the ruling class or the ruling family.[12] The Incas were a very small percentage of the total population of the empire, probably numbering only 15,000 to 40,000, but ruling a population of around 10 million people.[13] The Spanish adopted the term (transliterated as Inca in Spanish) as an ethnic term referring to all subjects of the empire rather than simply the ruling class. As such, the name Imperio inca ("Inca Empire") referred to the nation that they encountered and subsequently conquered.
The Inca Empire was the last chapter of thousands of years of Andean civilizations. The Andean civilization was one of five civilizations in the world deemed by scholars to be "pristine", that is indigenous and not derivative from other civilizations.[14]
The Inca Empire was preceded by two large-scale empires in the Andes: the Tiwanaku (c. 300–1100 AD), based around Lake Titicaca and the Wari or Huari (c. 600–1100 AD) centered near the city of Ayacucho. The Wari occupied the Cuzco area for about 400 years. Thus, many of the characteristics of the Inca Empire derived from earlier multi-ethnic and expansive Andean cultures.[15]
Carl Troll has argued that the development of the Inca state in the central Andes was aided by conditions that allow for the elaboration of the staple food chuño. Chuño, which can be stored for long periods, is made of potato dried at the freezing temperatures that are common at nighttime in the southern Peruvian highlands. Such a link between the Inca state and chuño may be questioned, as potatoes and other crops such as maize can also be dried with only sunlight.[16] Troll also argued that llamas, the Inca's pack animal, are found in their largest numbers in this very same region.[16] It is worth noting that the maximum extent of the Inca Empire roughly coincided with the greatest distribution of llamas and alpacas in Pre-Hispanic America.[17] The link between the Andean biomes of puna and páramo, pastoralism and the Inca state remains a matter of research.[18] As a third point, Troll pointed to irrigation technology as advantageous to Inca state-building.[18] While Troll theorized environmental influences on the Inca Empire, he opposed environmental determinism, arguing that culture lay at the core of the Inca civilization.[18]
The Inca people were a pastoral tribe in the Cusco area around the 12th century. Incan oral history tells an origin story of three caves. The center cave at Tampu T'uqu (Tambo Tocco) was named Qhapaq T'uqu ("principal niche", also spelled Capac Tocco). The other caves were Maras T'uqu (Maras Tocco) and Sutiq T'uqu (Sutic Tocco).[19] Four brothers and four sisters stepped out of the middle cave. They were: Ayar Manco, Ayar Cachi, Ayar Awqa (Ayar Auca) and Ayar Uchu; and Mama Ocllo, Mama Raua, Mama Huaco and Mama Qura (Mama Cora). Out of the side caves came the people who were to be the ancestors of all the Inca clans.
Ayar Manco carried a magic staff made of the finest gold. Where this staff landed, the people would live. They traveled for a long time. On the way, Ayar Cachi boasted about his strength and power. His siblings tricked him into returning to the cave to get a sacred llama. When he went into the cave, they trapped him inside to get rid of him.
Ayar Uchu decided to stay on the top of the cave to look over the Inca people. The minute he proclaimed that, he turned to stone. They built a shrine around the stone and it became a sacred object. Ayar Auca grew tired of all this and decided to travel alone. Only Ayar Manco and his four sisters remained.
Finally, they reached Cusco. The staff sank into the ground. Before they arrived, Mama Ocllo had already borne Ayar Manco a child, Sinchi Roca. The people who were already living in Cusco fought hard to keep their land, but Mama Huaca was a good fighter. When the enemy attacked, she threw her bolas (several stones tied together that spun through the air when thrown) at a soldier (gualla) and killed him instantly. The other people became afraid and ran away.
After that, Ayar Manco became known as Manco Cápac, the founder of the Inca. It is said that he and his sisters built the first Inca homes in the valley with their own hands. When the time came, Manco Cápac turned to stone like his brothers before him. His son, Sinchi Roca, became the second emperor of the Inca.[20]
Under the leadership of Manco Cápac, the Inca formed the small city-state Kingdom of Cusco (Quechua Qusqu', Qosqo). In 1438, they began a far-reaching expansion under the command of Sapa Inca (paramount leader) Pachacuti-Cusi Yupanqui, whose name literally meant "earth-shaker". The name of Pachacuti was given to him after he conquered the Tribe of Chancas (modern Apurímac). During his reign, he and his son Tupac Yupanqui brought much of the modern-day territory of Peru under Inca control.[21]
Pachacuti reorganized the kingdom of Cusco into the Tahuantinsuyu, which consisted of a central government with the Inca at its head and four provincial governments with strong leaders: Chinchasuyu (NW), Antisuyu (NE), Kuntisuyu (SW) and Qullasuyu (SE).[22] Pachacuti is thought to have built Machu Picchu, either as a family home or summer retreat, although it may have been an agricultural station.[23]
Pachacuti sent spies to regions he wanted in his empire and they brought to him reports on political organization, military strength and wealth. He then sent messages to their leaders extolling the benefits of joining his empire, offering them presents of luxury goods such as high quality textiles and promising that they would be materially richer as his subjects.
Most accepted the rule of the Inca as a fait accompli and acquiesced peacefully. Refusal to accept Inca rule resulted in military conquest. Following conquest the local rulers were executed. The ruler's children were brought to Cusco to learn about Inca administration systems, then return to rule their native lands. This allowed the Inca to indoctrinate them into the Inca nobility and, with luck, marry their daughters into families at various corners of the empire.
Traditionally the son of the Inca ruler led the army. Pachacuti's son Túpac Inca Yupanqui began conquests to the north in 1463 and continued them as Inca ruler after Pachacuti's death in 1471. Túpac Inca's most important conquest was the Kingdom of Chimor, the Inca's only serious rival for the Peruvian coast. Túpac Inca's empire then stretched north into modern-day Ecuador and Colombia.
Túpac Inca's son Huayna Cápac added a small portion of land to the north in modern-day Ecuador. At its height, the Inca Empire included Peru, western and south central Bolivia, southwest Ecuador and a large portion of what is today Chile, north of the Maule River. Traditional historiography claims the advance south halted after the Battle of the Maule where they met determined resistance from the Mapuche.[24] This view is challenged by historian Osvaldo Silva who argues instead that it was the social and political framework of the Mapuche that posed the main difficulty in imposing imperial rule.[24] Silva does accept that the battle of the Maule was a stalemate, but argues the Incas lacked incentives for conquest they had had when fighting more complex societies such as the Chimú Empire.[24] Silva also disputes the date given by traditional historiography for the battle: the late 15th century during the reign of Topa Inca Yupanqui (1471–93).[24] Instead, he places it in 1532 during the Inca Civil War.[24] Nevertheless, Silva agrees on the claim that the bulk of the Incan conquests were made during the late 15th century.[24] At the time of the Incan Civil War an Inca army was, according to Diego de Rosales, subduing a revolt among the Diaguitas of Copiapó and Coquimbo.[24]
The empire's push into the Amazon Basin near the Chinchipe River was stopped by the Shuar in 1527.[25] The empire extended into corners of Argentina and Colombia. However, most of the southern portion of the Inca empire, the portion denominated as Qullasuyu, was located in the Altiplano.
The Inca Empire was an amalgamation of languages, cultures and peoples. The components of the empire were not all uniformly loyal, nor were the local cultures all fully integrated. The Inca empire as a whole had an economy based on exchange and taxation of luxury goods and labour. The following quote describes a method of taxation:
For as is well known to all, not a single village of the highlands or the plains failed to pay the tribute levied on it by those who were in charge of these matters. There were even provinces where, when the natives alleged that they were unable to pay their tribute, the Inca ordered that each inhabitant should be obliged to turn in every four months a large quill full of live lice, which was the Inca's way of teaching and accustoming them to pay tribute.[26]
Spanish conquistadors led by Francisco Pizarro and his brothers explored south from what is today Panama, reaching Inca territory by 1526.[27] It was clear that they had reached a wealthy land with prospects of great treasure, and after another expedition in 1529 Pizarro traveled to Spain and received royal approval to conquer the region and be its viceroy. This approval was received as detailed in the following quote: "In July 1529 the Queen of Spain signed a charter allowing Pizarro to conquer the Incas. Pizarro was named governor and captain of all conquests in Peru, or New Castile, as the Spanish now called the land."[28]
When the conquistadors returned to Peru in 1532, a war of succession between the sons of Sapa Inca Huayna Capac, Huáscar and Atahualpa, and unrest among newly conquered territories weakened the empire. Perhaps more importantly, smallpox, influenza, typhus and measles had spread from Central America.
The forces led by Pizarro consisted of 168 men, one cannon, and 27 horses. Conquistadors ported lances, arquebuses, steel armor and long swords. In contrast, the Inca used weapons made out of wood, stone, copper and bronze, while using an Alpaca fiber based armor, putting them at significant technological disadvantage—none of their weapons could pierce the Spanish steel armor. In addition, due to the absence of horses in the Americas, the Inca did not develop tactics to fight cavalry. However, the Inca were still effective warriors, being able to successfully fight the Mapuche, which later would strategically defeat the Spanish as they expanded further south.
The first engagement between the Inca and the Spanish was the Battle of Puná, near present-day Guayaquil, Ecuador, on the Pacific Coast; Pizarro then founded the city of Piura in July 1532. Hernando de Soto was sent inland to explore the interior and returned with an invitation to meet the Inca, Atahualpa, who had defeated his brother in the civil war and was resting at Cajamarca with his army of 80,000 troops, that were at the moment armed only with hunting tools (knives and lassos for hunting llamas).
Pizarro and some of his men, most notably a friar named Vincente de Valverde, met with the Inca, who had brought only a small retinue. The Inca offered them ceremonial chicha in a golden cup, which the Spanish rejected. The Spanish interpreter, Friar Vincente, read the "Requerimiento" that demanded that he and his empire accept the rule of King Charles I of Spain and convert to Christianity. Atahualpa dismissed the message and asked them to leave. After this, the Spanish began their attack against the mostly unarmed Inca, captured Atahualpa as hostage, and forced the Inca to collaborate.
Atahualpa offered the Spaniards enough gold to fill the room he was imprisoned in and twice that amount of silver. The Inca fulfilled this ransom, but Pizarro deceived them, refusing to release the Inca afterwards. During Atahualpa's imprisonment Huáscar was assassinated elsewhere. The Spaniards maintained that this was at Atahualpa's orders; this was used as one of the charges against Atahualpa when the Spaniards finally executed him, in August 1533.[29]
Although "defeat" often implies an unwanted loss in battle, much of the Inca elite "actually welcomed the Spanish invaders as liberators and willingly settled down with them to share rule of Andean farmers and miners."[30]
The Spanish installed Atahualpa's brother Manco Inca Yupanqui in power; for some time Manco cooperated with the Spanish while they fought to put down resistance in the north. Meanwhile, an associate of Pizarro, Diego de Almagro, attempted to claim Cusco. Manco tried to use this intra-Spanish feud to his advantage, recapturing Cusco in 1536, but the Spanish retook the city afterwards. Manco Inca then retreated to the mountains of Vilcabamba and established the small Neo-Inca State, where he and his successors ruled for another 36 years, sometimes raiding the Spanish or inciting revolts against them. In 1572 the last Inca stronghold was conquered and the last ruler, Túpac Amaru, Manco's son, was captured and executed.[31] This ended resistance to the Spanish conquest under the political authority of the Inca state.
After the fall of the Inca Empire many aspects of Inca culture were systematically destroyed, including their sophisticated farming system, known as the vertical archipelago model of agriculture.[32] Spanish colonial officials used the Inca mita corvée labor system for colonial aims, sometimes brutally. One member of each family was forced to work in the gold and silver mines, the foremost of which was the titanic silver mine at Potosí. When a family member died, which would usually happen within a year or two, the family was required to send a replacement.[citation needed]
The effects of smallpox on the Inca empire were even more devastating. Beginning in Colombia, smallpox spread rapidly before the Spanish invaders first arrived in the empire. The spread was probably aided by the efficient Inca road system. Smallpox was only the first epidemic.[33] Other diseases, including a probable Typhus outbreak in 1546, influenza and smallpox together in 1558, smallpox again in 1589, diphtheria in 1614, and measles in 1618, all ravaged the Inca people.
The number of people inhabiting Tawantinsuyu at its peak is uncertain, with estimates ranging from 4–37 million. Most population estimates are in the range of 6 to 14 million. In spite of the fact that the Inca kept excellent census records using their quipus, knowledge of how to read them was lost as almost all fell into disuse and disintegrated over time or were destroyed by the Spaniards.[34]
The empire was extremely linguistically diverse. Some of the most important languages were Quechua, Aymara, Puquina and Mochica, respectively mainly spoken in the Central Andes, the Altiplano or (Qullasuyu), the south Peruvian coast (Kuntisuyu), and the area of the north Peruvian coast (Chinchaysuyu) around Chan Chan, today Trujillo. Other languages included Quignam, Jaqaru, Leco, Uru-Chipaya languages, Kunza, Humahuaca, Cacán, Mapudungun, Culle, Chachapoya, Catacao languages, Manta, and Barbacoan languages, as well as numerous Amazonian languages on the frontier regions. The exact linguistic topography of the pre-Columbian and early colonial Andes remains incompletely understood, owing to the extinction of several languages and the loss of historical records.
In order to manage this diversity, the Inca lords promoted the usage of Quechua, especially the variety of modern-day Lima,[35] as the Qhapaq Runasimi ("great language of the people"), or the official language/lingua franca. Defined by mutual intelligibility, Quechua is actually a family of languages rather than one single language, parallel to the Romance or Slavic languages in Europe. Most communities within the empire, even those resistant to Inca rule, learned to speak a variety of Quechua (forming new regional varieties with distinct phonetics) in order to communicate with the Inca lords and mitma colonists, as well as the wider integrating society, but largely retained their native languages as well. The Incas also had their own ethnic language, referred to as Qhapaq simi ("royal language"), which is thought to have been closely related to or a dialect of Puquina, which appears to have been the official language of the former Tiwanaku Empire, from which the Incas claimed descent, making Qhapaq simi a source of prestige for them. The split between Qhapaq simi and Qhapaq Runasimi also exemplifies the larger split between hanan and hurin (upper and lower) society in general.
There are several common misconceptions about the history of Quechua, as it is frequently identified as the "Inca language". Quechua did not originate with the Incas, had been a lingua franca in multiple areas before the Inca expansions, was diverse before the rise of the Incas, and it was not the native or original language of the Incas. In addition, the main official language of the Inca Empire was the coastal Quechua variety, native to modern Lima, not the Cusco dialect. The pre-Inca Chincha Kingdom, with whom the Incas struck an alliance, had made this variety into a local prestige language by their extensive trading activities. The Peruvian coast was also the most populous and economically active region of the Inca Empire, and employing coastal Quechua offered an alternative to neighboring Mochica, the language of the rival state of Chimu. Trade had also been spreading Quechua northwards before the Inca expansions, towards Cajamarca and Ecuador, and was likely the official language of the older Wari Empire. However, the Incas have left an impressive linguistic legacy, in that they introduced Quechua to many areas where it is still widely spoken today, including Ecuador, southern Bolivia, southern Colombia, and parts of the Amazon basin. The Spanish conquerors continued the official usage of Quechua during the early colonial period, and transformed it into a literary language.[36]
The Incas were not known to develop a written form of language; however, they visually recorded narratives through paintings on vases and cups (qirus).[37] These paintings are usually accompanied by geometric patterns known as toqapu, which are also found in textiles. Researchers have speculated that toqapu patterns could have served as a form of written communication (e.g.: heraldry, or glyphs), however this remains unclear.[38] The Incas also kept records by using quipus.
The high infant mortality rates that plagued the Inca Empire caused all newborn infants to be given the term ‘wawa’ at birth. Most families did not invest very much in their child until the child reached the age of two or three. Once the child reached the age of three, a "coming of age" ceremony occurred, called the rutuchikuy. For the Incas, this ceremony indicated that the child had entered the stage of "ignorance". During this ceremony, the family would invite all relatives to their house for food and dance, and then each member of the family would receive a lock of hair from the child. After each family member had received a lock, the father would shave the child's head. This stage of life was characterized by "ignorance, inexperience, and lack of reason, a condition that the child would overcome with time."[39] For Incan society, in order to advance from the stage of ignorance to development, the child had to learn the roles associated with their gender.
The next important ritual was to celebrate the maturity of a child. Unlike the coming of age ceremony, the celebration of maturity signified the child's sexual potency. This celebration of puberty was called warachikuy for boys and qikuchikuy for girls. The warachikuy ceremony included dancing, fasting, tasks to display strength, and family ceremonies. The boy would also be given new clothes and taught how to act as an unmarried man. The qikuchikuy signified the onset of menstruation, upon which the girl would go into the forest alone and return only once the bleeding had ended. In the forest she would fast, and, once returned, the girl would be given a new name, adult clothing, and advice. This "folly" stage of life was the time young adults were allowed to have sex without being a parent.[39]
Between the ages of 20 and 30, people were considered young adults, "ripe for serious thought and labor."[39] Young adults were able to retain their youthful status by living at home and assisting in their home community. Young adults only reached full maturity and independence once they had married.
At the end of life, the terms for men and women denote loss of sexual vitality and humanity. Specifically, the "decrepitude" stage signifies the loss of mental well-being and further physical decline.
In the Incan Empire, the age of marriage differed for men and women: men typically married at the age of 20, while women usually got married about four years earlier at the age of 16.[40] Men who were highly ranked in society could have multiple wives, but those lower in the ranks could only take a single wife.[41] Marriages were typically within classes and resembled a more business-like agreement. Once married, the women were expected to cook, collect food and watch over the children and livestock.[40] Girls and mothers would also work around the house to keep it orderly to please the public inspectors.[42] These duties remained the same even after wives became pregnant and with the added responsibility of praying and making offerings to Kanopa, who was the god of pregnancy.[40] It was typical for marriages to begin on a trial basis with both men and women having a say in the longevity of the marriage. If the man felt that it wouldn't work out or if the woman wanted to return to her parents’ home the marriage would end. Once the marriage was final, the only way the two could be divorced was if they did not have a child together.[40] Marriage within the Empire was crucial for survival. A family was considered disadvantaged if there was not a married couple at the center because everyday life centered around the balance of male and female tasks.[43]
According to some historians, such as Terence N. D'Altroy, male and female roles were considered equal in Inca society. The "indigenous cultures saw the two genders as complementary parts of a whole."[43] In other words, there was not a hierarchical structure in the domestic sphere for the Incas. Within the domestic sphere, women were known as the weavers. Women's everyday tasks included: spinning, watching the children, weaving cloth, cooking, brewing chicha, preparing fields for cultivation, planting seeds, bearing children, harvesting, weeding, hoeing, herding, and carrying water.[44] Men, on the other hand, "weeded, plowed, participated in combat, helped in the harvest, carried firewood, built houses, herded llama and alpaca, and spun and wove when necessary".[44] This relationship between the genders may have been complementary. Unsurprisingly, onlooking Spaniards believed women were treated like slaves, because women did not work in Spanish society to the same extent, and certainly did not work in fields.[45] Women were sometimes allowed to own land and herds because inheritance was passed down from both the mother's and father's side of the family.[46] Kinship within the Inca society followed a parallel line of descent. In other words, women descended from women and men descended from men. Due to the parallel descent, a woman had access to land and other necessities through her mother.[44]
Inca myths were transmitted orally until early Spanish colonists recorded them; however, some scholars claim that they were recorded on quipus, Andean knotted string records.[47]
The Inca believed in reincarnation.[48] After death, the passage to the next world was fraught with difficulties. The spirit of the dead, camaquen, would need to follow a long road and during the trip the assistance of a black dog that could see in the dark was required. Most Incas imagined the after world to be like an earthly paradise with flower-covered fields and snow-capped mountains.
It was important to the Inca that they not die as a result of burning or that the body of the deceased not be incinerated. Burning would cause their vital force to disappear and threaten their passage to the after world. Those who obeyed the Inca moral code – ama suwa, ama llulla, ama quella (do not steal, do not lie, do not be lazy) – "went to live in the Sun's warmth while others spent their eternal days in the cold earth".[49] The Inca nobility practiced cranial deformation.[50] They wrapped tight cloth straps around the heads of newborns to shape their soft skulls into a more conical form, thus distinguishing the nobility from other social classes.
The Incas made human sacrifices. As many as 4,000 servants, court officials, favorites and concubines were killed upon the death of the Inca Huayna Capac in 1527.[51] The Incas performed child sacrifices around important events, such as the death of the Sapa Inca or during a famine. These sacrifices were known as qhapaq hucha.[52]
The Incas were polytheists who worshipped many gods. These included:
The Inca Empire employed central planning. The Inca Empire traded with outside regions, although they did not operate a substantial internal market economy. While axe-monies were used along the northern coast, presumably by the provincial mindaláe trading class,[53] most households in the empire lived in a traditional economy in which households were required to pay taxes, usually in the form of the mit'a corvée labor, and military obligations,[54] though barter (or trueque) was present in some areas.[55] In return, the state provided security, food in times of hardship through the supply of emergency resources, agricultural projects (e.g. aqueducts and terraces) to increase productivity and occasional feasts. While mit'a was used by the state to obtain labor, individual villages had a pre-inca system of communal work, known as mink'a. This system survives to the modern day, known as mink'a or faena. The economy rested on the material foundations of the vertical archipelago, a system of ecological complementarity in accessing resources[56] and the cultural foundation of ayni, or reciprocal exchange.[57][58]
The Sapa Inca was conceptualized as divine and was effectively head of the state religion. The Willaq Umu (or Chief Priest) was second to the emperor. Local religious traditions continued and in some cases such as the Oracle at Pachacamac on the Peruvian coast, were officially venerated. Following Pachacuti, the Sapa Inca claimed descent from Inti, who placed a high value on imperial blood; by the end of the empire, it was common to incestuously wed brother and sister. He was "son of the sun," and his people the intip churin, or "children of the sun," and both his right to rule and mission to conquer derived from his holy ancestor. The Sapa Inca also presided over ideologically important festivals, notably during the Inti Raymi, or "Sunfest" attended by soldiers, mummified rulers, nobles, clerics and the general population of Cusco beginning on the June solstice and culminating nine days later with the ritual breaking of the earth using a foot plow by the Inca. Moreover, Cusco was considered cosmologically central, loaded as it was with huacas and radiating ceque lines and geographic center of the Four-Quarters; Inca Garcilaso de la Vega called it "the navel of the universe".[59][60][61][62]
The Inca Empire was a federalist system consisting of a central government with the Inca at its head and four-quarters, or suyu: Chinchay Suyu (NW), Anti Suyu (NE), Kunti Suyu (SW) and Qulla Suyu (SE). The four corners of these quarters met at the center, Cusco. These suyu were likely created around 1460 during the reign of Pachacuti before the empire reached its largest territorial extent. At the time the suyu were established they were roughly of equal size and only later changed their proportions as the empire expanded north and south along the Andes.[63]
Cusco was likely not organized as a wamani, or province. Rather, it was probably somewhat akin to a modern federal district, like Washington, DC or Mexico City. The city sat at the center of the four suyu and served as the preeminent center of politics and religion. While Cusco was essentially governed by the Sapa Inca, his relatives and the royal panaqa lineages, each suyu was governed by an Apu, a term of esteem used for men of high status and for venerated mountains. Both Cusco as a district and the four suyu as administrative regions were grouped into upper hanan and lower hurin divisions. As the Inca did not have written records, it is impossible to exhaustively list the constituent wamani. However, colonial records allow us to reconstruct a partial list. There were likely more than 86 wamani, with more than 48 in the highlands and more than 38 on the coast.[64][65][66]
The most populous suyu was Chinchaysuyu, which encompassed the former Chimu empire and much of the northern Andes. At its largest extent, it extended through much of modern Ecuador and into modern Colombia.
The largest suyu by area was Qullasuyu, named after the Aymara-speaking Qulla people. It encompassed the Bolivian Altiplano and much of the southern Andes, reaching Argentina and as far south as the Maipo or Maule river in Central Chile.[67] Historian José Bengoa singled out Quillota as likely being the foremost Inca settlement in Chile.[68]
The second smallest suyu, Antisuyu, was northeast of Cusco in the high Andes. Its name is the root of the word "Andes."[69]
Kuntisuyu was the smallest suyu, located along the southern coast of modern Peru, extending into the highlands towards Cusco.[70]
The Inca state had no separate judiciary or codified laws. Customs, expectations and traditional local power holders governed behavior. The state had legal force, such as through tokoyrikoq (lit. "he who sees all"), or inspectors. The highest such inspector, typically a blood relative to the Sapa Inca, acted independently of the conventional hierarchy, providing a point of view for the Sapa Inca free of bureaucratic influence.[71]
The Inca had three moral precepts that governed their behavior:
Colonial sources are not entirely clear or in agreement about Inca government structure, such as exact duties and functions of government positions. But the basic structure can be broadly described. The top was the Sapa Inca. Below that may have been the Willaq Umu, literally the "priest who recounts", the High Priest of the Sun.[72] However, beneath the Sapa Inca also sat the Inkap rantin, who was a confidant and assistant to the Sapa Inca, perhaps similar to a Prime Minister.[73] Starting with Topa Inca Yupanqui, a "Council of the Realm" was composed of 16 nobles: 2 from hanan Cusco; 2 from hurin Cusco; 4 from Chinchaysuyu; 2 from Cuntisuyu; 4 from Collasuyu; and 2 from Antisuyu. This weighting of representation balanced the hanan and hurin divisions of the empire, both within Cusco and within the Quarters (hanan suyukuna and hurin suyukuna).[74]
While provincial bureaucracy and government varied greatly, the basic organization was decimal. Taxpayers – male heads of household of a certain age range – were organized into corvée labor units (often doubling as military units) that formed the state's muscle as part of mit'a service. Each unit of more than 100 taxpayers was headed by a kuraka, while smaller units were headed by a kamayuq, a lower, non-hereditary status. However, while kuraka status was hereditary and typically held for life, a kuraka's position in the hierarchy was subject to change based on the privileges of superiors in the hierarchy; a pachaka kuraka could be appointed to the position by a waranqa kuraka. Furthermore, one kuraka in each decimal level could serve as the head of one of the nine groups at a lower level, so that a pachaka kuraka might also be a waranqa kuraka, in effect directly responsible for one unit of 100 taxpayers and less directly responsible for nine other such units.[75][76][77]
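A minimal sketch of this decimal nesting, assuming only the two unit sizes named above (a pachaka of roughly 100 taxpayers and a waranqa of roughly 1,000), might look as follows; the class and field names are hypothetical illustrations, not reconstructions of Inca records.

# Hypothetical illustration of the decimal hierarchy: ten pachaka units of 100
# taxpayers nest inside one waranqa of 1,000, and the waranqa kuraka may also
# directly head one of those ten pachaka units.
from dataclasses import dataclass, field

@dataclass
class Pachaka:
    kuraka: str          # head of a unit of ~100 taxpayers
    taxpayers: int = 100

@dataclass
class Waranqa:
    kuraka: str                                   # head of ~1,000 taxpayers
    pachakas: list = field(default_factory=list)  # the ten constituent units

    def total_taxpayers(self) -> int:
        return sum(p.taxpayers for p in self.pachakas)

units = [Pachaka(kuraka=f"pachaka kuraka {i}") for i in range(1, 10)]
units.append(Pachaka(kuraka="waranqa kuraka, doubling as a pachaka head"))
print(Waranqa(kuraka="waranqa kuraka", pachakas=units).total_taxpayers())  # 1000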
Francisco Pizarro
Architecture was the most important of the Incan arts, with textiles reflecting architectural motifs. The most notable example is Machu Picchu, which was constructed by Inca engineers. The prime Inca structures were made of stone blocks that fit together so well that a knife could not be fitted through the stonework. These constructs have survived for centuries, with no use of mortar to sustain them.
This process was first used on a large scale by the Pucara (c. 300 BC–AD 300) peoples to the south in the Lake Titicaca region and later in the city of Tiwanaku (c. AD 400–1100) in present-day Bolivia. The rocks were sculpted to fit together exactly by repeatedly lowering a rock onto another and carving away any sections on the lower rock where the dust was compressed. The tight fit and the concavity on the lower rocks made them extraordinarily stable, despite the ongoing challenge of earthquakes and volcanic activity.
Physical measures used by the Inca were based on human body parts. Units included fingers, the distance from thumb to forefinger, palms, cubits and wingspans. The most basic distance unit was thatkiy or thatki, or one pace. The next largest unit was reported by Cobo to be the topo or tupu, measuring 6,000 thatkiys, or about 7.7 km (4.8 mi); careful study has shown that a range of 4.0 to 6.3 km (2.5 to 3.9 mi) is likely. Next was the wamani, composed of 30 topos (roughly 232 km or 144 mi). To measure area, 25 by 50 wingspans were used, reckoned in topos (roughly 3,280 km2 or 1,270 sq mi). It seems likely that distance was often interpreted as one day's walk; the distance between tambo way-stations varies widely in terms of distance, but far less in terms of time to walk that distance.[80][81]
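The unit relationships above reduce to simple multiplication; the sketch below works through them in Python, treating the quoted equivalences (6,000 thatkiys to a topo of roughly 7.7 km, 30 topos to a wamani) as approximate constants rather than definitive values.

# Approximate conversions implied by the Inca distance units described above.
TOPO_KM = 7.7              # Cobo's reported topo, roughly 7.7 km
THATKIYS_PER_TOPO = 6_000
TOPOS_PER_WAMANI = 30

def wamani_to_km(wamani: float) -> float:
    # A wamani is 30 topos of roughly 7.7 km each.
    return wamani * TOPOS_PER_WAMANI * TOPO_KM

print(round(wamani_to_km(1)))                        # ~231 km, close to the ~232 km quoted
print(round(TOPO_KM * 1000 / THATKIYS_PER_TOPO, 2))  # implied pace of ~1.28 m per thatkiy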
Inca calendars were strongly tied to astronomy. Inca astronomers understood equinoxes, solstices and zenith passages, along with the Venus cycle. They could not, however, predict eclipses. The Inca calendar was essentially lunisolar, as two calendars were maintained in parallel, one solar and one lunar. As 12 lunar months fall 11 days short of a full 365-day solar year, those in charge of the calendar had to adjust every winter solstice. Each lunar month was marked with festivals and rituals.[82] Apparently, the days of the week were not named and days were not grouped into weeks. Similarly, months were not grouped into seasons. Time during a day was not measured in hours or minutes, but in terms of how far the sun had travelled or in how long it had taken to perform a task.[83]
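The drift that forced those solstice adjustments is simple arithmetic, sketched below; the mean synodic month length (about 29.53 days) is a standard astronomical value assumed here rather than a figure from the text.

# Why twelve lunar months fall roughly eleven days short of the solar year.
SYNODIC_MONTH_DAYS = 29.53   # mean lunar month (standard value, assumed here)
SOLAR_YEAR_DAYS = 365.0

lunar_year = 12 * SYNODIC_MONTH_DAYS
print(round(lunar_year, 1))                    # ~354.4 days
print(round(SOLAR_YEAR_DAYS - lunar_year, 1))  # ~10.6 days of drift per year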
The sophistication of Inca administration, calendrics and engineering required facility with numbers. Numerical information was stored in the knots of quipu strings, allowing for compact storage of large numbers.[84][85] These numbers were stored in base-10 digits, the same base used by the Quechua language[86] and in administrative and military units.[76] These numbers, stored in quipu, could be calculated on yupanas, grids with squares of positionally varying mathematical values, perhaps functioning as an abacus.[87] Calculation was facilitated by moving piles of tokens, seeds or pebbles between compartments of the yupana. It is likely that Inca mathematics at least allowed division of integers into integers or fractions and multiplication of integers and fractions.[88]
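As a minimal sketch of the decimal principle described here, the code below splits a number into base-10 digits, one group per place value, the way a quantity could be laid out as knot groups along a cord or token piles on a yupana; it illustrates positional notation only and is not a reconstruction of actual quipu knot conventions.

# Positional base-10 storage, the principle behind quipu number records.
def decimal_digits(n: int) -> list[int]:
    # Split a non-negative integer into its base-10 digits, most significant first.
    return [int(d) for d in str(n)]

def from_digits(digits: list[int]) -> int:
    # Recombine the digit groups into the stored number.
    value = 0
    for d in digits:
        value = value * 10 + d
    return value

knot_groups = decimal_digits(4356)
print(knot_groups)               # [4, 3, 5, 6] -> four knot groups on one cord
print(from_digits(knot_groups))  # 4356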
According to mid-17th-century Jesuit chronicler Bernabé Cobo,[89] the Inca designated officials to perform accounting-related tasks. These officials were called quipo camayos. Study of khipu sample VA 42527 (Museum für Völkerkunde, Berlin)[90] revealed that the numbers arranged in calendrically significant patterns were used for agricultural purposes in the "farm account books" kept by the khipukamayuq (accountant or warehouse keeper) to facilitate the closing of accounting books.[91]
Ceramics were painted using the polychrome technique portraying numerous motifs including animals, birds, waves, felines (popular in the Chavin culture) and geometric patterns found in the Nazca style of ceramics. In a culture without a written language, ceramics portrayed the basic scenes of everyday life, including the smelting of metals, relationships and scenes of tribal warfare. The most distinctive Inca ceramic objects are the Cusco bottles or "aryballos".[92] Many of these pieces are on display in Lima in the Larco Archaeological Museum and the National Museum of Archaeology, Anthropology and History.
Almost all of the gold and silver work of the Incan empire was melted down by the conquistadors, and shipped back to Spain.[93]
The Inca recorded information on assemblages of knotted strings, known as Quipu, although they can no longer be decoded. Originally it was thought that Quipu were used only as mnemonic devices or to record numerical data. Quipus are also believed to record history and literature.[94]
The Inca made many discoveries in medicine.[95] They performed successful skull surgery, cutting holes in the skull to alleviate fluid buildup and inflammation caused by head wounds. Survival rates for such skull surgeries were 80–90%, compared to about 30% before Inca times.[96]
The Incas revered the coca plant as sacred/magical. Its leaves were used in moderate amounts to lessen hunger and pain during work, but were mostly used for religious and health purposes.[97] The Spaniards took advantage of the effects of chewing coca leaves.[97] The Chasqui, messengers who ran throughout the empire to deliver messages, chewed coca leaves for extra energy. Coca leaves were also used as an anaesthetic during surgeries.
The Inca army was the most powerful in the Americas at that time, because any ordinary villager or farmer could be recruited as a soldier as part of the mit'a system of mandatory public service. Every able-bodied male Inca of fighting age had to take part in war in some capacity at least once and to prepare for warfare again when needed. By the time the empire reached its largest size, every section of the empire contributed in setting up an army for war.
The Incas had no iron or steel, and their weapons were not much more effective than those of their opponents, so they often defeated opponents by sheer force of numbers or else persuaded them to surrender beforehand by offering generous terms.[98] Inca weaponry included "hardwood spears launched using throwers, arrows, javelins, slings, the bolas, clubs, and maces with star-shaped heads made of copper or bronze."[98][99] Rolling rocks downhill onto the enemy was a common strategy, taking advantage of the hilly terrain.[100] Fighting was sometimes accompanied by drums and trumpets made of wood, shell or bone.[101][102] Armor included:[98][103]
Roads allowed quick movement (on foot) for the Inca army and shelters called tambo and storage silos called qullqas were built one day's travelling distance from each other, so that an army on campaign could always be fed and rested. This can be seen in names of ruins such as Ollantay Tambo, or My Lord's Storehouse. These were set up so the Inca and his entourage would always have supplies (and possibly shelter) ready as they traveled.
Chronicles and references from the 16th and 17th centuries support the idea of a banner. However, it represented the Inca (emperor), not the empire.
Francisco López de Jerez[106] wrote in 1534:
... todos venían repartidos en sus escuadras con sus banderas y capitanes que los mandan, con tanto concierto como turcos.(... all of them came distributed into squads, with their flags and captains commanding them, as well-ordered as Turks.)
Chronicler Bernabé Cobo wrote:
The royal standard or banner was a small square flag, ten or twelve spans around, made of cotton or wool cloth, placed on the end of a long staff, stretched and stiff such that it did not wave in the air and on it each king painted his arms and emblems, for each one chose different ones, though the sign of the Incas was the rainbow and two parallel snakes along the width with the tassel as a crown, which each king used to add for a badge or blazon those preferred, like a lion, an eagle and other figures.
(... el guión o estandarte real era una banderilla cuadrada y pequeña, de diez o doce palmos de ruedo, hecha de lienzo de algodón o de lana, iba puesta en el remate de una asta larga, tendida y tiesa, sin que ondease al aire, y en ella pintaba cada rey sus armas y divisas, porque cada uno las escogía diferentes, aunque las generales de los Incas eran el arco celeste y dos culebras tendidas a lo largo paralelas con la borda que le servía de corona, a las cuales solía añadir por divisa y blasón cada rey las que le parecía, como un león, un águila y otras figuras.)-Bernabé Cobo, Historia del Nuevo Mundo (1653)
Guaman Poma's 1615 book, El primer nueva corónica y buen gobierno, shows numerous line drawings of Inca flags.[107] In his 1847 book A History of the Conquest of Peru, "William H. Prescott ... says that in the Inca army each company had its particular banner and that the imperial standard, high above all, displayed the glittering device of the rainbow, the armorial ensign of the Incas."[108] A 1917 world flags book says the Inca "heir-apparent ... was entitled to display the royal standard of the rainbow in his military campaigns."[109]
In modern times the rainbow flag has been wrongly associated with the Tawantinsuyu and displayed as a symbol of Inca heritage by some groups in Peru and Bolivia. The city of Cusco also flies the Rainbow Flag, but as an official flag of the city. The Peruvian president Alejandro Toledo (2001–2006) flew the Rainbow Flag in Lima's presidential palace. However, according to Peruvian historiography, the Inca Empire never had a flag. Peruvian historian María Rostworowski said, "I bet my life, the Inca never had that flag, it never existed, no chronicler mentioned it".[110] Likewise, according to the Peruvian newspaper El Comercio, the flag dates only to the first decades of the 20th century,[111] and the Congress of the Republic of Peru has determined that the flag is a fake, citing the conclusion of the National Academy of Peruvian History:
"The official use of the wrongly called 'Tawantinsuyu flag' is a mistake. In the Pre-Hispanic Andean World there did not exist the concept of a flag, it did not belong to their historic context".[111]
National Academy of Peruvian History
The Incas were able to adapt to life at high altitude through successful acclimatization, which is characterized by an increased oxygen supply to the blood and tissues. For the native Inca living in the Andean highlands, this was achieved through the development of a larger lung capacity, and an increase in red blood cell counts, hemoglobin concentration, and capillary beds.[112]
Compared to other humans, the Incas had slower heart rates, almost one-third larger lung capacity, about 2 L (4 pints) more blood volume and double the amount of hemoglobin, which transfers oxygen from the lungs to the rest of the body. While the Conquistadors may have been slightly taller, the Inca had the advantage of coping with the extraordinary altitude.
en/2714.html.txt
ADDED
@@ -0,0 +1,184 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
|
2 |
+
|
3 |
+
The Inca Empire (Quechua: Tawantinsuyu, lit. "The Four Regions"[4]), also known as the Incan Empire and the Inka Empire, was the largest empire in pre-Columbian America.[5] The administrative, political and military center of the empire was located in the city of Cusco. The Inca civilization arose from the Peruvian highlands sometime in the early 13th century. Its last stronghold was conquered by the Spanish in 1572.
|
4 |
+
|
5 |
+
From 1438 to 1533, the Incas incorporated a large portion of western South America, centered on the Andean Mountains, using conquest and peaceful assimilation, among other methods. At its largest, the empire joined Peru, western Ecuador, western and south central Bolivia, northwest Argentina, a large portion of what is today Chile, and the southwesternmost tip of Colombia into a state comparable to the historical empires of Eurasia. Its official language was Quechua.[6] Many local forms of worship persisted in the empire, most of them concerning local sacred Huacas, but the Inca leadership encouraged the sun worship of Inti – their sun god – and imposed its sovereignty above other cults such as that of Pachamama.[7] The Incas considered their king, the Sapa Inca, to be the "son of the sun."[8]
|
6 |
+
|
7 |
+
The Inca Empire was unusual in that it lacked many features associated with civilization in the Old World. Anthropologist Gordon McEwan wrote that:[9]
|
8 |
+
|
9 |
+
The Incas lacked the use of wheeled vehicles. They lacked animals to ride and draft animals that could pull wagons and plows... [They] lacked the knowledge of iron and steel... Above all, they lacked a system of writing... Despite these supposed handicaps, the Incas were still able to construct one of the greatest imperial states in human history.
|
10 |
+
|
11 |
+
Notable features of the Inca Empire include its monumental architecture, especially stonework, extensive road network reaching all corners of the empire, finely-woven textiles, use of knotted strings (quipu) for record keeping and communication, agricultural innovations in a difficult environment, and the organization and management fostered or imposed on its people and their labor.
|
12 |
+
|
13 |
+
The Incan economy has been described in contradictory ways by scholars:[10]
|
14 |
+
|
15 |
+
... feudal, slave, socialist (here one may choose between socialist paradise or socialist tyranny)
|
16 |
+
|
17 |
+
The Inca Empire functioned largely without money and without markets. Instead, exchange of goods and services was based on reciprocity between individuals and among individuals, groups, and Inca rulers. "Taxes" consisted of a labour obligation of a person to the Empire. The Inca rulers (who theoretically owned all the means of production) reciprocated by granting access to land and goods and providing food and drink in celebratory feasts for their subjects.[11]
|
18 |
+
|
19 |
+
The Inca referred to their empire as Tawantinsuyu,[4] "the four suyu". In Quechua, tawa is four and -ntin is a suffix naming a group, so that a tawantin is a quartet, a group of four things taken together, in this case the four suyu ("regions" or "provinces") whose corners met at the capital. The four suyu were: Chinchaysuyu (north), Antisuyu (east; the Amazon jungle), Qullasuyu (south) and Kuntisuyu (west). The name Tawantinsuyu was, therefore, a descriptive term indicating a union of provinces. The Spanish transliterated the name as Tahuatinsuyo or Tahuatinsuyu.
|
20 |
+
|
21 |
+
The term Inka means "ruler" or "lord" in Quechua and was used to refer to the ruling class or the ruling family.[12] The Incas were a very small percentage of the total population of the empire, probably numbering only 15,000 to 40,000, but ruling a population of around 10 million people.[13] The Spanish adopted the term (transliterated as Inca in Spanish) as an ethnic term referring to all subjects of the empire rather than simply the ruling class. As such, the name Imperio inca ("Inca Empire") referred to the nation that they encountered and subsequently conquered.
|
22 |
+
|
23 |
+
The Inca Empire was the last chapter of thousands of years of Andean civilizations. The Andean civilization was one of five civilizations in the world deemed by scholars to be "pristine", that is indigenous and not derivative from other civilizations.[14]
|
24 |
+
|
25 |
+
The Inca Empire was preceded by two large-scale empires in the Andes: the Tiwanaku (c. 300–1100 AD), based around Lake Titicaca and the Wari or Huari (c. 600–1100 AD) centered near the city of Ayacucho. The Wari occupied the Cuzco area for about 400 years. Thus, many of the characteristics of the Inca Empire derived from earlier multi-ethnic and expansive Andean cultures.[15]
|
26 |
+
|
27 |
+
Carl Troll has argued that the development of the Inca state in the central Andes was aided by conditions that allow for the elaboration of the staple food chuño. Chuño, which can be stored for long periods, is made of potato dried at the freezing temperatures that are common at nighttime in the southern Peruvian highlands. Such a link between the Inca state and chuño may be questioned, as potatoes and other crops such as maize can also be dried with only sunlight.[16] Troll did also argue that llamas, the Inca's pack animal, can be found in its largest numbers in this very same region.[16] It is worth considering the maximum extent of the Inca Empire roughly coincided with the greatest distribution of llamas and alpacas in Pre-Hispanic America.[17] The link between the Andean biomes of puna and páramo, pastoralism and the Inca state is a matter of research.[18] As a third point Troll pointed out irrigation technology as advantageous to the Inca state-building.[18] While Troll theorized environmental influences on the Inca Empire, he opposed environmental determinism, arguing that culture lay at the core of the Inca civilization.[18]
|
28 |
+
|
29 |
+
The Inca people were a pastoral tribe in the Cusco area around the 12th century. Incan oral history tells an origin story of three caves. The center cave at Tampu T'uqu (Tambo Tocco) was named Qhapaq T'uqu ("principal niche", also spelled Capac Tocco). The other caves were Maras T'uqu (Maras Tocco) and Sutiq T'uqu (Sutic Tocco).[19] Four brothers and four sisters stepped out of the middle cave. They were: Ayar Manco, Ayar Cachi, Ayar Awqa (Ayar Auca) and Ayar Uchu; and Mama Ocllo, Mama Raua, Mama Huaco and Mama Qura (Mama Cora). Out of the side caves came the people who were to be the ancestors of all the Inca clans.
|
30 |
+
|
31 |
+
Ayar Manco carried a magic staff made of the finest gold. Where this staff landed, the people would live. They traveled for a long time. On the way, Ayar Cachi boasted about his strength and power. His siblings tricked him into returning to the cave to get a sacred llama. When he went into the cave, they trapped him inside to get rid of him.
|
32 |
+
|
33 |
+
Ayar Uchu decided to stay on the top of the cave to look over the Inca people. The minute he proclaimed that, he turned to stone. They built a shrine around the stone and it became a sacred object. Ayar Auca grew tired of all this and decided to travel alone. Only Ayar Manco and his four sisters remained.
|
34 |
+
|
35 |
+
Finally, they reached Cusco. The staff sank into the ground. Before they arrived, Mama Ocllo had already borne Ayar Manco a child, Sinchi Roca. The people who were already living in Cusco fought hard to keep their land, but Mama Huaco was a good fighter. When the enemy attacked, she threw her bolas (several stones tied together that spun through the air when thrown) at a soldier (gualla) and killed him instantly. The other people became afraid and ran away.
|
36 |
+
|
37 |
+
After that, Ayar Manco became known as Manco Cápac, the founder of the Inca. It is said that he and his sisters built the first Inca homes in the valley with their own hands. When the time came, Manco Cápac turned to stone like his brothers before him. His son, Sinchi Roca, became the second emperor of the Inca.[20]
|
38 |
+
|
39 |
+
Under the leadership of Manco Cápac, the Inca formed the small city-state Kingdom of Cusco (Quechua Qusqu', Qosqo). In 1438, they began a far-reaching expansion under the command of Sapa Inca (paramount leader) Pachacuti-Cusi Yupanqui, whose name literally meant "earth-shaker". The name of Pachacuti was given to him after he conquered the Tribe of Chancas (modern Apurímac). During his reign, he and his son Tupac Yupanqui brought much of the modern-day territory of Peru under Inca control.[21]
|
40 |
+
|
41 |
+
Pachacuti reorganized the kingdom of Cusco into the Tahuantinsuyu, which consisted of a central government with the Inca at its head and four provincial governments with strong leaders: Chinchasuyu (NW), Antisuyu (NE), Kuntisuyu (SW) and Qullasuyu (SE).[22] Pachacuti is thought to have built Machu Picchu, either as a family home or summer retreat, although it may have been an agricultural station.[23]
|
42 |
+
|
43 |
+
Pachacuti sent spies to regions he wanted in his empire and they brought to him reports on political organization, military strength and wealth. He then sent messages to their leaders extolling the benefits of joining his empire, offering them presents of luxury goods such as high quality textiles and promising that they would be materially richer as his subjects.
|
44 |
+
|
45 |
+
Most accepted the rule of the Inca as a fait accompli and acquiesced peacefully. Refusal to accept Inca rule resulted in military conquest. Following conquest the local rulers were executed. The ruler's children were brought to Cusco to learn about Inca administration systems, then return to rule their native lands. This allowed the Inca to indoctrinate them into the Inca nobility and, with luck, marry their daughters into families at various corners of the empire.
|
46 |
+
|
47 |
+
Traditionally the son of the Inca ruler led the army. Pachacuti's son Túpac Inca Yupanqui began conquests to the north in 1463 and continued them as Inca ruler after Pachacuti's death in 1471. Túpac Inca's most important conquest was the Kingdom of Chimor, the Inca's only serious rival for the Peruvian coast. Túpac Inca's empire then stretched north into modern-day Ecuador and Colombia.
|
48 |
+
|
49 |
+
Túpac Inca's son Huayna Cápac added a small portion of land to the north in modern-day Ecuador. At its height, the Inca Empire included Peru, western and south central Bolivia, southwest Ecuador and a large portion of what is today Chile, north of the Maule River. Traditional historiography claims the advance south halted after the Battle of the Maule where they met determined resistance from the Mapuche.[24] This view is challenged by historian Osvaldo Silva who argues instead that it was the social and political framework of the Mapuche that posed the main difficulty in imposing imperial rule.[24] Silva does accept that the battle of the Maule was a stalemate, but argues the Incas lacked incentives for conquest they had had when fighting more complex societies such as the Chimú Empire.[24] Silva also disputes the date given by traditional historiography for the battle: the late 15th century during the reign of Topa Inca Yupanqui (1471–93).[24] Instead, he places it in 1532 during the Inca Civil War.[24] Nevertheless, Silva agrees on the claim that the bulk of the Incan conquests were made during the late 15th century.[24] At the time of the Incan Civil War an Inca army was, according to Diego de Rosales, subduing a revolt among the Diaguitas of Copiapó and Coquimbo.[24]
|
50 |
+
|
51 |
+
The empire's push into the Amazon Basin near the Chinchipe River was stopped by the Shuar in 1527.[25] The empire extended into corners of Argentina and Colombia. However, most of the southern portion of the Inca empire, the portion denominated as Qullasuyu, was located in the Altiplano.
|
52 |
+
|
53 |
+
The Inca Empire was an amalgamation of languages, cultures and peoples. The components of the empire were not all uniformly loyal, nor were the local cultures all fully integrated. The Inca empire as a whole had an economy based on exchange and taxation of luxury goods and labour. The following quote describes a method of taxation:
|
54 |
+
|
55 |
+
For as is well known to all, not a single village of the highlands or the plains failed to pay the tribute levied on it by those who were in charge of these matters. There were even provinces where, when the natives alleged that they were unable to pay their tribute, the Inca ordered that each inhabitant should be obliged to turn in every four months a large quill full of live lice, which was the Inca's way of teaching and accustoming them to pay tribute.[26]
|
56 |
+
|
57 |
+
Spanish conquistadors led by Francisco Pizarro and his brothers explored south from what is today Panama, reaching Inca territory by 1526.[27] It was clear that they had reached a wealthy land with prospects of great treasure, and after another expedition in 1529 Pizarro traveled to Spain and received royal approval to conquer the region and be its viceroy. This approval was received as detailed in the following quote: "In July 1529 the Queen of Spain signed a charter allowing Pizarro to conquer the Incas. Pizarro was named governor and captain of all conquests in Peru, or New Castile, as the Spanish now called the land."[28]
|
58 |
+
|
59 |
+
When the conquistadors returned to Peru in 1532, a war of succession between the sons of Sapa Inca Huayna Capac, Huáscar and Atahualpa, and unrest among newly conquered territories weakened the empire. Perhaps more importantly, smallpox, influenza, typhus and measles had spread from Central America.
|
60 |
+
|
61 |
+
The forces led by Pizarro consisted of 168 men, one cannon, and 27 horses. The conquistadors carried lances, arquebuses, steel armor and long swords. In contrast, the Inca used weapons made of wood, stone, copper and bronze, and wore armor based on alpaca fiber, putting them at a significant technological disadvantage: none of their weapons could pierce the Spanish steel armor. In addition, because there were no horses in the Americas, the Inca had not developed tactics to fight cavalry. However, the Inca were still effective warriors, having successfully fought the Mapuche, who would later strategically defeat the Spanish as they expanded further south.
|
62 |
+
|
63 |
+
The first engagement between the Inca and the Spanish was the Battle of Puná, near present-day Guayaquil, Ecuador, on the Pacific Coast; Pizarro then founded the city of Piura in July 1532. Hernando de Soto was sent inland to explore the interior and returned with an invitation to meet the Inca, Atahualpa, who had defeated his brother in the civil war and was resting at Cajamarca with his army of 80,000 troops, who were at that moment armed only with hunting tools (knives and lassos for hunting llamas).
|
64 |
+
|
65 |
+
Pizarro and some of his men, most notably a friar named Vicente de Valverde, met with the Inca, who had brought only a small retinue. The Inca offered them ceremonial chicha in a golden cup, which the Spanish rejected. The Spanish interpreter, Friar Vicente, read the "Requerimiento" that demanded that he and his empire accept the rule of King Charles I of Spain and convert to Christianity. Atahualpa dismissed the message and asked them to leave. After this, the Spanish began their attack against the mostly unarmed Inca, captured Atahualpa as hostage, and forced the Inca to collaborate.
|
66 |
+
|
67 |
+
Atahualpa offered the Spaniards enough gold to fill the room he was imprisoned in and twice that amount of silver. The Inca fulfilled this ransom, but Pizarro deceived them, refusing to release the Inca afterwards. During Atahualpa's imprisonment Huáscar was assassinated elsewhere. The Spaniards maintained that this was at Atahualpa's orders; this was used as one of the charges against Atahualpa when the Spaniards finally executed him, in August 1533.[29]
|
68 |
+
|
69 |
+
Although "defeat" often implies an unwanted loss in battle, much of the Inca elite "actually welcomed the Spanish invaders as liberators and willingly settled down with them to share rule of Andean farmers and miners."[30]
|
70 |
+
|
71 |
+
The Spanish installed Atahualpa's brother Manco Inca Yupanqui in power; for some time Manco cooperated with the Spanish while they fought to put down resistance in the north. Meanwhile, an associate of Pizarro, Diego de Almagro, attempted to claim Cusco. Manco tried to use this intra-Spanish feud to his advantage, recapturing Cusco in 1536, but the Spanish retook the city afterwards. Manco Inca then retreated to the mountains of Vilcabamba and established the small Neo-Inca State, where he and his successors ruled for another 36 years, sometimes raiding the Spanish or inciting revolts against them. In 1572 the last Inca stronghold was conquered and the last ruler, Túpac Amaru, Manco's son, was captured and executed.[31] This ended resistance to the Spanish conquest under the political authority of the Inca state.
|
72 |
+
|
73 |
+
After the fall of the Inca Empire many aspects of Inca culture were systematically destroyed, including their sophisticated farming system, known as the vertical archipelago model of agriculture.[32] Spanish colonial officials used the Inca mita corvée labor system for colonial aims, sometimes brutally. One member of each family was forced to work in the gold and silver mines, the foremost of which was the titanic silver mine at Potosí. When a family member died, which would usually happen within a year or two, the family was required to send a replacement.[citation needed]
|
74 |
+
|
75 |
+
The effects of smallpox on the Inca empire were even more devastating. Beginning in Colombia, smallpox spread rapidly before the Spanish invaders first arrived in the empire. The spread was probably aided by the efficient Inca road system. Smallpox was only the first epidemic.[33] Other diseases, including a probable typhus outbreak in 1546, influenza and smallpox together in 1558, smallpox again in 1589, diphtheria in 1614, and measles in 1618, all ravaged the Inca people.
|
76 |
+
|
77 |
+
The number of people inhabiting Tawantinsuyu at its peak is uncertain, with estimates ranging from 4–37 million. Most population estimates are in the range of 6 to 14 million. In spite of the fact that the Inca kept excellent census records using their quipus, knowledge of how to read them was lost as almost all fell into disuse and disintegrated over time or were destroyed by the Spaniards.[34]
|
78 |
+
|
79 |
+
The empire was extremely linguistically diverse. Some of the most important languages were Quechua, Aymara, Puquina and Mochica, respectively mainly spoken in the Central Andes, the Altiplano or (Qullasuyu), the south Peruvian coast (Kuntisuyu), and the area of the north Peruvian coast (Chinchaysuyu) around Chan Chan, today Trujillo. Other languages included Quignam, Jaqaru, Leco, Uru-Chipaya languages, Kunza, Humahuaca, Cacán, Mapudungun, Culle, Chachapoya, Catacao languages, Manta, and Barbacoan languages, as well as numerous Amazonian languages on the frontier regions. The exact linguistic topography of the pre-Columbian and early colonial Andes remains incompletely understood, owing to the extinction of several languages and the loss of historical records.
|
80 |
+
|
81 |
+
In order to manage this diversity, the Inca lords promoted the usage of Quechua, especially the variety of modern-day Lima,[35] as the Qhapaq Runasimi ("great language of the people"), or the official language/lingua franca. Defined by mutual intelligibility, Quechua is actually a family of languages rather than one single language, parallel to the Romance or Slavic languages in Europe. Most communities within the empire, even those resistant to Inca rule, learned to speak a variety of Quechua (forming new regional varieties with distinct phonetics) in order to communicate with the Inca lords and mitma colonists, as well as the wider integrating society, but largely retained their native languages as well. The Incas also had their own ethnic language, referred to as Qhapaq simi ("royal language"), which is thought to have been closely related to or a dialect of Puquina, which appears to have been the official language of the former Tiwanaku Empire, from which the Incas claimed descent, making Qhapaq simi a source of prestige for them. The split between Qhapaq simi and Qhapaq Runasimi also exemplifies the larger split between hanan and hurin (upper and lower) society in general.
|
82 |
+
|
83 |
+
There are several common misconceptions about the history of Quechua, as it is frequently identified as the "Inca language". Quechua did not originate with the Incas, had been a lingua franca in multiple areas before the Inca expansions, was diverse before the rise of the Incas, and it was not the native or original language of the Incas. In addition, the main official language of the Inca Empire was the coastal Quechua variety, native to modern Lima, not the Cusco dialect. The pre-Inca Chincha Kingdom, with whom the Incas struck an alliance, had made this variety into a local prestige language by their extensive trading activities. The Peruvian coast was also the most populous and economically active region of the Inca Empire, and employing coastal Quechua offered an alternative to neighboring Mochica, the language of the rival state of Chimu. Trade had also spread Quechua northwards before the Inca expansions, towards Cajamarca and Ecuador, and Quechua was likely the official language of the older Wari Empire. However, the Incas have left an impressive linguistic legacy, in that they introduced Quechua to many areas where it is still widely spoken today, including Ecuador, southern Bolivia, southern Colombia, and parts of the Amazon basin. The Spanish conquerors continued the official usage of Quechua during the early colonial period, and transformed it into a literary language.[36]
|
84 |
+
|
85 |
+
The Incas are not known to have developed a written form of language; however, they visually recorded narratives through paintings on vases and cups (qirus).[37] These paintings are usually accompanied by geometric patterns known as toqapu, which are also found in textiles. Researchers have speculated that toqapu patterns could have served as a form of written communication (e.g. heraldry or glyphs); however, this remains unclear.[38] The Incas also kept records by using quipus.
|
86 |
+
|
87 |
+
Because of the high infant mortality rates that plagued the Inca Empire, all newborn infants were given the term ‘wawa’ when they were born. Most families did not invest very much in their child until the child reached the age of two or three years old. Once the child reached the age of three, a "coming of age" ceremony occurred, called the rutuchikuy. For the Incas, this ceremony indicated that the child had entered the stage of "ignorance". During this ceremony, the family would invite all relatives to their house for food and dance, and then each member of the family would receive a lock of hair from the child. After each family member had received a lock, the father would shave the child's head. This stage of life was characterized by "ignorance, inexperience, and lack of reason, a condition that the child would overcome with time."[39] In Incan society, in order to advance from the stage of ignorance to development, the child had to learn the roles associated with their gender.
|
88 |
+
|
89 |
+
The next important ritual was to celebrate the maturity of a child. Unlike the coming of age ceremony, the celebration of maturity signified the child's sexual potency. This celebration of puberty was called warachikuy for boys and qikuchikuy for girls. The warachikuy ceremony included dancing, fasting, tasks to display strength, and family ceremonies. The boy would also be given new clothes and taught how to act as an unmarried man. The qikuchikuy signified the onset of menstruation, upon which the girl would go into the forest alone and return only once the bleeding had ended. In the forest she would fast, and, once returned, the girl would be given a new name, adult clothing, and advice. This "folly" stage of life was the time young adults were allowed to have sex without being a parent.[39]
|
90 |
+
|
91 |
+
Between the ages of 20 and 30, people were considered young adults, "ripe for serious thought and labor."[39] Young adults were able to retain their youthful status by living at home and assisting in their home community. Young adults only reached full maturity and independence once they had married.
|
92 |
+
|
93 |
+
At the end of life, the terms for men and women denote loss of sexual vitality and humanity. Specifically, the "decrepitude" stage signifies the loss of mental well-being and further physical decline.
|
94 |
+
|
95 |
+
In the Incan Empire, the age of marriage differed for men and women: men typically married at the age of 20, while women usually got married about four years earlier at the age of 16.[40] Men who were highly ranked in society could have multiple wives, but those lower in the ranks could only take a single wife.[41] Marriages were typically within classes and resembled a more business-like agreement. Once married, the women were expected to cook, collect food and watch over the children and livestock.[40] Girls and mothers would also work around the house to keep it orderly to please the public inspectors.[42] These duties remained the same even after wives became pregnant and with the added responsibility of praying and making offerings to Kanopa, who was the god of pregnancy.[40] It was typical for marriages to begin on a trial basis with both men and women having a say in the longevity of the marriage. If the man felt that it wouldn't work out or if the woman wanted to return to her parents’ home the marriage would end. Once the marriage was final, the only way the two could be divorced was if they did not have a child together.[40] Marriage within the Empire was crucial for survival. A family was considered disadvantaged if there was not a married couple at the center because everyday life centered around the balance of male and female tasks.[43]
|
96 |
+
|
97 |
+
According to some historians, such as Terence N. D'Altroy, male and female roles were considered equal in Inca society. The "indigenous cultures saw the two genders as complementary parts of a whole."[43] In other words, there was not a hierarchical structure in the domestic sphere for the Incas. Within the domestic sphere, women were known as the weavers. Women's everyday tasks included: spinning, watching the children, weaving cloth, cooking, brewing chicha, preparing fields for cultivation, planting seeds, bearing children, harvesting, weeding, hoeing, herding, and carrying water.[44] Men, on the other hand, "weeded, plowed, participated in combat, helped in the harvest, carried firewood, built houses, herded llama and alpaca, and spun and wove when necessary".[44] This relationship between the genders may have been complementary. Unsurprisingly, onlooking Spaniards believed women were treated like slaves, because women did not work in Spanish society to the same extent, and certainly did not work in fields.[45] Women were sometimes allowed to own land and herds because inheritance was passed down from both the mother's and father's side of the family.[46] Kinship within the Inca society followed a parallel line of descent. In other words, women descended from women and men descended from men. Due to the parallel descent, a woman had access to land and other necessities through her mother.[44]
|
98 |
+
|
99 |
+
Inca myths were transmitted orally until early Spanish colonists recorded them; however, some scholars claim that they were recorded on quipus, Andean knotted string records.[47]
|
100 |
+
|
101 |
+
The Inca believed in reincarnation.[48] After death, the passage to the next world was fraught with difficulties. The spirit of the dead, camaquen, would need to follow a long road and during the trip the assistance of a black dog that could see in the dark was required. Most Incas imagined the after world to be like an earthly paradise with flower-covered fields and snow-capped mountains.
|
102 |
+
|
103 |
+
It was important to the Inca that they not die as a result of burning or that the body of the deceased not be incinerated. Burning would cause their vital force to disappear and threaten their passage to the after world. Those who obeyed the Inca moral code – ama suwa, ama llulla, ama quella (do not steal, do not lie, do not be lazy) – "went to live in the Sun's warmth while others spent their eternal days in the cold earth".[49] The Inca nobility practiced cranial deformation.[50] They wrapped tight cloth straps around the heads of newborns to shape their soft skulls into a more conical form, thus distinguishing the nobility from other social classes.
|
104 |
+
|
105 |
+
The Incas made human sacrifices. As many as 4,000 servants, court officials, favorites and concubines were killed upon the death of the Inca Huayna Capac in 1527.[51] The Incas performed child sacrifices around important events, such as the death of the Sapa Inca or during a famine. These sacrifices were known as qhapaq hucha.[52]
|
106 |
+
|
107 |
+
The Incas were polytheists who worshipped many gods. These included:
|
108 |
+
|
109 |
+
The Inca Empire employed central planning and traded with outside regions, although it did not operate a substantial internal market economy. While axe-monies were used along the northern coast, presumably by the provincial mindaláe trading class,[53] most households in the empire lived in a traditional economy in which households were required to pay taxes, usually in the form of the mit'a corvée labor, and military obligations,[54] though barter (or trueque) was present in some areas.[55] In return, the state provided security, food in times of hardship through the supply of emergency resources, agricultural projects (e.g. aqueducts and terraces) to increase productivity, and occasional feasts. While mit'a was used by the state to obtain labor, individual villages had a pre-Inca system of communal work, known as mink'a. This system survives to the modern day, known as mink'a or faena. The economy rested on the material foundations of the vertical archipelago, a system of ecological complementarity in accessing resources,[56] and the cultural foundation of ayni, or reciprocal exchange.[57][58]
|
110 |
+
|
111 |
+
The Sapa Inca was conceptualized as divine and was effectively head of the state religion. The Willaq Umu (or Chief Priest) was second to the emperor. Local religious traditions continued and in some cases such as the Oracle at Pachacamac on the Peruvian coast, were officially venerated. Following Pachacuti, the Sapa Inca claimed descent from Inti, who placed a high value on imperial blood; by the end of the empire, it was common to incestuously wed brother and sister. He was "son of the sun," and his people the intip churin, or "children of the sun," and both his right to rule and mission to conquer derived from his holy ancestor. The Sapa Inca also presided over ideologically important festivals, notably during the Inti Raymi, or "Sunfest" attended by soldiers, mummified rulers, nobles, clerics and the general population of Cusco beginning on the June solstice and culminating nine days later with the ritual breaking of the earth using a foot plow by the Inca. Moreover, Cusco was considered cosmologically central, loaded as it was with huacas and radiating ceque lines and geographic center of the Four-Quarters; Inca Garcilaso de la Vega called it "the navel of the universe".[59][60][61][62]
|
112 |
+
|
113 |
+
The Inca Empire was a federalist system consisting of a central government with the Inca at its head and four-quarters, or suyu: Chinchay Suyu (NW), Anti Suyu (NE), Kunti Suyu (SW) and Qulla Suyu (SE). The four corners of these quarters met at the center, Cusco. These suyu were likely created around 1460 during the reign of Pachacuti before the empire reached its largest territorial extent. At the time the suyu were established they were roughly of equal size and only later changed their proportions as the empire expanded north and south along the Andes.[63]
|
114 |
+
|
115 |
+
Cusco was likely not organized as a wamani, or province. Rather, it was probably somewhat akin to a modern federal district, like Washington, DC or Mexico City. The city sat at the center of the four suyu and served as the preeminent center of politics and religion. While Cusco was essentially governed by the Sapa Inca, his relatives and the royal panaqa lineages, each suyu was governed by an Apu, a term of esteem used for men of high status and for venerated mountains. Both Cusco as a district and the four suyu as administrative regions were grouped into upper hanan and lower hurin divisions. As the Inca did not have written records, it is impossible to exhaustively list the constituent wamani. However, colonial records allow us to reconstruct a partial list. There were likely more than 86 wamani, with more than 48 in the highlands and more than 38 on the coast.[64][65][66]
|
116 |
+
|
117 |
+
The most populous suyu was Chinchaysuyu, which encompassed the former Chimu empire and much of the northern Andes. At its largest extent, it extended through much of modern Ecuador and into modern Colombia.
|
118 |
+
|
119 |
+
The largest suyu by area was Qullasuyu, named after the Aymara-speaking Qulla people. It encompassed the Bolivian Altiplano and much of the southern Andes, reaching Argentina and as far south as the Maipo or Maule river in Central Chile.[67] Historian José Bengoa singled out Quillota as likely being the foremost Inca settlement in Chile.[68]
|
120 |
+
|
121 |
+
The second smallest suyu, Antisuyu, was northeast of Cusco in the high Andes. Its name is the root of the word "Andes."[69]
|
122 |
+
|
123 |
+
Kuntisuyu was the smallest suyu, located along the southern coast of modern Peru, extending into the highlands towards Cusco.[70]
|
124 |
+
|
125 |
+
The Inca state had no separate judiciary or codified laws. Customs, expectations and traditional local power holders governed behavior. The state had legal force, such as through tokoyrikoq (lit. "he who sees all"), or inspectors. The highest such inspector, typically a blood relative to the Sapa Inca, acted independently of the conventional hierarchy, providing a point of view for the Sapa Inca free of bureaucratic influence.[71]
|
126 |
+
|
127 |
+
The Inca had three moral precepts that governed their behavior: ama suwa (do not steal), ama llulla (do not lie) and ama quella (do not be lazy).
|
128 |
+
|
129 |
+
Colonial sources are not entirely clear or in agreement about Inca government structure, such as exact duties and functions of government positions. But the basic structure can be broadly described. The top was the Sapa Inca. Below that may have been the Willaq Umu, literally the "priest who recounts", the High Priest of the Sun.[72] However, beneath the Sapa Inca also sat the Inkap rantin, who was a confidant and assistant to the Sapa Inca, perhaps similar to a Prime Minister.[73] Starting with Topa Inca Yupanqui, a "Council of the Realm" was composed of 16 nobles: 2 from hanan Cusco; 2 from hurin Cusco; 4 from Chinchaysuyu; 2 from Cuntisuyu; 4 from Collasuyu; and 2 from Antisuyu. This weighting of representation balanced the hanan and hurin divisions of the empire, both within Cusco and within the Quarters (hanan suyukuna and hurin suyukuna).[74]
|
130 |
+
|
131 |
+
While provincial bureaucracy and government varied greatly, the basic organization was decimal. Taxpayers – male heads of household of a certain age range – were organized into corvée labor units (often doubling as military units) that formed the state's muscle as part of mit'a service. Each unit of more than 100 taxpayers was headed by a kuraka, while smaller units were headed by a kamayuq, a lower, non-hereditary status. However, while kuraka status was hereditary and a kuraka typically served for life, the position of a kuraka in the hierarchy was subject to change based on the privileges of superiors in the hierarchy; a pachaka kuraka could be appointed to the position by a waranqa kuraka. Furthermore, one kuraka in each decimal level could serve as the head of one of the nine groups at a lower level, so that a pachaka kuraka might also be a waranqa kuraka, in effect directly responsible for one unit of 100 taxpayers and less directly responsible for nine other such units.[75][76][77]
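A minimal sketch of this decimal layering, for illustration only: the pachaka (100) and waranqa (1,000) levels are named in the text above, while the chunka (10) and hunu (10,000) levels and the idealized unit sizes are assumptions added here.

import math

# Decimal levels of taxpayer units; pachaka and waranqa appear in the text,
# chunka and hunu are assumed names for the neighbouring decimal levels.
LEVELS = [("chunka", 10), ("pachaka", 100), ("waranqa", 1_000), ("hunu", 10_000)]

def kurakas_needed(taxpayers: int) -> dict[str, int]:
    """Idealized head count of officials per decimal level for one region."""
    return {name: math.ceil(taxpayers / size) for name, size in LEVELS}

print(kurakas_needed(30_000))
# {'chunka': 3000, 'pachaka': 300, 'waranqa': 30, 'hunu': 3}

In this idealized scheme each official at one level oversees roughly ten officials at the level below, which matches the nesting described above, although, as the text notes, real provincial organization varied greatly.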
|
132 |
+
|
133 |
+
Francisco Pizarro
|
134 |
+
|
135 |
+
Architecture was the most important of the Incan arts, with textiles reflecting architectural motifs. The most notable example is Machu Picchu, which was constructed by Inca engineers. The prime Inca structures were made of stone blocks that fit together so well that a knife could not be fitted through the stonework. These constructs have survived for centuries, with no use of mortar to sustain them.
|
136 |
+
|
137 |
+
This process was first used on a large scale by the Pucara (c. 300 BC–AD 300) peoples to the south in Lake Titicaca and later in the city of Tiwanaku (c. AD 400–1100) in present-day Bolivia. The rocks were sculpted to fit together exactly by repeatedly lowering a rock onto another and carving away any sections on the lower rock where the dust was compressed. The tight fit and the concavity on the lower rocks made them extraordinarily stable, despite the ongoing challenge of earthquakes and volcanic activity.
|
138 |
+
|
139 |
+
Physical measures used by the Inca were based on human body parts. Units included fingers, the distance from thumb to forefinger, palms, cubits and wingspans. The most basic distance unit was thatkiy or thatki, or one pace. The next largest unit was reported by Cobo to be the topo or tupu, measuring 6,000 thatkiys, or about 7.7 km (4.8 mi); careful study has shown that a range of 4.0 to 6.3 km (2.5 to 3.9 mi) is likely. Next was the wamani, composed of 30 topos (roughly 232 km or 144 mi). To measure area, 25 by 50 wingspans were used, reckoned in topos (roughly 3,280 km2 or 1,270 sq mi). It seems likely that distance was often interpreted as one day's walk; the distance between tambo way-stations varies widely in terms of distance, but far less in terms of time to walk that distance.[80][81]
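As a rough illustration of how these units nest, the following sketch takes Cobo's reported topo of about 7.7 km at face value; the implied pace length is derived from that figure and is not a value given in the text.

TOPO_KM = 7.7              # 1 topo as reported by Cobo; careful study suggests 4.0-6.3 km
THATKIY_PER_TOPO = 6_000   # 1 topo = 6,000 thatkiy (paces)
TOPO_PER_WAMANI = 30       # 1 wamani = 30 topos

thatkiy_m = TOPO_KM * 1_000 / THATKIY_PER_TOPO   # implied length of one pace
wamani_km = TOPO_KM * TOPO_PER_WAMANI
print(f"1 thatkiy is about {thatkiy_m:.2f} m")   # about 1.28 m
print(f"1 wamani is about {wamani_km:.0f} km")   # about 231 km, close to the ~232 km above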
|
140 |
+
|
141 |
+
Inca calendars were strongly tied to astronomy. Inca astronomers understood equinoxes, solstices and zenith passages, along with the Venus cycle. They could not, however, predict eclipses. The Inca calendar was essentially lunisolar, as two calendars were maintained in parallel, one solar and one lunar. As 12 lunar months fall 11 days short of a full 365-day solar year, those in charge of the calendar had to adjust every winter solstice. Each lunar month was marked with festivals and rituals.[82] Apparently, the days of the week were not named and days were not grouped into weeks. Similarly, months were not grouped into seasons. Time during a day was not measured in hours or minutes, but in terms of how far the sun had travelled or in how long it had taken to perform a task.[83]
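A short worked example of that shortfall, using a mean synodic month of about 29.53 days (a modern figure, not one given in the text):

SYNODIC_MONTH_DAYS = 29.53   # assumed mean length of a lunar month
SOLAR_YEAR_DAYS = 365

lunar_year = 12 * SYNODIC_MONTH_DAYS       # about 354.4 days
shortfall = SOLAR_YEAR_DAYS - lunar_year   # about 10.6 days, the "11 days" above
print(f"12 lunar months span about {lunar_year:.1f} days, "
      f"roughly {shortfall:.0f} days short of the solar year")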
|
142 |
+
|
143 |
+
The sophistication of Inca administration, calendrics and engineering required facility with numbers. Numerical information was stored in the knots of quipu strings, allowing for compact storage of large numbers.[84][85] These numbers were stored in base-10 digits, the same base used by the Quechua language[86] and in administrative and military units.[76] These numbers, stored in quipu, could be calculated on yupanas, grids with squares of positionally varying mathematical values, perhaps functioning as an abacus.[87] Calculation was facilitated by moving piles of tokens, seeds or pebbles between compartments of the yupana. It is likely that Inca mathematics at least allowed division of integers into integers or fractions and multiplication of integers and fractions.[88]
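The base-10 storage can be pictured with a small sketch; the encoding below is purely schematic (one knot group per decimal position) and does not reconstruct any particular khipu convention.

def to_knot_counts(n: int) -> list[int]:
    """Base-10 digits of n, highest position first; a zero digit means no knot."""
    return [int(d) for d in str(n)]

def from_knot_counts(digits: list[int]) -> int:
    """Rebuild the stored value from knot counts, highest position first."""
    value = 0
    for d in digits:
        value = value * 10 + d
    return value

knots = to_knot_counts(4_357)
print(knots)                    # [4, 3, 5, 7]
print(from_knot_counts(knots))  # 4357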
|
144 |
+
|
145 |
+
According to mid-17th-century Jesuit chronicler Bernabé Cobo,[89] the Inca designated officials to perform accounting-related tasks. These officials were called quipo camayos. Study of khipu sample VA 42527 (Museum für Völkerkunde, Berlin)[90] revealed that the numbers arranged in calendrically significant patterns were used for agricultural purposes in the "farm account books" kept by the khipukamayuq (accountant or warehouse keeper) to facilitate the closing of accounting books.[91]
|
146 |
+
|
147 |
+
Ceramics were painted using the polychrome technique portraying numerous motifs including animals, birds, waves, felines (popular in the Chavin culture) and geometric patterns found in the Nazca style of ceramics. In a culture without a written language, ceramics portrayed the basic scenes of everyday life, including the smelting of metals, relationships and scenes of tribal warfare. The most distinctive Inca ceramic objects are the Cusco bottles or "aryballos".[92] Many of these pieces are on display in Lima in the Larco Archaeological Museum and the National Museum of Archaeology, Anthropology and History.
|
148 |
+
|
149 |
+
Almost all of the gold and silver work of the Incan empire was melted down by the conquistadors, and shipped back to Spain.[93]
|
150 |
+
|
151 |
+
The Inca recorded information on assemblages of knotted strings, known as Quipu, although they can no longer be decoded. Originally it was thought that Quipu were used only as mnemonic devices or to record numerical data. Quipus are also believed to record history and literature.[94]
|
152 |
+
|
153 |
+
The Inca made many discoveries in medicine.[95] They performed successful skull surgery, cutting holes in the skull to alleviate fluid buildup and inflammation caused by head wounds. Survival rates for these operations were 80–90%, compared to about 30% before Inca times.[96]
|
154 |
+
|
155 |
+
The Incas revered the coca plant as sacred/magical. Its leaves were used in moderate amounts to lessen hunger and pain during work, but were mostly used for religious and health purposes.[97] The Spaniards took advantage of the effects of chewing coca leaves.[97] The Chasqui, messengers who ran throughout the empire to deliver messages, chewed coca leaves for extra energy. Coca leaves were also used as an anaesthetic during surgeries.
|
156 |
+
|
157 |
+
The Inca army was the most powerful at that time, because any ordinary villager or farmer could be recruited as a soldier as part of the mit'a system of mandatory public service. Every able-bodied male Inca of fighting age had to take part in war in some capacity at least once and to prepare for warfare again when needed. By the time the empire reached its largest size, every section of the empire contributed to setting up an army for war.
|
158 |
+
|
159 |
+
The Incas had no iron or steel and their weapons were not much more effective than those of their opponents so they often defeated opponents by sheer force of numbers, or else by persuading them to surrender beforehand by offering generous terms.[98] Inca weaponry included "hardwood spears launched using throwers, arrows, javelins, slings, the bolas, clubs, and maces with star-shaped heads made of copper or bronze."[98][99] Rolling rocks downhill onto the enemy was a common strategy, taking advantage of the hilly terrain.[100] Fighting was sometimes accompanied by drums and trumpets made of wood, shell or bone.[101][102] Armor included:[98][103]
|
160 |
+
|
161 |
+
Roads allowed quick movement (on foot) for the Inca army and shelters called tambo and storage silos called qullqas were built one day's travelling distance from each other, so that an army on campaign could always be fed and rested. This can be seen in names of ruins such as Ollantay Tambo, or My Lord's Storehouse. These were set up so the Inca and his entourage would always have supplies (and possibly shelter) ready as they traveled.
|
162 |
+
|
163 |
+
Chronicles and references from the 16th and 17th centuries support the idea of a banner. However, it represented the Inca (emperor), not the empire.
|
164 |
+
|
165 |
+
Francisco López de Jerez[106] wrote in 1534:
|
166 |
+
|
167 |
+
... todos venían repartidos en sus escuadras con sus banderas y capitanes que los mandan, con tanto concierto como turcos.(... all of them came distributed into squads, with their flags and captains commanding them, as well-ordered as Turks.)
|
168 |
+
|
169 |
+
Chronicler Bernabé Cobo wrote:
|
170 |
+
|
171 |
+
The royal standard or banner was a small square flag, ten or twelve spans around, made of cotton or wool cloth, placed on the end of a long staff, stretched and stiff such that it did not wave in the air and on it each king painted his arms and emblems, for each one chose different ones, though the sign of the Incas was the rainbow and two parallel snakes along the width with the tassel as a crown, which each king used to add for a badge or blazon those preferred, like a lion, an eagle and other figures.
|
172 |
+
(... el guión o estandarte real era una banderilla cuadrada y pequeña, de diez o doce palmos de ruedo, hecha de lienzo de algodón o de lana, iba puesta en el remate de una asta larga, tendida y tiesa, sin que ondease al aire, y en ella pintaba cada rey sus armas y divisas, porque cada uno las escogía diferentes, aunque las generales de los Incas eran el arco celeste y dos culebras tendidas a lo largo paralelas con la borda que le servía de corona, a las cuales solía añadir por divisa y blasón cada rey las que le parecía, como un león, un águila y otras figuras.)-Bernabé Cobo, Historia del Nuevo Mundo (1653)
|
173 |
+
|
174 |
+
Guaman Poma's 1615 book, El primer nueva corónica y buen gobierno, shows numerous line drawings of Inca flags.[107] In his 1847 book A History of the Conquest of Peru, "William H. Prescott ... says that in the Inca army each company had its particular banner and that the imperial standard, high above all, displayed the glittering device of the rainbow, the armorial ensign of the Incas."[108] A 1917 world flags book says the Inca "heir-apparent ... was entitled to display the royal standard of the rainbow in his military campaigns."[109]
|
175 |
+
|
176 |
+
In modern times the rainbow flag has been wrongly associated with the Tawantinsuyu and displayed as a symbol of Inca heritage by some groups in Peru and Bolivia. The city of Cusco also flies the Rainbow Flag, but as an official flag of the city. The Peruvian president Alejandro Toledo (2001–2006) flew the Rainbow Flag in Lima's presidential palace. However, according to Peruvian historiography, the Inca Empire never had a flag. Peruvian historian María Rostworowski said, "I bet my life, the Inca never had that flag, it never existed, no chronicler mentioned it".[110] Also, according to the Peruvian newspaper El Comercio, the flag dates to the first decades of the 20th century,[111] and even the Congress of the Republic of Peru has determined that the flag is a fake, citing the conclusion of the National Academy of Peruvian History:
|
177 |
+
|
178 |
+
"The official use of the wrongly called 'Tawantinsuyu flag' is a mistake. In the Pre-Hispanic Andean World there did not exist the concept of a flag, it did not belong to their historic context".[111]
|
179 |
+
National Academy of Peruvian History
|
180 |
+
|
181 |
+
Incas were able to adapt to their high-altitude living through successful acclimatization, which is characterized by increasing oxygen supply to the blood tissues. For the native Inca living in the Andean highlands, this was achieved through the development of a larger lung capacity, and an increase in red blood cell counts, hemoglobin concentration, and capillary beds.[112]
|
182 |
+
|
183 |
+
Compared to other humans, the Incas had slower heart rates, almost one-third larger lung capacity, about 2 L (4 pints) more blood volume and double the amount of hemoglobin, which transfers oxygen from the lungs to the rest of the body. While the Conquistadors may have been slightly taller, the Inca had the advantage of coping with the extraordinary altitude.
|
184 |
+
|
en/2715.html.txt
ADDED
@@ -0,0 +1,86 @@
1 |
+
|
2 |
+
|
3 |
+
A tooth (plural teeth) is a hard, calcified structure found in the jaws (or mouths) of many vertebrates and used to break down food. Some animals, particularly carnivores, also use teeth for hunting or for defensive purposes. The roots of teeth are covered by gums. Teeth are not made of bone, but rather of multiple tissues of varying density and hardness that originate from the embryonic germ layer, the ectoderm.
|
4 |
+
|
5 |
+
The general structure of teeth is similar across the vertebrates, although there is considerable variation in their form and position. The teeth of mammals have deep roots, and this pattern is also found in some fish, and in crocodilians. In most teleost fish, however, the teeth are attached to the outer surface of the bone, while in lizards they are attached to the inner surface of the jaw by one side. In cartilaginous fish, such as sharks, the teeth are attached by tough ligaments to the hoops of cartilage that form the jaw.[1]
|
6 |
+
|
7 |
+
Some animals develop only one set of teeth (monophyodonts) while others develop many sets (polyphyodonts). Sharks, for example, grow a new set of teeth every two weeks to replace worn teeth. Rodent incisors grow and wear away continually through gnawing, which helps maintain relatively constant length. The industry of the beaver is due in part to this adaptation. Many rodents such as voles and guinea pigs, but not mice, as well as leporidae like rabbits, have continuously growing molars in addition to incisors.[2][3]
|
8 |
+
|
9 |
+
Teeth are not always attached to the jaw, as they are in mammals. In many reptiles and fish, teeth are attached to the palate or to the floor of the mouth, forming additional rows inside those on the jaws proper. Some teleosts even have teeth in the pharynx. While not true teeth in the usual sense, the dermal denticles of sharks are almost identical in structure and are likely to have the same evolutionary origin. Indeed, teeth appear to have first evolved in sharks, and are not found in the more primitive jawless fish – while lampreys do have tooth-like structures on the tongue, these are in fact, composed of keratin, not of dentine or enamel, and bear no relationship to true teeth.[1] Though "modern" teeth-like structures with dentine and enamel have been found in late conodonts, they are now supposed to have evolved independently of later vertebrates' teeth.[4][5]
|
10 |
+
|
11 |
+
Living amphibians typically have small teeth, or none at all, since they commonly feed only on soft foods. In reptiles, teeth are generally simple and conical in shape, although there is some variation between species, most notably the venom-injecting fangs of snakes. The pattern of incisors, canines, premolars and molars is found only in mammals, and to varying extents, in their evolutionary ancestors. The numbers of these types of teeth vary greatly between species; zoologists use a standardised dental formula to describe the precise pattern in any given group.[1]
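As a small illustration of how such a formula is read, the sketch below counts teeth from per-quadrant figures, using the adult human formula 2.1.2.3 (incisors, canines, premolars, molars) for both jaws; the helper function itself is hypothetical.

def total_teeth(upper: tuple[int, int, int, int],
                lower: tuple[int, int, int, int]) -> int:
    """A dental formula covers one side of each jaw, so double both halves."""
    return 2 * (sum(upper) + sum(lower))

human_adult = ((2, 1, 2, 3), (2, 1, 2, 3))   # incisors, canines, premolars, molars
print(total_teeth(*human_adult))              # 32 permanent teeth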
|
12 |
+
|
13 |
+
The genes governing tooth development in mammals are homologous to those involved in the development of fish scales.[6] Study of a tooth plate of a fossil of the extinct fish Romundina stellina showed that the teeth and scales were made of the same tissues, also found in mammal teeth, lending support to the theory that teeth evolved as a modification of scales.[7]
|
14 |
+
|
15 |
+
Teeth are among the most distinctive (and long-lasting) features of mammal species. Paleontologists use teeth to identify fossil species and determine their relationships. The shape of an animal's teeth is related to its diet. For example, plant matter is hard to digest, so herbivores have many molars for chewing and grinding. Carnivores, on the other hand, have canine teeth to kill prey and to tear meat.
|
16 |
+
|
17 |
+
Mammals, in general, are diphyodont, meaning that they develop two sets of teeth. In humans, the first set (the "baby," "milk," "primary" or "deciduous" set) normally starts to appear at about six months of age, although some babies are born with one or more visible teeth, known as neonatal teeth. Normal tooth eruption at about six months is known as teething and can be painful. Kangaroos, elephants, and manatees are unusual among mammals because they are polyphyodonts.
|
18 |
+
|
19 |
+
In aardvarks, teeth lack enamel and have many pulp tubules, hence the name of the order Tubulidentata.
|
20 |
+
|
21 |
+
In dogs, the teeth are less likely than those of humans to form dental cavities because of the very high pH of dog saliva, which prevents enamel from demineralizing.[8] The canine teeth, sometimes called cuspids, are shaped like points (cusps) and are used for tearing and grasping food.[9]
|
22 |
+
|
23 |
+
Like human teeth, whale teeth have polyp-like protrusions located on the root surface of the tooth. These polyps are made of cementum in both species, but in human teeth, the protrusions are located on the outside of the root, while in whales the nodule is located on the inside of the pulp chamber. While the roots of human teeth are made of cementum on the outer surface, whales have cementum on the entire surface of the tooth with a very small layer of enamel at the tip. This small enamel layer is only seen in older whales where the cementum has been worn away to show the underlying enamel.[10]
|
24 |
+
|
25 |
+
The toothed whale is a suborder of the cetaceans characterized by having teeth. The teeth differ considerably among the species. They may be numerous, with some dolphins bearing over 100 teeth in their jaws. On the other hand, the narwhal has a giant unicorn-like tusk, which is a tooth containing millions of sensory pathways and used for sensing during feeding, navigation, and mating. It is the most neurologically complex tooth known. Beaked whales are almost toothless, with only bizarre teeth found in males. These teeth may be used for feeding but also for demonstrating aggression and showmanship.
|
26 |
+
|
27 |
+
In humans (and most other primates) there are usually 20 primary (also "baby" or "milk") teeth, and later up to 32 permanent teeth. Four of these 32 may be third molars or wisdom teeth, although these are not present in all adults, and may be removed surgically later in life.[11]
|
28 |
+
|
29 |
+
Among primary teeth, 10 of them are usually found in the maxilla (i.e. upper jaw) and the other 10 in the mandible (i.e. lower jaw). Among permanent teeth, 16 are found in the maxilla and the other 16 in the mandible. Most of the teeth have uniquely distinguishing features.
|
30 |
+
|
31 |
+
An adult horse has between 36 and 44 teeth. The enamel and dentin layers of horse teeth are intertwined.[12] All horses have 12 premolars, 12 molars, and 12 incisors.[13] Generally, all male equines also have four canine teeth (called tushes) between the molars and incisors. However, few female horses (less than 28%) have canines, and those that do usually have only one or two, which many times are only partially erupted.[14] A few horses have one to four wolf teeth, which are vestigial premolars, with most of those having only one or two. They are equally common in male and female horses and much more likely to be on the upper jaw. If present these can cause problems as they can interfere with the horse's bit contact. Therefore, wolf teeth are commonly removed.[13]
|
32 |
+
|
33 |
+
Horse teeth can be used to estimate the animal's age. Between birth and five years, age can be closely estimated by observing the eruption pattern on milk teeth and then permanent teeth. By age five, all permanent teeth have usually erupted. The horse is then said to have a "full" mouth. After the age of five, age can only be conjectured by studying the wear patterns on the incisors, shape, the angle at which the incisors meet, and other factors. The wear of teeth may also be affected by diet, natural abnormalities, and cribbing. Two horses of the same age may have different wear patterns.
|
34 |
+
|
35 |
+
A horse's incisors, premolars, and molars, once fully developed, continue to erupt as the grinding surface is worn down through chewing. A young adult horse will have teeth which are 4.5-5 inches long, with the majority of the crown remaining below the gumline in the dental socket. The rest of the tooth will slowly emerge from the jaw, erupting about 1/8" each year, as the horse ages. When the animal reaches old age, the crowns of the teeth are very short and the teeth are often lost altogether. Very old horses, if lacking molars, may need to have their fodder ground up and soaked in water to create a soft mush for them to eat in order to obtain adequate nutrition.
|
36 |
+
|
37 |
+
Elephants' tusks are specialized incisors for digging food up and fighting. Some elephant teeth are similar to those in manatees, and it is notable that elephants are believed to have undergone an aquatic phase in their evolution.
|
38 |
+
|
39 |
+
At birth, elephants have a total of 28 molar plate-like grinding teeth not including the tusks. These are organized into four sets of seven successively larger teeth which the elephant will slowly wear through during its lifetime of chewing rough plant material. Only four teeth are used for chewing at a given time, and as each tooth wears out, another tooth moves forward to take its place in a process similar to a conveyor belt. The last and largest of these teeth usually becomes exposed when the animal is around 40 years of age, and will often last for an additional 20 years. When the last of these teeth has fallen out, regardless of the elephant's age, the animal will no longer be able to chew food and will die of starvation.[15][16]
|
40 |
+
|
41 |
+
Rabbits and other lagomorphs usually shed their deciduous teeth before (or very shortly after) their birth, and are usually born with their permanent teeth.[17] The teeth of rabbits complement their diet, which consists of a wide range of vegetation. Since many of the foods are abrasive enough to cause attrition, rabbit teeth grow continuously throughout life.[18] Rabbits have a total of 6 incisors, three upper premolars, three upper molars, two lower premolars, and two lower molars on each side. There are no canines. The incisors wear away three to four millimeters every week, whereas the posterior teeth require a month to wear away the same amount.[19]
|
42 |
+
|
43 |
+
The incisors and cheek teeth of rabbits are called aradicular hypsodont teeth. This is sometimes referred to as an elodent dentition. These teeth grow or erupt continuously. The growth or eruption is held in balance by dental abrasion from chewing a diet high in fiber.
|
44 |
+
|
45 |
+
Rodents have upper and lower hypselodont incisors that continuously grow enamel throughout their lives without forming proper roots.[20] These teeth are also known as aradicular teeth; unlike humans, whose ameloblasts die after tooth development, rodents continually produce enamel and must wear down their teeth by gnawing on various materials.[21] Enamel and dentin are produced by the enamel organ, and growth is dependent on the presence of stem cells, cellular amplification, and cellular maturation structures in the odontogenic region.[22] Rodent incisors are used for cutting wood, biting through the skin of fruit, or for defense. This allows for the rate of wear and tooth growth to be at equilibrium.[20] The microstructure of rodent incisor enamel has been shown to be useful in studying the phylogeny and systematics of rodents because of its independent evolution from the other dental traits. The enamel on rodent incisors is composed of two layers: the inner portio interna (PI) with Hunter-Schreger bands (HSB) and an outer portio externa (PE) with radial enamel (RE).[23] The differential regulation of the epithelial stem cell niche in the incisors of rodent species such as guinea pigs has been studied in this context.[24][25]
|
46 |
+
|
47 |
+
The teeth have enamel on the outside and exposed dentin on the inside, so they self-sharpen during gnawing. On the other hand, continually growing molars are found in some rodent species, such as the sibling vole and the guinea pig.[24][25] There is variation in the dentition of the rodents, but generally, rodents lack canines and premolars, and have a space between their incisors and molars, called the diastema region.
|
48 |
+
|
49 |
+
Manatees are polyphyodont with mandibular molars developing separately from the jaw and are encased in a bony shell separated by soft tissue.
|
50 |
+
|
51 |
+
Walrus tusks are canine teeth that grow continuously throughout life.[26]
|
52 |
+
|
53 |
+
Fish, such as sharks, may go through many teeth in their lifetime. The replacement of multiple teeth is known as polyphyodontia.
|
54 |
+
|
55 |
+
A class of prehistoric sharks is called cladodonts for their strange forked teeth.
|
56 |
+
|
57 |
+
All amphibians have pedicellate teeth which are modified to be flexible due to connective tissue and uncalcified dentine that separates the crown from the base of the tooth.[27]
|
58 |
+
|
59 |
+
Most amphibians exhibit teeth that have a slight attachment to the jaw, known as acrodont teeth. Acrodont teeth exhibit limited connection to the dentary and have little innervation.[28] This is ideal for organisms that mostly use their teeth for grasping, but not for crushing, and allows for rapid regeneration of teeth at a low energy cost. Teeth are usually lost in the course of feeding if the prey is struggling. Additionally, amphibians that undergo a metamorphosis develop bicuspid shaped teeth.[29]
|
60 |
+
|
61 |
+
The teeth of reptiles are replaced constantly during their life. Juvenile crocodilians replace teeth with larger ones at a rate as high as one new tooth per socket every month. Once adult, tooth replacement rates can slow to two years and even longer. Overall, crocodilians may use 3,000 teeth from birth to death. New teeth are created within old teeth.[citation needed]
|
62 |
+
|
63 |
+
A skull of Ichthyornis discovered in 2014 suggests that the beak of birds may have evolved from teeth to allow chicks to escape their shells earlier, and thus avoid predators and also to penetrate protective covers such as hard earth to access underlying food.[30][31]
True teeth are unique to vertebrates,[32] although many invertebrates have analogous structures often referred to as teeth. The organisms with the simplest genome bearing such tooth-like structures are perhaps the parasitic worms of the family Ancylostomatidae.[33] For example, the hookworm Necator americanus has two dorsal and two ventral cutting plates or teeth around the anterior margin of the buccal capsule. It also has a pair of subdorsal and a pair of subventral teeth located close to the rear.[34]
Historically, the European medicinal leech, another invertebrate parasite, has been used in medicine to remove blood from patients.[35] Leeches have three jaws (tripartite) that look like little saws, bearing about 100 sharp teeth used to incise the host. The incision leaves a mark that is an inverted Y inside a circle. After piercing the skin and injecting anticoagulants (hirudin) and anaesthetics, they suck out blood, consuming up to ten times their body weight in a single meal.[36]
In some species of Bryozoa, the first part of the stomach forms a muscular gizzard lined with chitinous teeth that crush armoured prey such as diatoms. Wave-like peristaltic contractions then move the food through the stomach for digestion.[37]
Molluscs have a structure called a radula which bears a ribbon of chitinous teeth. However, these teeth are histologically and developmentally different from vertebrate teeth and are unlikely to be homologous. For example, vertebrate teeth develop from a neural crest mesenchyme-derived dental papilla, and the neural crest is specific to vertebrates, as are tissues such as enamel.[32]
The radula is used by molluscs for feeding and is sometimes compared rather inaccurately to a tongue. It is a minutely toothed, chitinous ribbon, typically used for scraping or cutting food before the food enters the oesophagus. The radula is unique to molluscs, and is found in every class of mollusc apart from bivalves.
Within the gastropods, the radula is used in feeding by both herbivorous and carnivorous snails and slugs. The arrangement of teeth (also known as denticles) on the radula ribbon varies considerably from one group to another as shown in the diagram on the left.
Predatory marine snails such as the Naticidae use the radula plus an acidic secretion to bore through the shell of other molluscs. Other predatory marine snails, such as the Conidae, use a specialized radula tooth as a poisoned harpoon. Predatory pulmonate land slugs, such as the ghost slug, use elongated razor-sharp teeth on the radula to seize and devour earthworms. Predatory cephalopods, such as squid, use the radula for cutting prey.
In most of the more ancient lineages of gastropods, the radula is used to graze by scraping diatoms and other microscopic algae off rock surfaces and other substrates. Limpets scrape algae from rocks using a radula equipped with exceptionally hard rasping teeth.[38] These teeth have the highest known tensile strength of any biological material, outperforming spider silk.[38] The mineral protein of the limpet teeth can withstand a tensile stress of 4.9 GPa, compared to 4 GPa for spider silk and 0.5 GPa for human teeth.[39]
Because teeth are very resistant, often preserved when bones are not,[40] and reflect the diet of the host organism, they are very valuable to archaeologists and palaeontologists.[41] Early fish such as the thelodonts had scales composed of dentine and an enamel-like compound, suggesting that the origin of teeth was from scales which were retained in the mouth. Fish as early as the late Cambrian had dentine in their exoskeleton, which may have functioned in defense or for sensing their environment.[42] Dentine can be as hard as the rest of teeth and is composed of collagen fibres, reinforced with hydroxyapatite.[42]
Though teeth are very resistant, they also can be brittle and highly susceptible to cracking.[43] However, cracking of the tooth can be used as a diagnostic tool for predicting bite force. Additionally, enamel fractures can also give valuable insight into the diet and behaviour of archaeological and fossil samples.
Decalcification removes the enamel from teeth and leaves only the organic interior intact, which comprises dentine and cementum.[44] Enamel is quickly decalcified in acids,[45] perhaps by dissolution by plant acids or via diagenetic solutions, or in the stomachs of vertebrate predators.[44] Enamel can be lost by abrasion or spalling,[44] and is lost before dentine or bone is destroyed by the fossilisation process.[45] In such a case, the 'skeleton' of the teeth would consist of the dentine, with a hollow pulp cavity.[44]
The organic part of dentine, conversely, is destroyed by alkalis.[45]
en/2716.html.txt
ADDED
@@ -0,0 +1,216 @@
India, officially the Republic of India (Hindi: Bhārat Gaṇarājya),[23] is a country in South Asia. It is the second-most populous country, the seventh-largest country by area, and the most populous democracy in the world. Bounded by the Indian Ocean on the south, the Arabian Sea on the southwest, and the Bay of Bengal on the southeast, it shares land borders with Pakistan to the west;[f] China, Nepal, and Bhutan to the north; and Bangladesh and Myanmar to the east. In the Indian Ocean, India is in the vicinity of Sri Lanka and the Maldives; its Andaman and Nicobar Islands share a maritime border with Thailand and Indonesia.
Modern humans arrived on the Indian subcontinent from Africa no later than 55,000 years ago.[24]
Their long occupation, initially in varying forms of isolation as hunter-gatherers, has made the region highly diverse, second only to Africa in human genetic diversity.[25] Settled life emerged on the subcontinent in the western margins of the Indus river basin 9,000 years ago, evolving gradually into the Indus Valley Civilisation of the third millennium BCE.[26]
By 1200 BCE, an archaic form of Sanskrit, an Indo-European language, had diffused into India from the northwest, unfolding as the language of the Rigveda, and recording the dawning of Hinduism in India.[27]
The Dravidian languages of India were supplanted in the northern regions.[28]
By 400 BCE, stratification and exclusion by caste had emerged within Hinduism,[29]
and Buddhism and Jainism had arisen, proclaiming social orders unlinked to heredity.[30]
Early political consolidations gave rise to the loose-knit Maurya and Gupta Empires based in the Ganges Basin.[31]
Their collective era was suffused with wide-ranging creativity,[32] but also marked by the declining status of women,[33] and the incorporation of untouchability into an organised system of belief.[g][34] In South India, the Middle kingdoms exported Dravidian-language scripts and religious cultures to the kingdoms of Southeast Asia.[35]
In the early medieval era, Christianity, Islam, Judaism, and Zoroastrianism put down roots on India's southern and western coasts.[36]
Muslim armies from Central Asia intermittently overran India's northern plains,[37]
eventually establishing the Delhi Sultanate, and drawing northern India into the cosmopolitan networks of medieval Islam.[38]
In the 15th century, the Vijayanagara Empire created a long-lasting composite Hindu culture in south India.[39]
In the Punjab, Sikhism emerged, rejecting institutionalised religion.[40]
The Mughal Empire, in 1526, ushered in two centuries of relative peace,[41]
leaving a legacy of luminous architecture.[h][42]
Gradually expanding rule of the British East India Company followed, turning India into a colonial economy, but also consolidating its sovereignty.[43] British Crown rule began in 1858. The rights promised to Indians were granted slowly,[44] but technological changes were introduced, and ideas of education, modernity and the public life took root.[45]
A pioneering and influential nationalist movement emerged, which was noted for nonviolent resistance and became the major factor in ending British rule.[46] In 1947 the British Indian Empire was partitioned into two independent dominions, a Hindu-majority Dominion of India and a Muslim-majority Dominion of Pakistan, amid large-scale loss of life and an unprecedented migration.[47][48]
India has been a secular federal republic since 1950, governed in a democratic parliamentary system. It is a pluralistic, multilingual and multi-ethnic society. India's population grew from 361 million in 1951 to 1,211 million in 2011.[49]
During the same time, its nominal per capita income increased from US$64 annually to US$1,498, and its literacy rate from 16.6% to 74%. From being a comparatively destitute country in 1951,[50]
India has become a fast-growing major economy, a hub for information technology services, with an expanding middle class.[51] It has a space programme which includes several planned or completed extraterrestrial missions. Indian movies, music, and spiritual teachings play an increasing role in global culture.[52]
India has substantially reduced its rate of poverty, though at the cost of increasing economic inequality.[53]
India is a nuclear weapons state, which ranks high in military expenditure. It has disputes over Kashmir with its neighbours, Pakistan and China, unresolved since the mid-20th century.[54]
Among the socio-economic challenges India faces are gender inequality, child malnutrition,[55]
and rising levels of air pollution.[56]
India's land is megadiverse, with four biodiversity hotspots.[57] Its forest cover comprises 21.4% of its area.[58] India's wildlife, which has traditionally been viewed with tolerance in India's culture,[59] is supported among these forests, and elsewhere, in protected habitats.
According to the Oxford English Dictionary (Third Edition 2009), the name "India" is derived from the Classical Latin India, a reference to South Asia and an uncertain region to its east; and in turn derived successively from: Hellenistic Greek India (Ἰνδία); ancient Greek Indos (Ἰνδός); Old Persian Hindush, an eastern province of the Achaemenid empire; and ultimately its cognate, the Sanskrit Sindhu, or "river," specifically the Indus river and, by implication, its well-settled southern basin.[60][61] The ancient Greeks referred to the Indians as Indoi (Ἰνδοί), which translates as "The people of the Indus".[62]
The term Bharat (Bhārat; pronounced [ˈbʱaːɾət]), mentioned in both Indian epic poetry and the Constitution of India,[63][64] is used in its variations by many Indian languages. A modern rendering of the historical name Bharatavarsha, which applied originally to a region of the Gangetic Valley,[65][66] Bharat gained increased currency from the mid-19th century as a native name for India.[63][67]
Hindustan ([ɦɪndʊˈstaːn]) is a Middle Persian name for India, introduced during the Mughal Empire and used widely since. Its meaning has varied, referring to a region encompassing present-day northern India and Pakistan or to India in its near entirety.[63][67][68]
By 55,000 years ago, the first modern humans, or Homo sapiens, had arrived on the Indian subcontinent from Africa, where they had earlier evolved.[69][70][71]
The earliest known modern human remains in South Asia date to about 30,000 years ago.[72] After 6500 BCE, evidence for domestication of food crops and animals, construction of permanent structures, and storage of agricultural surplus appeared in Mehrgarh and other sites in what is now Balochistan.[73] These gradually developed into the Indus Valley Civilisation,[74][73] the first urban culture in South Asia,[75] which flourished during 2500–1900 BCE in what is now Pakistan and western India.[76] Centred around cities such as Mohenjo-daro, Harappa, Dholavira, and Kalibangan, and relying on varied forms of subsistence, the civilisation engaged robustly in crafts production and wide-ranging trade.[75]
During the period 2000–500 BCE, many regions of the subcontinent transitioned from the Chalcolithic cultures to the Iron Age ones.[77] The Vedas, the oldest scriptures associated with Hinduism,[78] were composed during this period,[79] and historians have analysed these to posit a Vedic culture in the Punjab region and the upper Gangetic Plain.[77] Most historians also consider this period to have encompassed several waves of Indo-Aryan migration into the subcontinent from the north-west.[78] The caste system, which created a hierarchy of priests, warriors, and free peasants, but which excluded indigenous peoples by labelling their occupations impure, arose during this period.[80] On the Deccan Plateau, archaeological evidence from this period suggests the existence of a chiefdom stage of political organisation.[77] In South India, a progression to sedentary life is indicated by the large number of megalithic monuments dating from this period,[81] as well as by nearby traces of agriculture, irrigation tanks, and craft traditions.[81]
In the late Vedic period, around the 6th century BCE, the small states and chiefdoms of the Ganges Plain and the north-western regions had consolidated into 16 major oligarchies and monarchies that were known as the mahajanapadas.[82][83] The emerging urbanisation gave rise to non-Vedic religious movements, two of which became independent religions. Jainism came into prominence during the life of its exemplar, Mahavira.[84] Buddhism, based on the teachings of Gautama Buddha, attracted followers from all social classes excepting the middle class; chronicling the life of the Buddha was central to the beginnings of recorded history in India.[85][86][87] In an age of increasing urban wealth, both religions held up renunciation as an ideal,[88] and both established long-lasting monastic traditions. Politically, by the 3rd century BCE, the kingdom of Magadha had annexed or reduced other states to emerge as the Mauryan Empire.[89] The empire was once thought to have controlled most of the subcontinent except the far south, but its core regions are now thought to have been separated by large autonomous areas.[90][91] The Mauryan kings are known as much for their empire-building and determined management of public life as for Ashoka's renunciation of militarism and far-flung advocacy of the Buddhist dhamma.[92][93]
The Sangam literature of the Tamil language reveals that, between 200 BCE and 200 CE, the southern peninsula was ruled by the Cheras, the Cholas, and the Pandyas, dynasties that traded extensively with the Roman Empire and with West and South-East Asia.[94][95] In North India, Hinduism asserted patriarchal control within the family, leading to increased subordination of women.[96][89] By the 4th and 5th centuries, the Gupta Empire had created a complex system of administration and taxation in the greater Ganges Plain; this system became a model for later Indian kingdoms.[97][98] Under the Guptas, a renewed Hinduism based on devotion, rather than the management of ritual, began to assert itself.[99] This renewal was reflected in a flowering of sculpture and architecture, which found patrons among an urban elite.[98] Classical Sanskrit literature flowered as well, and Indian science, astronomy, medicine, and mathematics made significant advances.[98]
The Indian early medieval age, 600 CE to 1200 CE, is defined by regional kingdoms and cultural diversity.[100] When Harsha of Kannauj, who ruled much of the Indo-Gangetic Plain from 606 to 647 CE, attempted to expand southwards, he was defeated by the Chalukya ruler of the Deccan.[101] When his successor attempted to expand eastwards, he was defeated by the Pala king of Bengal.[101] When the Chalukyas attempted to expand southwards, they were defeated by the Pallavas from farther south, who in turn were opposed by the Pandyas and the Cholas from still farther south.[101] No ruler of this period was able to create an empire and consistently control lands much beyond his core region.[100] During this time, pastoral peoples, whose land had been cleared to make way for the growing agricultural economy, were accommodated within caste society, as were new non-traditional ruling classes.[102] The caste system consequently began to show regional differences.[102]
In the 6th and 7th centuries, the first devotional hymns were created in the Tamil language.[103] They were imitated all over India and led to both the resurgence of Hinduism and the development of all modern languages of the subcontinent.[103] Indian royalty, big and small, and the temples they patronised drew citizens in great numbers to the capital cities, which became economic hubs as well.[104] Temple towns of various sizes began to appear everywhere as India underwent another urbanisation.[104] By the 8th and 9th centuries, the effects were felt in South-East Asia, as South Indian culture and political systems were exported to lands that became part of modern-day Myanmar, Thailand, Laos, Cambodia, Vietnam, Philippines, Malaysia, and Java.[105] Indian merchants, scholars, and sometimes armies were involved in this transmission; South-East Asians took the initiative as well, with many sojourning in Indian seminaries and translating Buddhist and Hindu texts into their languages.[105]
After the 10th century, Muslim Central Asian nomadic clans, using swift-horse cavalry and raising vast armies united by ethnicity and religion, repeatedly overran South Asia's north-western plains, leading eventually to the establishment of the Islamic Delhi Sultanate in 1206.[106] The sultanate was to control much of North India and to make many forays into South India. Although at first disruptive for the Indian elites, the sultanate largely left its vast non-Muslim subject population to its own laws and customs.[107][108] By repeatedly repulsing Mongol raiders in the 13th century, the sultanate saved India from the devastation visited on West and Central Asia, setting the scene for centuries of migration of fleeing soldiers, learned men, mystics, traders, artists, and artisans from that region into the subcontinent, thereby creating a syncretic Indo-Islamic culture in the north.[109][110] The sultanate's raiding and weakening of the regional kingdoms of South India paved the way for the indigenous Vijayanagara Empire.[111] Embracing a strong Shaivite tradition and building upon the military technology of the sultanate, the empire came to control much of peninsular India,[112] and was to influence South Indian society for long afterwards.[111]
In the early 16th century, northern India, then under mainly Muslim rulers,[113] fell again to the superior mobility and firepower of a new generation of Central Asian warriors.[114] The resulting Mughal Empire did not stamp out the local societies it came to rule. Instead, it balanced and pacified them through new administrative practices[115][116] and diverse and inclusive ruling elites,[117] leading to more systematic, centralised, and uniform rule.[118] Eschewing tribal bonds and Islamic identity, especially under Akbar, the Mughals united their far-flung realms through loyalty, expressed through a Persianised culture, to an emperor who had near-divine status.[117] The Mughal state's economic policies, deriving most revenues from agriculture[119] and mandating that taxes be paid in the well-regulated silver currency,[120] caused peasants and artisans to enter larger markets.[118] The relative peace maintained by the empire during much of the 17th century was a factor in India's economic expansion,[118] resulting in greater patronage of painting, literary forms, textiles, and architecture.[121] Newly coherent social groups in northern and western India, such as the Marathas, the Rajputs, and the Sikhs, gained military and governing ambitions during Mughal rule, which, through collaboration or adversity, gave them both recognition and military experience.[122] Expanding commerce during Mughal rule gave rise to new Indian commercial and political elites along the coasts of southern and eastern India.[122] As the empire disintegrated, many among these elites were able to seek and control their own affairs.[123]
By the early 18th century, with the lines between commercial and political dominance being increasingly blurred, a number of European trading companies, including the English East India Company, had established coastal outposts.[124][125] The East India Company's control of the seas, greater resources, and more advanced military training and technology led it to increasingly flex its military muscle and caused it to become attractive to a portion of the Indian elite; these factors were crucial in allowing the company to gain control over the Bengal region by 1765 and sideline the other European companies.[126][124][127][128] Its further access to the riches of Bengal and the subsequent increased strength and size of its army enabled it to annexe or subdue most of India by the 1820s.[129] India was then no longer exporting manufactured goods as it long had, but was instead supplying the British Empire with raw materials. Many historians consider this to be the onset of India's colonial period.[124] By this time, with its economic power severely curtailed by the British parliament and having effectively been made an arm of British administration, the company began more consciously to enter non-economic arenas like education, social reform, and culture.[130]
Historians consider India's modern age to have begun sometime between 1848 and 1885. The appointment in 1848 of Lord Dalhousie as Governor General of the East India Company set the stage for changes essential to a modern state. These included the consolidation and demarcation of sovereignty, the surveillance of the population, and the education of citizens. Technological changes—among them, railways, canals, and the telegraph—were introduced not long after their introduction in Europe.[131][132][133][134] However, disaffection with the company also grew during this time and set off the Indian Rebellion of 1857. Fed by diverse resentments and perceptions, including invasive British-style social reforms, harsh land taxes, and summary treatment of some rich landowners and princes, the rebellion rocked many regions of northern and central India and shook the foundations of Company rule.[135][136] Although the rebellion was suppressed by 1858, it led to the dissolution of the East India Company and the direct administration of India by the British government. Proclaiming a unitary state and a gradual but limited British-style parliamentary system, the new rulers also protected princes and landed gentry as a feudal safeguard against future unrest.[137][138] In the decades following, public life gradually emerged all over India, leading eventually to the founding of the Indian National Congress in 1885.[139][140][141][142]
The rush of technology and the commercialisation of agriculture in the second half of the 19th century were marked by economic setbacks, and many small farmers became dependent on the whims of far-away markets.[143] There was an increase in the number of large-scale famines,[144] and, despite the risks of infrastructure development borne by Indian taxpayers, little industrial employment was generated for Indians.[145] There were also salutary effects: commercial cropping, especially in the newly canalled Punjab, led to increased food production for internal consumption.[146] The railway network provided critical famine relief,[147] notably reduced the cost of moving goods,[147] and helped nascent Indian-owned industry.[146]
After World War I, in which approximately one million Indians served,[148] a new period began. It was marked by British reforms but also repressive legislation, by more strident Indian calls for self-rule, and by the beginnings of a nonviolent movement of non-co-operation, of which Mohandas Karamchand Gandhi would become the leader and enduring symbol.[149] During the 1930s, slow legislative reform was enacted by the British; the Indian National Congress won victories in the resulting elections.[150] The next decade was beset with crises: Indian participation in World War II, the Congress's final push for non-co-operation, and an upsurge of Muslim nationalism. All were capped by the advent of independence in 1947, but tempered by the partition of India into two states: India and Pakistan.[151]
Vital to India's self-image as an independent nation was its constitution, completed in 1950, which put in place a secular and democratic republic.[152] It has remained a democracy with civil liberties, an active Supreme Court, and a largely independent press.[153] Economic liberalisation, which began in the 1990s, has created a large urban middle class, transformed India into one of the world's fastest-growing economies,[154] and increased its geopolitical clout. Indian movies, music, and spiritual teachings play an increasing role in global culture.[153] Yet, India is also shaped by seemingly unyielding poverty, both rural and urban;[153] by religious and caste-related violence;[155] by Maoist-inspired Naxalite insurgencies;[156] and by separatism in Jammu and Kashmir and in Northeast India.[157] It has unresolved territorial disputes with China[158] and with Pakistan.[158] India's sustained democratic freedoms are unique among the world's newer nations; however, in spite of its recent economic successes, freedom from want for its disadvantaged population remains a goal yet to be achieved.[159]
India accounts for the bulk of the Indian subcontinent, lying atop the Indian tectonic plate, a part of the Indo-Australian Plate.[160] India's defining geological processes began 75 million years ago when the Indian Plate, then part of the southern supercontinent Gondwana, began a north-eastward drift caused by seafloor spreading to its south-west, and later, south and south-east.[160] Simultaneously, the vast Tethyan oceanic crust, to its northeast, began to subduct under the Eurasian Plate.[160] These dual processes, driven by convection in the Earth's mantle, both created the Indian Ocean and caused the Indian continental crust eventually to under-thrust Eurasia and to uplift the Himalayas.[160] Immediately south of the emerging Himalayas, plate movement created a vast trough that rapidly filled with river-borne sediment[161] and now constitutes the Indo-Gangetic Plain.[162] Cut off from the plain by the ancient Aravalli Range lies the Thar Desert.[163]
The original Indian Plate survives as peninsular India, the oldest and geologically most stable part of India. It extends as far north as the Satpura and Vindhya ranges in central India. These parallel chains run from the Arabian Sea coast in Gujarat in the west to the coal-rich Chota Nagpur Plateau in Jharkhand in the east.[164] To the south, the remaining peninsular landmass, the Deccan Plateau, is flanked on the west and east by coastal ranges known as the Western and Eastern Ghats;[165] the plateau contains the country's oldest rock formations, some over one billion years old. Constituted in such fashion, India lies to the north of the equator between 6° 44′ and 35° 30′ north latitude[i] and 68° 7′ and 97° 25′ east longitude.[166]
India's coastline measures 7,517 kilometres (4,700 mi) in length; of this distance, 5,423 kilometres (3,400 mi) belong to peninsular India and 2,094 kilometres (1,300 mi) to the Andaman, Nicobar, and Lakshadweep island chains.[167] According to the Indian naval hydrographic charts, the mainland coastline consists of the following: 43% sandy beaches; 11% rocky shores, including cliffs; and 46% mudflats or marshy shores.[167]
Major Himalayan-origin rivers that substantially flow through India include the Ganges and the Brahmaputra, both of which drain into the Bay of Bengal.[169] Important tributaries of the Ganges include the Yamuna and the Kosi; the latter's extremely low gradient, caused by long-term silt deposition, leads to severe floods and course changes.[170][171] Major peninsular rivers, whose steeper gradients prevent their waters from flooding, include the Godavari, the Mahanadi, the Kaveri, and the Krishna, which also drain into the Bay of Bengal;[172] and the Narmada and the Tapti, which drain into the Arabian Sea.[173] Coastal features include the marshy Rann of Kutch of western India and the alluvial Sundarbans delta of eastern India; the latter is shared with Bangladesh.[174] India has two archipelagos: the Lakshadweep, coral atolls off India's south-western coast; and the Andaman and Nicobar Islands, a volcanic chain in the Andaman Sea.[175]
The Indian climate is strongly influenced by the Himalayas and the Thar Desert, both of which drive the economically and culturally pivotal summer and winter monsoons.[176] The Himalayas prevent cold Central Asian katabatic winds from blowing in, keeping the bulk of the Indian subcontinent warmer than most locations at similar latitudes.[177][178] The Thar Desert plays a crucial role in attracting the moisture-laden south-west summer monsoon winds that, between June and October, provide the majority of India's rainfall.[176] Four major climatic groupings predominate in India: tropical wet, tropical dry, subtropical humid, and montane.[179]
India is a megadiverse country, a term employed for 17 countries which display high biological diversity and contain many species exclusively indigenous, or endemic, to them.[181] India is a habitat for 8.6% of all mammal species, 13.7% of bird species, 7.9% of reptile species, 6% of amphibian species, 12.2% of fish species, and 6.0% of all flowering plant species.[182][183] Fully a third of Indian plant species are endemic.[184] India also contains four of the world's 34 biodiversity hotspots,[57] or regions that display significant habitat loss in the presence of high endemism.[j][185]
India's forest cover is 701,673 km2 (270,917 sq mi), which is 21.35% of the country's total land area. It can be subdivided further into broad categories of canopy density, or the proportion of the area of a forest covered by its tree canopy.[186] Very dense forest, whose canopy density is greater than 70%, occupies 2.61% of India's land area.[186] It predominates in the tropical moist forest of the Andaman Islands, the Western Ghats, and Northeast India.[187] Moderately dense forest, whose canopy density is between 40% and 70%, occupies 9.59% of India's land area.[186] It predominates in the temperate coniferous forest of the Himalayas, the moist deciduous sal forest of eastern India, and the dry deciduous teak forest of central and southern India.[187] Open forest, whose canopy density is between 10% and 40%, occupies 9.14% of India's land area,[186] and predominates in the babul-dominated thorn forest of the central Deccan Plateau and the western Gangetic plain.[187]
Among the Indian subcontinent's notable indigenous trees are the astringent Azadirachta indica, or neem, which is widely used in rural Indian herbal medicine,[188] and the luxuriant Ficus religiosa, or peepul,[189] which is displayed on the ancient seals of Mohenjo-daro,[190] and under which the Buddha is recorded in the Pali canon to have sought enlightenment.[191]
Many Indian species have descended from those of Gondwana, the southern supercontinent from which India separated more than 100 million years ago.[192] India's subsequent collision with Eurasia set off a mass exchange of species. However, volcanism and climatic changes later caused the extinction of many endemic Indian forms.[193] Still later, mammals entered India from Asia through two zoogeographical passes flanking the Himalayas.[187] This had the effect of lowering endemism among India's mammals, which stands at 12.6%, contrasting with 45.8% among reptiles and 55.8% among amphibians.[183] Notable endemics are the vulnerable[194] hooded leaf monkey[195] and the threatened[196] Beddome's toad[196][197] of the Western Ghats.
India contains 172 IUCN-designated threatened animal species, or 2.9% of endangered forms.[198] These include the endangered Bengal tiger and the Ganges river dolphin. Critically endangered species include: the gharial, a crocodilian; the great Indian bustard; and the Indian white-rumped vulture, which has become nearly extinct by having ingested the carrion of diclofenac-treated cattle.[199] The pervasive and ecologically devastating human encroachment of recent decades has critically endangered Indian wildlife. In response, the system of national parks and protected areas, first established in 1935, was expanded substantially. In 1972, India enacted the Wildlife Protection Act[200] and Project Tiger to safeguard crucial wilderness; the Forest Conservation Act was enacted in 1980 and amendments added in 1988.[201] India hosts more than five hundred wildlife sanctuaries and thirteen biosphere reserves,[202] four of which are part of the World Network of Biosphere Reserves; twenty-five wetlands are registered under the Ramsar Convention.[203]
India is the world's most populous democracy.[205] A parliamentary republic with a multi-party system,[206] it has eight recognised national parties, including the Indian National Congress and the Bharatiya Janata Party (BJP), and more than 40 regional parties.[207] The Congress is considered centre-left in Indian political culture,[208] and the BJP right-wing.[209][210][211] For most of the period between 1950—when India first became a republic—and the late 1980s, the Congress held a majority in the parliament. Since then, however, it has increasingly shared the political stage with the BJP,[212] as well as with powerful regional parties which have often forced the creation of multi-party coalition governments at the centre.[213]
In the Republic of India's first three general elections, in 1951, 1957, and 1962, the Jawaharlal Nehru-led Congress won easy victories. On Nehru's death in 1964, Lal Bahadur Shastri briefly became prime minister; he was succeeded, after his own unexpected death in 1966, by Nehru's daughter Indira Gandhi, who went on to lead the Congress to election victories in 1967 and 1971. Following public discontent with the state of emergency she declared in 1975, the Congress was voted out of power in 1977; the then-new Janata Party, which had opposed the emergency, was voted in. Its government lasted just over two years. Voted back into power in 1980, the Congress saw a change in leadership in 1984, when Indira Gandhi was assassinated; she was succeeded by her son Rajiv Gandhi, who won an easy victory in the general elections later that year. The Congress was voted out again in 1989 when a National Front coalition, led by the newly formed Janata Dal in alliance with the Left Front, won the elections; that government too proved relatively short-lived, lasting just under two years.[214] Elections were held again in 1991; no party won an absolute majority. The Congress, as the largest single party, was able to form a minority government led by P. V. Narasimha Rao.[215]
A two-year period of political turmoil followed the general election of 1996. Several short-lived alliances shared power at the centre. The BJP formed a government briefly in 1996; it was followed by two comparatively long-lasting United Front coalitions, which depended on external support. In 1998, the BJP was able to form a successful coalition, the National Democratic Alliance (NDA). Led by Atal Bihari Vajpayee, the NDA became the first non-Congress coalition government to complete a five-year term.[216] Again in the 2004 Indian general elections, no party won an absolute majority, but the Congress emerged as the largest single party, forming another successful coalition: the United Progressive Alliance (UPA). It had the support of left-leaning parties and MPs who opposed the BJP. The UPA returned to power in the 2009 general election with increased numbers, and it no longer required external support from India's communist parties.[217] That year, Manmohan Singh became the first prime minister since Jawaharlal Nehru in 1957 and 1962 to be re-elected to a consecutive five-year term.[218] In the 2014 general election, the BJP became the first political party since 1984 to win a majority and govern without the support of other parties.[219] The incumbent prime minister is Narendra Modi, a former chief minister of Gujarat. On 20 July 2017, Ram Nath Kovind was elected India's 14th president and took the oath of office on 25 July 2017.[220][221][222]
India is a federation with a parliamentary system governed under the Constitution of India—the country's supreme legal document. It is a constitutional republic and representative democracy, in which "majority rule is tempered by minority rights protected by law". Federalism in India defines the power distribution between the union and the states. The Constitution of India, which came into effect on 26 January 1950,[224] originally stated India to be a "sovereign, democratic republic;" this characterisation was amended in 1971 to "a sovereign, socialist, secular, democratic republic".[225] India's form of government, traditionally described as "quasi-federal" with a strong centre and weak states,[226] has grown increasingly federal since the late 1990s as a result of political, economic, and social changes.[227][228]
The Government of India comprises three branches:[230] the executive, the legislature, and the judiciary.
India is a federal union comprising 28 states and 8 union territories.[245] All states, as well as the union territories of Jammu and Kashmir, Puducherry and the National Capital Territory of Delhi, have elected legislatures and governments following the Westminster system of governance. The remaining five union territories are directly ruled by the central government through appointed administrators. In 1956, under the States Reorganisation Act, states were reorganised on a linguistic basis.[246] There are over a quarter of a million local government bodies at city, town, block, district and village levels.[247]
In the 1950s, India strongly supported decolonisation in Africa and Asia and played a leading role in the Non-Aligned Movement.[249] After initially cordial relations with neighbouring China, India went to war with China in 1962, and was widely thought to have been humiliated. India has had tense relations with neighbouring Pakistan; the two nations have gone to war four times: in 1947, 1965, 1971, and 1999. Three of these wars were fought over the disputed territory of Kashmir, while the fourth, the 1971 war, followed from India's support for the independence of Bangladesh.[250] In the late 1980s, the Indian military twice intervened abroad at the invitation of the host country: a peace-keeping operation in Sri Lanka between 1987 and 1990; and an armed intervention to prevent a 1988 coup d'état attempt in the Maldives. After the 1965 war with Pakistan, India began to pursue close military and economic ties with the Soviet Union; by the late 1960s, the Soviet Union was its largest arms supplier.[251]
Aside from its ongoing special relationship with Russia,[252] India has wide-ranging defence relations with Israel and France. In recent years, it has played key roles in the South Asian Association for Regional Cooperation and the World Trade Organization. The nation has provided 100,000 military and police personnel to serve in 35 UN peacekeeping operations across four continents. It participates in the East Asia Summit, the G8+5, and other multilateral forums.[253] India has close economic ties with South America,[254] Asia, and Africa; it pursues a "Look East" policy that seeks to strengthen partnerships with the ASEAN nations, Japan, and South Korea that revolve around many issues, but especially those involving economic investment and regional security.[255][256]
China's nuclear test of 1964, as well as its repeated threats to intervene in support of Pakistan in the 1965 war, convinced India to develop nuclear weapons.[258] India conducted its first nuclear weapons test in 1974 and carried out additional underground testing in 1998. Despite criticism and military sanctions, India has signed neither the Comprehensive Nuclear-Test-Ban Treaty nor the Nuclear Non-Proliferation Treaty, considering both to be flawed and discriminatory.[259] India maintains a "no first use" nuclear policy and is developing a nuclear triad capability as a part of its "Minimum Credible Deterrence" doctrine.[260][261] It is developing a ballistic missile defence shield and a fifth-generation fighter jet.[262][263] Other indigenous military projects involve the design and implementation of Vikrant-class aircraft carriers and Arihant-class nuclear submarines.[264]
Since the end of the Cold War, India has increased its economic, strategic, and military co-operation with the United States and the European Union.[265] In 2008, a civilian nuclear agreement was signed between India and the United States. Although India possessed nuclear weapons at the time and was not a party to the Nuclear Non-Proliferation Treaty, it received waivers from the International Atomic Energy Agency and the Nuclear Suppliers Group, ending earlier restrictions on India's nuclear technology and commerce. As a consequence, India became the sixth de facto nuclear weapons state.[266] India subsequently signed co-operation agreements involving civilian nuclear energy with Russia,[267] France,[268] the United Kingdom,[269] and Canada.[270]
The President of India is the supreme commander of the nation's armed forces; with 1.395 million active troops, they compose the world's second-largest military. It comprises the Indian Army, the Indian Navy, the Indian Air Force, and the Indian Coast Guard.[271] The official Indian defence budget for 2011 was US$36.03 billion, or 1.83% of GDP.[272] For the fiscal year spanning 2012–2013, US$40.44 billion was budgeted.[273] According to a 2008 Stockholm International Peace Research Institute (SIPRI) report, India's annual military expenditure in terms of purchasing power stood at US$72.7 billion.[274] In 2011, the annual defence budget increased by 11.6%,[275] although this does not include funds that reach the military through other branches of government.[276] As of 2012[update], India is the world's largest arms importer; between 2007 and 2011, it accounted for 10% of funds spent on international arms purchases.[277] Much of the military expenditure was focused on defence against Pakistan and countering growing Chinese influence in the Indian Ocean.[275] In May 2017, the Indian Space Research Organisation launched the South Asia Satellite, a gift from India to its neighbouring SAARC countries.[278] In October 2018, India signed a US$5.43 billion (over ₹400 billion) agreement with Russia to procure four S-400 Triumf surface-to-air missile defence systems, Russia's most advanced long-range missile defence system.[279]
According to the International Monetary Fund (IMF), the Indian economy in 2019 was nominally worth $2.9 trillion; it is the fifth-largest economy by market exchange rates, and is around $11 trillion, the third-largest by purchasing power parity, or PPP.[19] With its average annual GDP growth rate of 5.8% over the past two decades, and reaching 6.1% during 2011–2012,[283] India is one of the world's fastest-growing economies.[284] However, the country ranks 139th in the world in nominal GDP per capita and 118th in GDP per capita at PPP.[285] Until 1991, all Indian governments followed protectionist policies that were influenced by socialist economics. Widespread state intervention and regulation largely walled the economy off from the outside world. An acute balance of payments crisis in 1991 forced the nation to liberalise its economy;[286] since then it has moved slowly towards a free-market system[287][288] by emphasising both foreign trade and direct investment inflows.[289] India has been a member of WTO since 1 January 1995.[290]
The 513.7-million-worker Indian labour force is the world's second-largest, as of 2016[update].[271] The service sector makes up 55.6% of GDP, the industrial sector 26.3% and the agricultural sector 18.1%. India's foreign exchange remittances of US$70 billion in 2014, the largest in the world, were contributed to its economy by 25 million Indians working in foreign countries.[291] Major agricultural products include: rice, wheat, oilseed, cotton, jute, tea, sugarcane, and potatoes.[245] Major industries include: textiles, telecommunications, chemicals, pharmaceuticals, biotechnology, food processing, steel, transport equipment, cement, mining, petroleum, machinery, and software.[245] In 2006, the share of external trade in India's GDP stood at 24%, up from 6% in 1985.[287] In 2008, India's share of world trade was 1.68%;[292] In 2011, India was the world's tenth-largest importer and the nineteenth-largest exporter.[293] Major exports include: petroleum products, textile goods, jewellery, software, engineering goods, chemicals, and manufactured leather goods.[245] Major imports include: crude oil, machinery, gems, fertiliser, and chemicals.[245] Between 2001 and 2011, the contribution of petrochemical and engineering goods to total exports grew from 14% to 42%.[294] India was the world's second largest textile exporter after China in the 2013 calendar year.[295]
Averaging an economic growth rate of 7.5% for several years prior to 2007,[287] India has more than doubled its hourly wage rates during the first decade of the 21st century.[296] Some 431 million Indians have left poverty since 1985; India's middle classes are projected to number around 580 million by 2030.[297] Though ranking 51st in global competitiveness, as of 2010[update], India ranks 17th in financial market sophistication, 24th in the banking sector, 44th in business sophistication, and 39th in innovation, ahead of several advanced economies.[298] With seven of the world's top 15 information technology outsourcing companies based in India, as of 2009[update], the country is viewed as the second-most favourable outsourcing destination after the United States.[299] India's consumer market, the world's eleventh-largest, is expected to become fifth-largest by 2030.[297] However, barely 2% of Indians pay income taxes.[300]
Driven by growth, India's nominal GDP per capita increased steadily from US$329 in 1991, when economic liberalisation began, to US$1,265 in 2010, to an estimated US$1,723 in 2016. It is expected to grow to US$2,358 by 2020.[19] However, it has remained lower than those of other Asian developing countries like Indonesia, Malaysia, Philippines, Sri Lanka, and Thailand, and is expected to remain so in the near future. Its GDP per capita is higher than Bangladesh, Pakistan, Nepal, Afghanistan and others.[301]
According to a 2011 PricewaterhouseCoopers (PwC) report, India's GDP at purchasing power parity could overtake that of the United States by 2045.[303] During the next four decades, Indian GDP is expected to grow at an annualised average of 8%, making it potentially the world's fastest-growing major economy until 2050.[303] The report highlights key growth factors: a young and rapidly growing working-age population; growth in the manufacturing sector because of rising education and engineering skill levels; and sustained growth of the consumer market driven by a rapidly growing middle-class.[303] The World Bank cautions that, for India to achieve its economic potential, it must continue to focus on public sector reform, transport infrastructure, agricultural and rural development, removal of labour regulations, education, energy security, and public health and nutrition.[304]
According to the Worldwide Cost of Living Report 2017 released by the Economist Intelligence Unit (EIU) which was created by comparing more than 400 individual prices across 160 products and services, four of the cheapest cities were in India: Bangalore (3rd), Mumbai (5th), Chennai (5th) and New Delhi (8th).[305]
India's telecommunication industry, the world's fastest-growing, added 227 million subscribers during the period 2010–2011,[306] and after the third quarter of 2017, India surpassed the US to become the second largest smartphone market in the world after China.[307]
The Indian automotive industry, the world's second-fastest growing, increased domestic sales by 26% during 2009–2010,[308] and exports by 36% during 2008–2009.[309] India's capacity to generate electrical power is 300 gigawatts, of which 42 gigawatts is renewable.[310] At the end of 2011, the Indian IT industry employed 2.8 million professionals, generated revenues close to US$100 billion equalling 7.5% of Indian GDP, and contributed 26% of India's merchandise exports.[311]
The pharmaceutical industry in India is among the significant emerging markets for the global pharmaceutical industry. The Indian pharmaceutical market is expected to reach $48.5 billion by 2020. India's R & D spending constitutes 60% of the biopharmaceutical industry.[312][313] India is among the top 12 biotech destinations in the world.[314][315] The Indian biotech industry grew by 15.1% in 2012–2013, increasing its revenues from ₹204.4 billion (Indian rupees) to ₹235.24 billion (US$3.94 billion at June 2013 exchange rates).[316]
Despite economic growth during recent decades, India continues to face socio-economic challenges. In 2006, India contained the largest number of people living below the World Bank's international poverty line of US$1.25 per day.[318] The proportion decreased from 60% in 1981 to 42% in 2005.[319] Under the World Bank's later revised poverty line, it was 21% in 2011.[l][321] 30.7% of India's children under the age of five are underweight.[322] According to a Food and Agriculture Organization report in 2015, 15% of the population is undernourished.[323][324] The Mid-Day Meal Scheme attempts to lower these rates.[325]
According to a 2016 Walk Free Foundation report, there were an estimated 18.3 million people in India, or 1.4% of the population, living in forms of modern slavery, such as bonded labour, child labour, human trafficking, and forced begging.[326][327][328] According to the 2011 census, there were 10.1 million child labourers in the country, a decline of 2.6 million from 12.6 million in 2001.[329]
Since 1991, economic inequality between India's states has consistently grown: the per-capita net state domestic product of the richest states in 2007 was 3.2 times that of the poorest.[330] Corruption in India is perceived to have decreased. According to the Corruption Perceptions Index, India ranked 78th out of 180 countries in 2018 with a score of 41 out of 100, an improvement from 85th in 2014.[331][332]
With 1,210,193,422 residents reported in the 2011 provisional census report,[333] India is the world's second-most populous country. Its population grew by 17.64% from 2001 to 2011,[334] compared to 21.54% growth in the previous decade (1991–2001).[334] The human sex ratio, according to the 2011 census, is 940 females per 1,000 males.[333] The median age was 27.6 as of 2016[update].[271] The first post-colonial census, conducted in 1951, counted 361 million people.[335] Medical advances made in the last 50 years as well as increased agricultural productivity brought about by the "Green Revolution" have caused India's population to grow rapidly.[336]
The average life expectancy in India is 68 years: 69.6 years for women and 67.3 years for men.[337] There are around 50 physicians per 100,000 Indians.[338] Migration from rural to urban areas has been an important dynamic in India's recent history. The number of people living in urban areas grew by 31.2% between 1991 and 2001.[339] Yet, in 2001, over 70% still lived in rural areas.[340][341] The level of urbanisation increased further from 27.81% in the 2001 Census to 31.16% in the 2011 Census. The slowing down of the overall population growth rate was due to the sharp decline in the growth rate in rural areas since 1991.[342] According to the 2011 census, there are 53 urban agglomerations in India with populations of more than one million; among them are Mumbai, Delhi, Kolkata, Chennai, Bangalore, Hyderabad and Ahmedabad, in decreasing order by population.[343] The literacy rate in 2011 was 74.04%: 65.46% among females and 82.14% among males.[344] The rural-urban literacy gap, which was 21.2 percentage points in 2001, dropped to 16.1 percentage points in 2011. The improvement in the rural literacy rate is twice that of urban areas.[342] Kerala is the most literate state, with 93.91% literacy, while Bihar is the least literate, with 63.82%.[344]
India is home to two major language families: Indo-Aryan (spoken by about 74% of the population) and Dravidian (spoken by 24% of the population). Other languages spoken in India come from the Austroasiatic and Sino-Tibetan language families. India has no national language.[345] Hindi, with the largest number of speakers, is the official language of the government.[346][347] English is used extensively in business and administration and has the status of a "subsidiary official language";[5] it is important in education, especially as a medium of higher education. Each state and union territory has one or more official languages, and the constitution recognises in particular 22 "scheduled languages".
The 2011 census reported the religion in India with the largest number of followers was Hinduism (79.80% of the population), followed by Islam (14.23%); the remaining were Christianity (2.30%), Sikhism (1.72%), Buddhism (0.70%), Jainism (0.36%) and others[m] (0.9%).[14] India has the world's largest Hindu, Sikh, Jain, Zoroastrian, and Bahá'í populations, and has the third-largest Muslim population—the largest for a non-Muslim majority country.[348][349]
Indian cultural history spans more than 4,500 years.[350] During the Vedic period (c. 1700 – c. 500 BCE), the foundations of Hindu philosophy, mythology, theology and literature were laid, and many beliefs and practices which still exist today, such as dhárma, kárma, yóga, and mokṣa, were established.[62] India is notable for its religious diversity, with Hinduism, Buddhism, Sikhism, Islam, Christianity, and Jainism among the nation's major religions.[351] The predominant religion, Hinduism, has been shaped by various historical schools of thought, including those of the Upanishads,[352] the Yoga Sutras, the Bhakti movement,[351] and by Buddhist philosophy.[353]
Much of Indian architecture, including the Taj Mahal, other works of Mughal architecture, and South Indian architecture, blends ancient local traditions with imported styles.[354] Vernacular architecture is also regional in its flavours. Vastu shastra, literally "science of construction" or "architecture" and ascribed to Mamuni Mayan,[355] explores how the laws of nature affect human dwellings;[356] it employs precise geometry and directional alignments to reflect perceived cosmic constructs.[357] As applied in Hindu temple architecture, it is influenced by the Shilpa Shastras, a series of foundational texts whose basic mythological form is the Vastu-Purusha mandala, a square that embodied the "absolute".[358] The Taj Mahal, built in Agra between 1631 and 1648 by orders of Emperor Shah Jahan in memory of his wife, has been described in the UNESCO World Heritage List as "the jewel of Muslim art in India and one of the universally admired masterpieces of the world's heritage".[359] Indo-Saracenic Revival architecture, developed by the British in the late 19th century, drew on Indo-Islamic architecture.[360]
The earliest literature in India, composed between 1500 BCE and 1200 CE, was in the Sanskrit language.[361] Major works of Sanskrit literature include the Rigveda (c. 1500 BCE – 1200 BCE), the epics: Mahābhārata (c. 400 BCE – 400 CE) and the Ramayana (c. 300 BCE and later); Abhijñānaśākuntalam (The Recognition of Śakuntalā) and other dramas of Kālidāsa (c. 5th century CE), and Mahākāvya poetry.[362][363][364] In Tamil literature, the Sangam literature (c. 600 BCE – 300 BCE), consisting of 2,381 poems composed by 473 poets, is the earliest work.[365][366][367][368] From the 14th to the 18th centuries, India's literary traditions went through a period of drastic change because of the emergence of devotional poets like Kabīr, Tulsīdās, and Guru Nānak. This period was characterised by a varied and wide spectrum of thought and expression; as a consequence, medieval Indian literary works differed significantly from classical traditions.[369] In the 19th century, Indian writers took a new interest in social questions and psychological descriptions. In the 20th century, Indian literature was influenced by the works of the Bengali poet and novelist Rabindranath Tagore,[370] who was a recipient of the Nobel Prize in Literature.
Indian music ranges over various traditions and regional styles. Classical music encompasses two genres and their various folk offshoots: the northern Hindustani and southern Carnatic schools.[371] Regionalised popular forms include filmi and folk music; the syncretic tradition of the bauls is a well-known form of the latter. Indian dance also features diverse folk and classical forms. Among the better-known folk dances are: the bhangra of Punjab, the bihu of Assam, the Jhumair and chhau of Jharkhand, Odisha and West Bengal, garba and dandiya of Gujarat, ghoomar of Rajasthan, and the lavani of Maharashtra. Eight dance forms, many with narrative forms and mythological elements, have been accorded classical dance status by India's National Academy of Music, Dance, and Drama. These are: bharatanatyam of the state of Tamil Nadu, kathak of Uttar Pradesh, kathakali and mohiniyattam of Kerala, kuchipudi of Andhra Pradesh, manipuri of Manipur, odissi of Odisha, and the sattriya of Assam.[372] Theatre in India melds music, dance, and improvised or written dialogue.[373] Often based on Hindu mythology, but also borrowing from medieval romances or social and political events, Indian theatre includes: the bhavai of Gujarat, the jatra of West Bengal, the nautanki and ramlila of North India, tamasha of Maharashtra, burrakatha of Andhra Pradesh, terukkuttu of Tamil Nadu, and the yakshagana of Karnataka.[374] India has a theatre training institute, the National School of Drama (NSD), situated in New Delhi; it is an autonomous organisation under the Ministry of Culture, Government of India.[375]
The Indian film industry produces the world's most-watched cinema.[376] Established regional cinematic traditions exist in the Assamese, Bengali, Bhojpuri, Hindi, Kannada, Malayalam, Punjabi, Gujarati, Marathi, Odia, Tamil, and Telugu languages.[377] The Hindi language film industry (Bollywood) is the largest sector representing 43% of box office revenue, followed by the South Indian Telugu and Tamil film industries which represent 36% combined.[378]
Television broadcasting began in India in 1959 as a state-run medium of communication and expanded slowly for more than two decades.[379][380] The state monopoly on television broadcast ended in the 1990s. Since then, satellite channels have increasingly shaped the popular culture of Indian society.[381] Today, television is the most penetrative medium in India; industry estimates indicate that as of 2012 there were over 554 million TV consumers, 462 million of them with satellite or cable connections, compared with other forms of mass media such as the press (350 million), radio (156 million) or internet (37 million).[382]
Traditional Indian society is sometimes defined by social hierarchy. The Indian caste system embodies much of the social stratification and many of the social restrictions found in the Indian subcontinent. Social classes are defined by thousands of endogamous hereditary groups, often termed jātis, or "castes".[383] India declared untouchability to be illegal[384] in 1947 and has since enacted other anti-discriminatory laws and social welfare initiatives. At the workplace in urban India, and in international or leading Indian companies, caste-related identification has largely lost its importance.[385][386]
Family values are important in the Indian tradition, and multi-generational patriarchal joint families have been the norm in India, though nuclear families are becoming common in urban areas.[387] An overwhelming majority of Indians, with their consent, have their marriages arranged by their parents or other family elders.[388] Marriage is thought to be for life,[388] and the divorce rate is extremely low,[389] with less than one in a thousand marriages ending in divorce.[390] Child marriages are common, especially in rural areas; many women wed before reaching 18, which is their legal marriageable age.[391] Female infanticide in India, and lately female foeticide, have created skewed gender ratios; the number of missing women in the country quadrupled from 15 million to 63 million in the 50-year period ending in 2014, faster than the population growth during the same period, and constituting 20 percent of India's female electorate.[392] According to an Indian government study, an additional 21 million girls are unwanted and do not receive adequate care.[393] Despite a government ban on sex-selective foeticide, the practice remains commonplace in India, the result of a preference for boys in a patriarchal society.[394] The payment of dowry, although illegal, remains widespread across class lines.[395] Deaths resulting from dowry, mostly from bride burning, are on the rise, despite stringent anti-dowry laws.[396]
Many Indian festivals are religious in origin. The best known include: Diwali, Ganesh Chaturthi, Thai Pongal, Holi, Durga Puja, Eid ul-Fitr, Bakr-Id, Christmas, and Vaisakhi.[397][398]
The most widely worn traditional dress in India, for both women and men, from ancient times until the advent of modern times, was draped.[399] For women it eventually took the form of a sari, a single long piece of cloth, famously six yards long, and of width spanning the lower body.[399] The sari is tied around the waist and knotted at one end, wrapped around the lower body, and then over the shoulder.[399] In its more modern form, it has been used to cover the head, and sometimes the face, as a veil.[399] It has been combined with an underskirt, or Indian petticoat, and tucked in the waistband for more secure fastening. It is also commonly worn with an Indian blouse, or choli, which serves as the primary upper-body garment; the sari's end, passing over the shoulder, now serves to obscure the upper body's contours and to cover the midriff.[399]
For men, a similar but shorter length of cloth, the dhoti, has served as a lower-body garment.[400] It too is tied around the waist and wrapped.[400] In south India, it is usually wrapped around the lower body, the upper end tucked in the waistband, the lower end left free. In addition, in northern India, it is also wrapped once around each leg before being brought up through the legs to be tucked in at the back. Other forms of traditional apparel that involve no stitching or tailoring are the chaddar (a shawl worn by both sexes to cover the upper body during colder weather, or a large veil worn by women for framing the head, or covering it) and the pagri (a turban or a scarf worn around the head as a part of a tradition, or to keep off the sun or the cold).[400]
Until the beginning of the first millennium CE, the ordinary dress of people in India was entirely unstitched.[401] The arrival of the Kushans from Central Asia, circa 48 CE, popularised cut-and-sewn garments in the Central Asian style favoured by the elite in northern India.[401] However, it was not until Muslim rule was established, first with the Delhi sultanate and then the Mughal Empire, that the range of stitched clothes in India grew and their use became significantly more widespread.[401] Among the various garments gradually establishing themselves in northern India during medieval and early-modern times and now commonly worn are: the shalwars and pyjamas, both forms of trousers, as well as the tunics kurta and kameez.[401] In southern India, however, the traditional draped garments were to see much longer continuous use.[401]
Shalwars are atypically wide at the waist but narrow to a cuffed bottom. They are held up by a drawstring or elastic belt, which causes them to become pleated around the waist.[402] The pants can be wide and baggy, or they can be cut quite narrow, on the bias, in which case they are called churidars. The kameez is a long shirt or tunic.[403] The side seams are left open below the waist-line,[404] which gives the wearer greater freedom of movement. The kameez is usually cut straight and flat; older kameez use traditional cuts; modern kameez are more likely to have European-inspired set-in sleeves. The kameez may have a European-style collar, a Mandarin collar, or it may be collarless; in the latter case, its design as a women's garment is similar to a kurta.[405] At first worn by Muslim women, the use of shalwar kameez gradually spread, making them a regional style,[406][407] especially in the Punjab region.[408][409]
A kurta, which traces its roots to Central Asian nomadic tunics, has evolved stylistically in India as a garment for everyday wear as well as for formal occasions.[401] It is traditionally made of cotton or silk; it is worn plain or with embroidered decoration, such as chikan; and it can be loose or tight in the torso, typically falling either just above or somewhere below the wearer's knees.[410] The sleeves of a traditional kurta fall to the wrist without narrowing, the ends hemmed but not cuffed; the kurta can be worn by both men and women; it is traditionally collarless, though standing collars are increasingly popular; and it can be worn over ordinary pyjamas, loose shalwars, churidars, or less traditionally over jeans.[410]
In the last 50 years, fashions have changed a great deal in India. Increasingly, in urban settings in northern India, the sari is no longer the apparel of everyday wear, transformed instead into one for formal occasions.[411] The traditional shalwar kameez is rarely worn by younger women, who favour churidars or jeans.[411] The kurtas worn by young men usually fall to the shins and are seldom plain. In white-collar office settings, ubiquitous air conditioning allows men to wear sports jackets year-round.[411] For weddings and formal occasions, men in the middle- and upper classes often wear bandgala, or short Nehru jackets, with pants, with the groom and his groomsmen sporting sherwanis and churidars.[411] The dhoti, the once universal garment of Hindu India, the wearing of which in the homespun and handwoven form of khadi allowed Gandhi to bring Indian nationalism to the millions,[412] is seldom seen in the cities,[411] reduced now, with brocaded border, to the liturgical vestments of Hindu priests.
Indian cuisine consists of a wide variety of regional and traditional cuisines. Given the range of diversity in soil type, climate, culture, ethnic groups, and occupations, these cuisines vary substantially from each other, using locally available spices, herbs, vegetables, and fruit. Indian foodways have been influenced by religion, in particular Hindu cultural choices and traditions.[413] They have also been shaped by Islamic rule, particularly that of the Mughals, by the arrival of the Portuguese on India's southwestern shores, and by British rule. These three influences are reflected, respectively, in the dishes of pilaf and biryani; the vindaloo; and the tiffin and the Railway mutton curry.[414] Earlier, the Columbian exchange had brought the potato, the tomato, maize, peanuts, cashew nuts, pineapples, guavas, and most notably, chilli peppers, to India. Each became a widely used staple.[415] In turn, the spice trade between India and Europe was a catalyst for Europe's Age of Discovery.[416]
The cereals grown in India, their choice, times, and regions of planting, correspond strongly to the timing of India's monsoons, and the variation across regions in their associated rainfall.[417] In general, the broad division of cereal zones in India, as determined by their dependence on rain, was firmly in place before the arrival of artificial irrigation.[417] Rice, which requires a lot of water, has been grown traditionally in regions of high rainfall in the northeast and the western coast, wheat in regions of moderate rainfall, like India's northern plains, and millet in regions of low rainfall, such as on the Deccan Plateau and in Rajasthan.[418][417]
The foundation of a typical Indian meal is a cereal cooked in plain fashion, and complemented with flavourful savoury dishes.[419] The latter includes lentils, pulses and vegetables spiced commonly with ginger and garlic, but also more discerningly with a combination of spices that may include coriander, cumin, turmeric, cinnamon, cardamon and others as informed by culinary conventions.[419] In an actual meal, this mental representation takes the form of a platter, or thali, with a central place for the cooked cereal, peripheral ones, often in small bowls, for the flavourful accompaniments, and the simultaneous, rather than piecemeal, ingestion of the two in each act of eating, whether by actual mixing—for example of rice and lentils—or in the folding of one—such as bread—around the other, such as cooked vegetables.[419]
A notable feature of Indian food is the existence of a number of distinctive vegetarian cuisines, each a feature of the geographical and cultural histories of its adherents.[420] The appearance of ahimsa, or the avoidance of violence toward all forms of life in many religious orders early in Indian history, especially Upanishadic Hinduism, Buddhism and Jainism, is thought to have been a notable factor in the prevalence of vegetarianism among a segment of India's Hindu population, especially in southern India, Gujarat, and the Hindi-speaking belt of north-central India, as well as among Jains.[420] Among these groups, strong discomfort is felt at the thought of eating meat,[421] and this contributes to the low proportion of meat in the overall Indian diet.[421] Unlike China, which has increased its per capita meat consumption substantially in its years of increased economic growth, in India the strong dietary traditions have contributed to dairy, rather than meat, becoming the preferred form of animal protein consumption accompanying higher economic growth.[422]
In the last millennium, the most significant import of cooking techniques into India occurred during the Mughal Empire. The cultivation of rice had spread much earlier from India to Central and West Asia; however, it was during Mughal rule that dishes, such as the pilaf,[418] developed in the interim during the Abbasid caliphate,[423] and cooking techniques such as the marinating of meat in yogurt, spread into northern India from regions to its northwest.[424] To the simple yogurt marinade of Persia, onions, garlic, almonds, and spices began to be added in India.[424] Rice grown to the southwest of the Mughal capital, Agra, which had become famous in the Islamic world for its fine grain, was partially cooked and layered alternately with the sauteed meat, the pot sealed tightly, and slow cooked according to another Persian cooking technique, to produce what has today become the Indian biryani,[424] a feature of festive dining in many parts of India.[425]
In food served in restaurants in urban north India, and internationally, the diversity of Indian food has been partially concealed by the dominance of Punjabi cuisine. This was caused in large part by an entrepreneurial response among people from the Punjab region who had been displaced by the 1947 partition of India, and had arrived in India as refugees.[420] The identification of Indian cuisine with the tandoori chicken—cooked in the tandoor oven, which had traditionally been used for baking bread in the rural Punjab and the Delhi region, especially among Muslims, but which is originally from Central Asia—dates to this period.[420]
In India, several traditional indigenous sports remain fairly popular, such as kabaddi, kho kho, pehlwani and gilli-danda. Some of the earliest forms of Asian martial arts, such as kalarippayattu, musti yuddha, silambam, and marma adi, originated in India. Chess, commonly held to have originated in India as chaturaṅga, is regaining widespread popularity with the rise in the number of Indian grandmasters.[426][427] Pachisi, from which parcheesi derives, was played on a giant marble court by Akbar.[428]
The improved results garnered by the Indian Davis Cup team and other Indian tennis players in the early 2010s have made tennis increasingly popular in the country.[429] India has a comparatively strong presence in shooting sports, and has won several medals at the Olympics, the World Shooting Championships, and the Commonwealth Games.[430][431] Other sports in which Indians have succeeded internationally include badminton[432] (Saina Nehwal and P V Sindhu are two of the top-ranked female badminton players in the world), boxing,[433] and wrestling.[434] Football is popular in West Bengal, Goa, Tamil Nadu, Kerala, and the north-eastern states.[435]
Cricket is the most popular sport in India.[437] Major domestic competitions include the Indian Premier League, which is the most-watched cricket league in the world and ranks sixth among all sports leagues.[438]
India has hosted or co-hosted several international sporting events: the 1951 and 1982 Asian Games; the 1987, 1996, and 2011 Cricket World Cup tournaments; the 2003 Afro-Asian Games; the 2006 ICC Champions Trophy; the 2010 Hockey World Cup; the 2010 Commonwealth Games; and the 2017 FIFA U-17 World Cup. Major international sporting events held annually in India include the Chennai Open, the Mumbai Marathon, the Delhi Half Marathon, and the Indian Masters. The first Formula 1 Indian Grand Prix was held in late 2011, but the race has been absent from the F1 season calendar since 2014.[439] India has traditionally been the dominant country at the South Asian Games. An example of this dominance is the basketball competition, in which the Indian team has won three of the four tournaments held to date.[440]
Overview
Etymology
History
Geography
Biodiversity
Politics
Foreign relations and military
Economy
Demographics
Culture
Government
General information
Coordinates: 21°N 78°E
en/2717.html.txt
ADDED
@@ -0,0 +1,216 @@
India, officially the Republic of India (Hindi: Bhārat Gaṇarājya),[23] is a country in South Asia. It is the second-most populous country, the seventh-largest country by area, and the most populous democracy in the world. Bounded by the Indian Ocean on the south, the Arabian Sea on the southwest, and the Bay of Bengal on the southeast, it shares land borders with Pakistan to the west;[f] China, Nepal, and Bhutan to the north; and Bangladesh and Myanmar to the east. In the Indian Ocean, India is in the vicinity of Sri Lanka and the Maldives; its Andaman and Nicobar Islands share a maritime border with Thailand and Indonesia.
Modern humans arrived on the Indian subcontinent from Africa no later than 55,000 years ago.[24]
Their long occupation, initially in varying forms of isolation as hunter-gatherers, has made the region highly diverse, second only to Africa in human genetic diversity.[25] Settled life emerged on the subcontinent in the western margins of the Indus river basin 9,000 years ago, evolving gradually into the Indus Valley Civilisation of the third millennium BCE.[26]
By 1200 BCE, an archaic form of Sanskrit, an Indo-European language, had diffused into India from the northwest, unfolding as the language of the Rigveda, and recording the dawning of Hinduism in India.[27]
The Dravidian languages of India were supplanted in the northern regions.[28]
By 400 BCE, stratification and exclusion by caste had emerged within Hinduism,[29]
and Buddhism and Jainism had arisen, proclaiming social orders unlinked to heredity.[30]
Early political consolidations gave rise to the loose-knit Maurya and Gupta Empires based in the Ganges Basin.[31]
Their collective era was suffused with wide-ranging creativity,[32] but also marked by the declining status of women,[33] and the incorporation of untouchability into an organised system of belief.[g][34] In South India, the Middle kingdoms exported Dravidian-language scripts and religious cultures to the kingdoms of Southeast Asia.[35]
In the early medieval era, Christianity, Islam, Judaism, and Zoroastrianism put down roots on India's southern and western coasts.[36]
Muslim armies from Central Asia intermittently overran India's northern plains,[37]
eventually establishing the Delhi Sultanate, and drawing northern India into the cosmopolitan networks of medieval Islam.[38]
In the 15th century, the Vijayanagara Empire created a long-lasting composite Hindu culture in south India.[39]
In the Punjab, Sikhism emerged, rejecting institutionalised religion.[40]
The Mughal Empire, in 1526, ushered in two centuries of relative peace,[41]
leaving a legacy of luminous architecture.[h][42]
Gradually expanding rule of the British East India Company followed, turning India into a colonial economy, but also consolidating its sovereignty.[43] British Crown rule began in 1858. The rights promised to Indians were granted slowly,[44] but technological changes were introduced, and ideas of education, modernity and the public life took root.[45]
A pioneering and influential nationalist movement emerged, which was noted for nonviolent resistance and became the major factor in ending British rule.[46] In 1947 the British Indian Empire was partitioned into two independent dominions, a Hindu-majority Dominion of India and a Muslim-majority Dominion of Pakistan, amid large-scale loss of life and an unprecedented migration.[47][48]
India has been a secular federal republic since 1950, governed in a democratic parliamentary system. It is a pluralistic, multilingual and multi-ethnic society. India's population grew from 361 million in 1951 to 1,211 million in 2011.[49]
During the same time, its nominal per capita income increased from US$64 annually to US$1,498, and its literacy rate from 16.6% to 74%. From being a comparatively destitute country in 1951,[50]
India has become a fast-growing major economy, a hub for information technology services, with an expanding middle class.[51] It has a space programme which includes several planned or completed extraterrestrial missions. Indian movies, music, and spiritual teachings play an increasing role in global culture.[52]
India has substantially reduced its rate of poverty, though at the cost of increasing economic inequality.[53]
India is a nuclear weapons state, which ranks high in military expenditure. It has disputes over Kashmir with its neighbours, Pakistan and China, unresolved since the mid-20th century.[54]
Among the socio-economic challenges India faces are gender inequality, child malnutrition,[55]
and rising levels of air pollution.[56]
India's land is megadiverse, with four biodiversity hotspots.[57] Its forest cover comprises 21.4% of its area.[58] India's wildlife, which has traditionally been viewed with tolerance in India's culture,[59] is supported among these forests, and elsewhere, in protected habitats.
According to the Oxford English Dictionary (Third Edition 2009), the name "India" is derived from the Classical Latin India, a reference to South Asia and an uncertain region to its east; and in turn derived successively from: Hellenistic Greek India ( Ἰνδία); ancient Greek Indos ( Ἰνδός); Old Persian Hindush, an eastern province of the Achaemenid empire; and ultimately its cognate, the Sanskrit Sindhu, or "river," specifically the Indus river and, by implication, its well-settled southern basin.[60][61] The ancient Greeks referred to the Indians as Indoi (Ἰνδοί), which translates as "The people of the Indus".[62]
The term Bharat (Bhārat; pronounced [ˈbʱaːɾət] (listen)), mentioned in both Indian epic poetry and the Constitution of India,[63][64] is used in its variations by many Indian languages. A modern rendering of the historical name Bharatavarsha, which applied originally to a region of the Gangetic Valley,[65][66] Bharat gained increased currency from the mid-19th century as a native name for India.[63][67]
Hindustan ([ɦɪndʊˈstaːn] (listen)) is a Middle Persian name for India, introduced during the Mughal Empire and used widely since. Its meaning has varied, referring to a region encompassing present-day northern India and Pakistan or to India in its near entirety.[63][67][68]
By 55,000 years ago, the first modern humans, or Homo sapiens, had arrived on the Indian subcontinent from Africa, where they had earlier evolved.[69][70][71]
The earliest known modern human remains in South Asia date to about 30,000 years ago.[72] After 6500 BCE, evidence for domestication of food crops and animals, construction of permanent structures, and storage of agricultural surplus appeared in Mehrgarh and other sites in what is now Balochistan.[73] These gradually developed into the Indus Valley Civilisation,[74][73] the first urban culture in South Asia,[75] which flourished during 2500–1900 BCE in what is now Pakistan and western India.[76] Centred around cities such as Mohenjo-daro, Harappa, Dholavira, and Kalibangan, and relying on varied forms of subsistence, the civilisation engaged robustly in crafts production and wide-ranging trade.[75]
During the period 2000–500 BCE, many regions of the subcontinent transitioned from the Chalcolithic cultures to the Iron Age ones.[77] The Vedas, the oldest scriptures associated with Hinduism,[78] were composed during this period,[79] and historians have analysed these to posit a Vedic culture in the Punjab region and the upper Gangetic Plain.[77] Most historians also consider this period to have encompassed several waves of Indo-Aryan migration into the subcontinent from the north-west.[78] The caste system, which created a hierarchy of priests, warriors, and free peasants, but which excluded indigenous peoples by labelling their occupations impure, arose during this period.[80] On the Deccan Plateau, archaeological evidence from this period suggests the existence of a chiefdom stage of political organisation.[77] In South India, a progression to sedentary life is indicated by the large number of megalithic monuments dating from this period,[81] as well as by nearby traces of agriculture, irrigation tanks, and craft traditions.[81]
In the late Vedic period, around the 6th century BCE, the small states and chiefdoms of the Ganges Plain and the north-western regions had consolidated into 16 major oligarchies and monarchies that were known as the mahajanapadas.[82][83] The emerging urbanisation gave rise to non-Vedic religious movements, two of which became independent religions. Jainism came into prominence during the life of its exemplar, Mahavira.[84] Buddhism, based on the teachings of Gautama Buddha, attracted followers from all social classes excepting the middle class; chronicling the life of the Buddha was central to the beginnings of recorded history in India.[85][86][87] In an age of increasing urban wealth, both religions held up renunciation as an ideal,[88] and both established long-lasting monastic traditions. Politically, by the 3rd century BCE, the kingdom of Magadha had annexed or reduced other states to emerge as the Mauryan Empire.[89] The empire was once thought to have controlled most of the subcontinent except the far south, but its core regions are now thought to have been separated by large autonomous areas.[90][91] The Mauryan kings are known as much for their empire-building and determined management of public life as for Ashoka's renunciation of militarism and far-flung advocacy of the Buddhist dhamma.[92][93]
The Sangam literature of the Tamil language reveals that, between 200 BCE and 200 CE, the southern peninsula was ruled by the Cheras, the Cholas, and the Pandyas, dynasties that traded extensively with the Roman Empire and with West and South-East Asia.[94][95] In North India, Hinduism asserted patriarchal control within the family, leading to increased subordination of women.[96][89] By the 4th and 5th centuries, the Gupta Empire had created a complex system of administration and taxation in the greater Ganges Plain; this system became a model for later Indian kingdoms.[97][98] Under the Guptas, a renewed Hinduism based on devotion, rather than the management of ritual, began to assert itself.[99] This renewal was reflected in a flowering of sculpture and architecture, which found patrons among an urban elite.[98] Classical Sanskrit literature flowered as well, and Indian science, astronomy, medicine, and mathematics made significant advances.[98]
The Indian early medieval age, 600 CE to 1200 CE, is defined by regional kingdoms and cultural diversity.[100] When Harsha of Kannauj, who ruled much of the Indo-Gangetic Plain from 606 to 647 CE, attempted to expand southwards, he was defeated by the Chalukya ruler of the Deccan.[101] When his successor attempted to expand eastwards, he was defeated by the Pala king of Bengal.[101] When the Chalukyas attempted to expand southwards, they were defeated by the Pallavas from farther south, who in turn were opposed by the Pandyas and the Cholas from still farther south.[101] No ruler of this period was able to create an empire and consistently control lands much beyond his core region.[100] During this time, pastoral peoples, whose land had been cleared to make way for the growing agricultural economy, were accommodated within caste society, as were new non-traditional ruling classes.[102] The caste system consequently began to show regional differences.[102]
In the 6th and 7th centuries, the first devotional hymns were created in the Tamil language.[103] They were imitated all over India and led to both the resurgence of Hinduism and the development of all modern languages of the subcontinent.[103] Indian royalty, big and small, and the temples they patronised drew citizens in great numbers to the capital cities, which became economic hubs as well.[104] Temple towns of various sizes began to appear everywhere as India underwent another urbanisation.[104] By the 8th and 9th centuries, the effects were felt in South-East Asia, as South Indian culture and political systems were exported to lands that became part of modern-day Myanmar, Thailand, Laos, Cambodia, Vietnam, Philippines, Malaysia, and Java.[105] Indian merchants, scholars, and sometimes armies were involved in this transmission; South-East Asians took the initiative as well, with many sojourning in Indian seminaries and translating Buddhist and Hindu texts into their languages.[105]
After the 10th century, Muslim Central Asian nomadic clans, using swift-horse cavalry and raising vast armies united by ethnicity and religion, repeatedly overran South Asia's north-western plains, leading eventually to the establishment of the Islamic Delhi Sultanate in 1206.[106] The sultanate was to control much of North India and to make many forays into South India. Although at first disruptive for the Indian elites, the sultanate largely left its vast non-Muslim subject population to its own laws and customs.[107][108] By repeatedly repulsing Mongol raiders in the 13th century, the sultanate saved India from the devastation visited on West and Central Asia, setting the scene for centuries of migration of fleeing soldiers, learned men, mystics, traders, artists, and artisans from that region into the subcontinent, thereby creating a syncretic Indo-Islamic culture in the north.[109][110] The sultanate's raiding and weakening of the regional kingdoms of South India paved the way for the indigenous Vijayanagara Empire.[111] Embracing a strong Shaivite tradition and building upon the military technology of the sultanate, the empire came to control much of peninsular India,[112] and was to influence South Indian society for long afterwards.[111]
In the early 16th century, northern India, then under mainly Muslim rulers,[113] fell again to the superior mobility and firepower of a new generation of Central Asian warriors.[114] The resulting Mughal Empire did not stamp out the local societies it came to rule. Instead, it balanced and pacified them through new administrative practices[115][116] and diverse and inclusive ruling elites,[117] leading to more systematic, centralised, and uniform rule.[118] Eschewing tribal bonds and Islamic identity, especially under Akbar, the Mughals united their far-flung realms through loyalty, expressed through a Persianised culture, to an emperor who had near-divine status.[117] The Mughal state's economic policies, deriving most revenues from agriculture[119] and mandating that taxes be paid in the well-regulated silver currency,[120] caused peasants and artisans to enter larger markets.[118] The relative peace maintained by the empire during much of the 17th century was a factor in India's economic expansion,[118] resulting in greater patronage of painting, literary forms, textiles, and architecture.[121] Newly coherent social groups in northern and western India, such as the Marathas, the Rajputs, and the Sikhs, gained military and governing ambitions during Mughal rule, which, through collaboration or adversity, gave them both recognition and military experience.[122] Expanding commerce during Mughal rule gave rise to new Indian commercial and political elites along the coasts of southern and eastern India.[122] As the empire disintegrated, many among these elites were able to seek and control their own affairs.[123]
By the early 18th century, with the lines between commercial and political dominance being increasingly blurred, a number of European trading companies, including the English East India Company, had established coastal outposts.[124][125] The East India Company's control of the seas, greater resources, and more advanced military training and technology led it to increasingly flex its military muscle and caused it to become attractive to a portion of the Indian elite; these factors were crucial in allowing the company to gain control over the Bengal region by 1765 and sideline the other European companies.[126][124][127][128] Its further access to the riches of Bengal and the subsequent increased strength and size of its army enabled it to annexe or subdue most of India by the 1820s.[129] India was then no longer exporting manufactured goods as it long had, but was instead supplying the British Empire with raw materials. Many historians consider this to be the onset of India's colonial period.[124] By this time, with its economic power severely curtailed by the British parliament and having effectively been made an arm of British administration, the company began more consciously to enter non-economic arenas like education, social reform, and culture.[130]
Historians consider India's modern age to have begun sometime between 1848 and 1885. The appointment in 1848 of Lord Dalhousie as Governor General of the East India Company set the stage for changes essential to a modern state. These included the consolidation and demarcation of sovereignty, the surveillance of the population, and the education of citizens. Technological changes—among them, railways, canals, and the telegraph—were introduced not long after their introduction in Europe.[131][132][133][134] However, disaffection with the company also grew during this time and set off the Indian Rebellion of 1857. Fed by diverse resentments and perceptions, including invasive British-style social reforms, harsh land taxes, and summary treatment of some rich landowners and princes, the rebellion rocked many regions of northern and central India and shook the foundations of Company rule.[135][136] Although the rebellion was suppressed by 1858, it led to the dissolution of the East India Company and the direct administration of India by the British government. Proclaiming a unitary state and a gradual but limited British-style parliamentary system, the new rulers also protected princes and landed gentry as a feudal safeguard against future unrest.[137][138] In the decades following, public life gradually emerged all over India, leading eventually to the founding of the Indian National Congress in 1885.[139][140][141][142]
The rush of technology and the commercialisation of agriculture in the second half of the 19th century was marked by economic setbacks; many small farmers became dependent on the whims of far-away markets.[143] There was an increase in the number of large-scale famines,[144] and, despite the risks of infrastructure development borne by Indian taxpayers, little industrial employment was generated for Indians.[145] There were also salutary effects: commercial cropping, especially in the newly canalled Punjab, led to increased food production for internal consumption.[146] The railway network provided critical famine relief,[147] notably reduced the cost of moving goods,[147] and helped nascent Indian-owned industry.[146]
After World War I, in which approximately one million Indians served,[148] a new period began. It was marked by British reforms but also repressive legislation, by more strident Indian calls for self-rule, and by the beginnings of a nonviolent movement of non-co-operation, of which Mohandas Karamchand Gandhi would become the leader and enduring symbol.[149] During the 1930s, slow legislative reform was enacted by the British; the Indian National Congress won victories in the resulting elections.[150] The next decade was beset with crises: Indian participation in World War II, the Congress's final push for non-co-operation, and an upsurge of Muslim nationalism. All were capped by the advent of independence in 1947, but tempered by the partition of India into two states: India and Pakistan.[151]
Vital to India's self-image as an independent nation was its constitution, completed in 1950, which put in place a secular and democratic republic.[152] It has remained a democracy with civil liberties, an active Supreme Court, and a largely independent press.[153] Economic liberalisation, which began in the 1990s, has created a large urban middle class, transformed India into one of the world's fastest-growing economies,[154] and increased its geopolitical clout. Indian movies, music, and spiritual teachings play an increasing role in global culture.[153] Yet, India is also shaped by seemingly unyielding poverty, both rural and urban;[153] by religious and caste-related violence;[155] by Maoist-inspired Naxalite insurgencies;[156] and by separatism in Jammu and Kashmir and in Northeast India.[157] It has unresolved territorial disputes with China[158] and with Pakistan.[158] India's sustained democratic freedoms are unique among the world's newer nations; however, in spite of its recent economic successes, freedom from want for its disadvantaged population remains a goal yet to be achieved.[159]
India accounts for the bulk of the Indian subcontinent, lying atop the Indian tectonic plate, a part of the Indo-Australian Plate.[160] India's defining geological processes began 75 million years ago when the Indian Plate, then part of the southern supercontinent Gondwana, began a north-eastward drift caused by seafloor spreading to its south-west, and later, south and south-east.[160] Simultaneously, the vast Tethyan oceanic crust, to its northeast, began to subduct under the Eurasian Plate.[160] These dual processes, driven by convection in the Earth's mantle, both created the Indian Ocean and caused the Indian continental crust eventually to under-thrust Eurasia and to uplift the Himalayas.[160] Immediately south of the emerging Himalayas, plate movement created a vast trough that rapidly filled with river-borne sediment[161] and now constitutes the Indo-Gangetic Plain.[162] Cut off from the plain by the ancient Aravalli Range lies the Thar Desert.[163]
The original Indian Plate survives as peninsular India, the oldest and geologically most stable part of India. It extends as far north as the Satpura and Vindhya ranges in central India. These parallel chains run from the Arabian Sea coast in Gujarat in the west to the coal-rich Chota Nagpur Plateau in Jharkhand in the east.[164] To the south, the remaining peninsular landmass, the Deccan Plateau, is flanked on the west and east by coastal ranges known as the Western and Eastern Ghats;[165] the plateau contains the country's oldest rock formations, some over one billion years old. Constituted in such fashion, India lies to the north of the equator between 6° 44′ and 35° 30′ north latitude[i] and 68° 7′ and 97° 25′ east longitude.[166]
India's coastline measures 7,517 kilometres (4,700 mi) in length; of this distance, 5,423 kilometres (3,400 mi) belong to peninsular India and 2,094 kilometres (1,300 mi) to the Andaman, Nicobar, and Lakshadweep island chains.[167] According to the Indian naval hydrographic charts, the mainland coastline consists of the following: 43% sandy beaches; 11% rocky shores, including cliffs; and 46% mudflats or marshy shores.[167]
Major Himalayan-origin rivers that substantially flow through India include the Ganges and the Brahmaputra, both of which drain into the Bay of Bengal.[169] Important tributaries of the Ganges include the Yamuna and the Kosi; the latter's extremely low gradient, caused by long-term silt deposition, leads to severe floods and course changes.[170][171] Major peninsular rivers, whose steeper gradients prevent their waters from flooding, include the Godavari, the Mahanadi, the Kaveri, and the Krishna, which also drain into the Bay of Bengal;[172] and the Narmada and the Tapti, which drain into the Arabian Sea.[173] Coastal features include the marshy Rann of Kutch of western India and the alluvial Sundarbans delta of eastern India; the latter is shared with Bangladesh.[174] India has two archipelagos: the Lakshadweep, coral atolls off India's south-western coast; and the Andaman and Nicobar Islands, a volcanic chain in the Andaman Sea.[175]
The Indian climate is strongly influenced by the Himalayas and the Thar Desert, both of which drive the economically and culturally pivotal summer and winter monsoons.[176] The Himalayas prevent cold Central Asian katabatic winds from blowing in, keeping the bulk of the Indian subcontinent warmer than most locations at similar latitudes.[177][178] The Thar Desert plays a crucial role in attracting the moisture-laden south-west summer monsoon winds that, between June and October, provide the majority of India's rainfall.[176] Four major climatic groupings predominate in India: tropical wet, tropical dry, subtropical humid, and montane.[179]
India is a megadiverse country, a term employed for 17 countries which display high biological diversity and contain many species exclusively indigenous, or endemic, to them.[181] India is a habitat for 8.6% of all mammal species, 13.7% of bird species, 7.9% of reptile species, 6% of amphibian species, 12.2% of fish species, and 6.0% of all flowering plant species.[182][183] Fully a third of Indian plant species are endemic.[184] India also contains four of the world's 34 biodiversity hotspots,[57] or regions that display significant habitat loss in the presence of high endemism.[j][185]
India's forest cover is 701,673 km2 (270,917 sq mi), which is 21.35% of the country's total land area. It can be subdivided further into broad categories of canopy density, or the proportion of the area of a forest covered by its tree canopy.[186] Very dense forest, whose canopy density is greater than 70%, occupies 2.61% of India's land area.[186] It predominates in the tropical moist forest of the Andaman Islands, the Western Ghats, and Northeast India.[187] Moderately dense forest, whose canopy density is between 40% and 70%, occupies 9.59% of India's land area.[186] It predominates in the temperate coniferous forest of the Himalayas, the moist deciduous sal forest of eastern India, and the dry deciduous teak forest of central and southern India.[187] Open forest, whose canopy density is between 10% and 40%, occupies 9.14% of India's land area,[186] and predominates in the babul-dominated thorn forest of the central Deccan Plateau and the western Gangetic plain.[187]
Among the Indian subcontinent's notable indigenous trees are the astringent Azadirachta indica, or neem, which is widely used in rural Indian herbal medicine,[188] and the luxuriant Ficus religiosa, or peepul,[189] which is displayed on the ancient seals of Mohenjo-daro,[190] and under which the Buddha is recorded in the Pali canon to have sought enlightenment.[191]
Many Indian species have descended from those of Gondwana, the southern supercontinent from which India separated more than 100 million years ago.[192] India's subsequent collision with Eurasia set off a mass exchange of species. However, volcanism and climatic changes later caused the extinction of many endemic Indian forms.[193] Still later, mammals entered India from Asia through two zoogeographical passes flanking the Himalayas.[187] This had the effect of lowering endemism among India's mammals, which stands at 12.6%, contrasting with 45.8% among reptiles and 55.8% among amphibians.[183] Notable endemics are the vulnerable[194] hooded leaf monkey[195] and the threatened[196] Beddome's toad[196][197] of the Western Ghats.
India contains 172 IUCN-designated threatened animal species, or 2.9% of endangered forms.[198] These include the endangered Bengal tiger and the Ganges river dolphin. Critically endangered species include: the gharial, a crocodilian; the great Indian bustard; and the Indian white-rumped vulture, which has become nearly extinct by having ingested the carrion of diclofenac-treated cattle.[199] The pervasive and ecologically devastating human encroachment of recent decades has critically endangered Indian wildlife. In response, the system of national parks and protected areas, first established in 1935, was expanded substantially. In 1972, India enacted the Wildlife Protection Act[200] and Project Tiger to safeguard crucial wilderness; the Forest Conservation Act was enacted in 1980 and amendments added in 1988.[201] India hosts more than five hundred wildlife sanctuaries and thirteen biosphere reserves,[202] four of which are part of the World Network of Biosphere Reserves; twenty-five wetlands are registered under the Ramsar Convention.[203]
India is the world's most populous democracy.[205] A parliamentary republic with a multi-party system,[206] it has eight recognised national parties, including the Indian National Congress and the Bharatiya Janata Party (BJP), and more than 40 regional parties.[207] The Congress is considered centre-left in Indian political culture,[208] and the BJP right-wing.[209][210][211] For most of the period between 1950—when India first became a republic—and the late 1980s, the Congress held a majority in the parliament. Since then, however, it has increasingly shared the political stage with the BJP,[212] as well as with powerful regional parties which have often forced the creation of multi-party coalition governments at the centre.[213]
In the Republic of India's first three general elections, in 1951, 1957, and 1962, the Jawaharlal Nehru-led Congress won easy victories. On Nehru's death in 1964, Lal Bahadur Shastri briefly became prime minister; he was succeeded, after his own unexpected death in 1966, by Nehru's daughter Indira Gandhi, who went on to lead the Congress to election victories in 1967 and 1971. Following public discontent with the state of emergency she declared in 1975, the Congress was voted out of power in 1977; the then-new Janata Party, which had opposed the emergency, was voted in. Its government lasted just over two years. Voted back into power in 1980, the Congress saw a change in leadership in 1984, when Indira Gandhi was assassinated; she was succeeded by her son Rajiv Gandhi, who won an easy victory in the general elections later that year. The Congress was voted out again in 1989 when a National Front coalition, led by the newly formed Janata Dal in alliance with the Left Front, won the elections; that government too proved relatively short-lived, lasting just under two years.[214] Elections were held again in 1991; no party won an absolute majority. The Congress, as the largest single party, was able to form a minority government led by P. V. Narasimha Rao.[215]
A two-year period of political turmoil followed the general election of 1996. Several short-lived alliances shared power at the centre. The BJP formed a government briefly in 1996; it was followed by two comparatively long-lasting United Front coalitions, which depended on external support. In 1998, the BJP was able to form a successful coalition, the National Democratic Alliance (NDA). Led by Atal Bihari Vajpayee, the NDA became the first non-Congress coalition government to complete a five-year term.[216] Again in the 2004 Indian general elections, no party won an absolute majority, but the Congress emerged as the largest single party, forming another successful coalition: the United Progressive Alliance (UPA). It had the support of left-leaning parties and MPs who opposed the BJP. The UPA returned to power in the 2009 general election with increased numbers, and it no longer required external support from India's communist parties.[217] That year, Manmohan Singh became the first prime minister since Jawaharlal Nehru in 1957 and 1962 to be re-elected to a consecutive five-year term.[218] In the 2014 general election, the BJP became the first political party since 1984 to win a majority and govern without the support of other parties.[219] The incumbent prime minister is Narendra Modi, a former chief minister of Gujarat. On 20 July 2017, Ram Nath Kovind was elected India's 14th president and took the oath of office on 25 July 2017.[220][221][222]
India is a federation with a parliamentary system governed under the Constitution of India—the country's supreme legal document. It is a constitutional republic and representative democracy, in which "majority rule is tempered by minority rights protected by law". Federalism in India defines the power distribution between the union and the states. The Constitution of India, which came into effect on 26 January 1950,[224] originally stated India to be a "sovereign, democratic republic;" this characterisation was amended in 1971 to "a sovereign, socialist, secular, democratic republic".[225] India's form of government, traditionally described as "quasi-federal" with a strong centre and weak states,[226] has grown increasingly federal since the late 1990s as a result of political, economic, and social changes.[227][228]
The Government of India comprises three branches: the executive, the legislature, and the judiciary.[230]
India is a federal union comprising 28 states and 8 union territories.[245] All states, as well as the union territories of Jammu and Kashmir, Puducherry and the National Capital Territory of Delhi, have elected legislatures and governments following the Westminster system of governance. The remaining five union territories are directly ruled by the central government through appointed administrators. In 1956, under the States Reorganisation Act, states were reorganised on a linguistic basis.[246] There are over a quarter of a million local government bodies at city, town, block, district and village levels.[247]
In the 1950s, India strongly supported decolonisation in Africa and Asia and played a leading role in the Non-Aligned Movement.[249] After initially cordial relations with neighbouring China, India went to war with China in 1962, and was widely thought to have been humiliated. India has had tense relations with neighbouring Pakistan; the two nations have gone to war four times: in 1947, 1965, 1971, and 1999. Three of these wars were fought over the disputed territory of Kashmir, while the fourth, the 1971 war, followed from India's support for the independence of Bangladesh.[250] In the late 1980s, the Indian military twice intervened abroad at the invitation of the host country: a peace-keeping operation in Sri Lanka between 1987 and 1990; and an armed intervention to prevent a 1988 coup d'état attempt in the Maldives. After the 1965 war with Pakistan, India began to pursue close military and economic ties with the Soviet Union; by the late 1960s, the Soviet Union was its largest arms supplier.[251]
Aside from its ongoing special relationship with Russia,[252] India has wide-ranging defence relations with Israel and France. In recent years, it has played key roles in the South Asian Association for Regional Cooperation and the World Trade Organization. The nation has provided 100,000 military and police personnel to serve in 35 UN peacekeeping operations across four continents. It participates in the East Asia Summit, the G8+5, and other multilateral forums.[253] India has close economic ties with South America,[254] Asia, and Africa; it pursues a "Look East" policy that seeks to strengthen partnerships with the ASEAN nations, Japan, and South Korea that revolve around many issues, but especially those involving economic investment and regional security.[255][256]
China's nuclear test of 1964, as well as its repeated threats to intervene in support of Pakistan in the 1965 war, convinced India to develop nuclear weapons.[258] India conducted its first nuclear weapons test in 1974 and carried out additional underground testing in 1998. Despite criticism and military sanctions, India has signed neither the Comprehensive Nuclear-Test-Ban Treaty nor the Nuclear Non-Proliferation Treaty, considering both to be flawed and discriminatory.[259] India maintains a "no first use" nuclear policy and is developing a nuclear triad capability as a part of its "Minimum Credible Deterrence" doctrine.[260][261] It is developing a ballistic missile defence shield and a fifth-generation fighter jet.[262][263] Other indigenous military projects involve the design and implementation of Vikrant-class aircraft carriers and Arihant-class nuclear submarines.[264]
Since the end of the Cold War, India has increased its economic, strategic, and military co-operation with the United States and the European Union.[265] In 2008, a civilian nuclear agreement was signed between India and the United States. Although India possessed nuclear weapons at the time and was not a party to the Nuclear Non-Proliferation Treaty, it received waivers from the International Atomic Energy Agency and the Nuclear Suppliers Group, ending earlier restrictions on India's nuclear technology and commerce. As a consequence, India became the sixth de facto nuclear weapons state.[266] India subsequently signed co-operation agreements involving civilian nuclear energy with Russia,[267] France,[268] the United Kingdom,[269] and Canada.[270]
The President of India is the supreme commander of the nation's armed forces; with 1.395 million active troops, they compose the world's second-largest military. It comprises the Indian Army, the Indian Navy, the Indian Air Force, and the Indian Coast Guard.[271] The official Indian defence budget for 2011 was US$36.03 billion, or 1.83% of GDP.[272] For the fiscal year spanning 2012–2013, US$40.44 billion was budgeted.[273] According to a 2008 Stockholm International Peace Research Institute (SIPRI) report, India's annual military expenditure in terms of purchasing power stood at US$72.7 billion.[274] In 2011, the annual defence budget increased by 11.6%,[275] although this does not include funds that reach the military through other branches of government.[276] As of 2012[update], India is the world's largest arms importer; between 2007 and 2011, it accounted for 10% of funds spent on international arms purchases.[277] Much of the military expenditure was focused on defence against Pakistan and countering growing Chinese influence in the Indian Ocean.[275] In May 2017, the Indian Space Research Organisation launched the South Asia Satellite, a gift from India to its neighbouring SAARC countries.[278] In October 2018, India signed a US$5.43 billion (over ₹400 billion) agreement with Russia to procure four S-400 Triumf surface-to-air missile defence systems, Russia's most advanced long-range missile defence system.[279]
According to the International Monetary Fund (IMF), the Indian economy in 2019 was nominally worth $2.9 trillion; it is the fifth-largest economy by market exchange rates, and is around $11 trillion, the third-largest by purchasing power parity, or PPP.[19] With its average annual GDP growth rate of 5.8% over the past two decades, and reaching 6.1% during 2011–2012,[283] India is one of the world's fastest-growing economies.[284] However, the country ranks 139th in the world in nominal GDP per capita and 118th in GDP per capita at PPP.[285] Until 1991, all Indian governments followed protectionist policies that were influenced by socialist economics. Widespread state intervention and regulation largely walled the economy off from the outside world. An acute balance of payments crisis in 1991 forced the nation to liberalise its economy;[286] since then it has moved slowly towards a free-market system[287][288] by emphasising both foreign trade and direct investment inflows.[289] India has been a member of WTO since 1 January 1995.[290]
The 513.7-million-worker Indian labour force is the world's second-largest, as of 2016[update].[271] The service sector makes up 55.6% of GDP, the industrial sector 26.3% and the agricultural sector 18.1%. India's foreign exchange remittances of US$70 billion in 2014, the largest in the world, were contributed to its economy by 25 million Indians working in foreign countries.[291] Major agricultural products include: rice, wheat, oilseed, cotton, jute, tea, sugarcane, and potatoes.[245] Major industries include: textiles, telecommunications, chemicals, pharmaceuticals, biotechnology, food processing, steel, transport equipment, cement, mining, petroleum, machinery, and software.[245] In 2006, the share of external trade in India's GDP stood at 24%, up from 6% in 1985.[287] In 2008, India's share of world trade was 1.68%;[292] In 2011, India was the world's tenth-largest importer and the nineteenth-largest exporter.[293] Major exports include: petroleum products, textile goods, jewellery, software, engineering goods, chemicals, and manufactured leather goods.[245] Major imports include: crude oil, machinery, gems, fertiliser, and chemicals.[245] Between 2001 and 2011, the contribution of petrochemical and engineering goods to total exports grew from 14% to 42%.[294] India was the world's second largest textile exporter after China in the 2013 calendar year.[295]
Averaging an economic growth rate of 7.5% for several years prior to 2007,[287] India has more than doubled its hourly wage rates during the first decade of the 21st century.[296] Some 431 million Indians have left poverty since 1985; India's middle classes are projected to number around 580 million by 2030.[297] Though ranking 51st in global competitiveness, as of 2010[update], India ranks 17th in financial market sophistication, 24th in the banking sector, 44th in business sophistication, and 39th in innovation, ahead of several advanced economies.[298] With seven of the world's top 15 information technology outsourcing companies based in India, as of 2009[update], the country is viewed as the second-most favourable outsourcing destination after the United States.[299] India's consumer market, the world's eleventh-largest, is expected to become fifth-largest by 2030.[297] However, barely 2% of Indians pay income taxes.[300]
Driven by growth, India's nominal GDP per capita increased steadily from US$329 in 1991, when economic liberalisation began, to US$1,265 in 2010, to an estimated US$1,723 in 2016. It is expected to grow to US$2,358 by 2020.[19] However, it has remained lower than those of other Asian developing countries like Indonesia, Malaysia, the Philippines, Sri Lanka, and Thailand, and is expected to remain so in the near future. Its GDP per capita is higher than that of Bangladesh, Pakistan, Nepal, Afghanistan and others.[301]
According to a 2011 PricewaterhouseCoopers (PwC) report, India's GDP at purchasing power parity could overtake that of the United States by 2045.[303] During the next four decades, Indian GDP is expected to grow at an annualised average of 8%, making it potentially the world's fastest-growing major economy until 2050.[303] The report highlights key growth factors: a young and rapidly growing working-age population; growth in the manufacturing sector because of rising education and engineering skill levels; and sustained growth of the consumer market driven by a rapidly growing middle-class.[303] The World Bank cautions that, for India to achieve its economic potential, it must continue to focus on public sector reform, transport infrastructure, agricultural and rural development, removal of labour regulations, education, energy security, and public health and nutrition.[304]
According to the Worldwide Cost of Living Report 2017 released by the Economist Intelligence Unit (EIU) which was created by comparing more than 400 individual prices across 160 products and services, four of the cheapest cities were in India: Bangalore (3rd), Mumbai (5th), Chennai (5th) and New Delhi (8th).[305]
India's telecommunication industry, the world's fastest-growing, added 227 million subscribers during the period 2010–2011,[306] and after the third quarter of 2017, India surpassed the US to become the second largest smartphone market in the world after China.[307]
The Indian automotive industry, the world's second-fastest growing, increased domestic sales by 26% during 2009–2010,[308] and exports by 36% during 2008–2009.[309] India's capacity to generate electrical power is 300 gigawatts, of which 42 gigawatts is renewable.[310] At the end of 2011, the Indian IT industry employed 2.8 million professionals, generated revenues close to US$100 billion equalling 7.5% of Indian GDP, and contributed 26% of India's merchandise exports.[311]
The pharmaceutical industry in India is among the significant emerging markets for the global pharmaceutical industry. The Indian pharmaceutical market is expected to reach $48.5 billion by 2020. India's R & D spending constitutes 60% of the biopharmaceutical industry.[312][313] India is among the top 12 biotech destinations in the world.[314][315] The Indian biotech industry grew by 15.1% in 2012–2013, increasing its revenues from ₹204.4 billion (Indian rupees) to ₹235.24 billion (US$3.94 billion at June 2013 exchange rates).[316]
Despite economic growth during recent decades, India continues to face socio-economic challenges. In 2006, India contained the largest number of people living below the World Bank's international poverty line of US$1.25 per day.[318] The proportion decreased from 60% in 1981 to 42% in 2005.[319] Under the World Bank's later revised poverty line, it was 21% in 2011.[l][321] 30.7% of India's children under the age of five are underweight.[322] According to a Food and Agriculture Organization report in 2015, 15% of the population is undernourished.[323][324] The Mid-Day Meal Scheme attempts to lower these rates.[325]
According to a 2016 Walk Free Foundation report there were an estimated 18.3 million people in India, or 1.4% of the population, living in the forms of modern slavery, such as bonded labour, child labour, human trafficking, and forced begging, among others.[326][327][328] According to the 2011 census, there were 10.1 million child labourers in the country, a decline of 2.6 million from 12.6 million in 2001.[329]
Since 1991, economic inequality between India's states has consistently grown: the per-capita net state domestic product of the richest states in 2007 was 3.2 times that of the poorest.[330] Corruption in India is perceived to have decreased. According to the Corruption Perceptions Index, India ranked 78th out of 180 countries in 2018 with a score of 41 out of 100, an improvement from 85th in 2014.[331][332]
With 1,210,193,422 residents reported in the 2011 provisional census report,[333] India is the world's second-most populous country. Its population grew by 17.64% from 2001 to 2011,[334] compared to 21.54% growth in the previous decade (1991–2001).[334] The human sex ratio, according to the 2011 census, is 940 females per 1,000 males.[333] The median age was 27.6 as of 2016[update].[271] The first post-colonial census, conducted in 1951, counted 361 million people.[335] Medical advances made in the last 50 years as well as increased agricultural productivity brought about by the "Green Revolution" have caused India's population to grow rapidly.[336]
The average life expectancy in India is at 68 years—69.6 years for women, 67.3 years for men.[337] There are around 50 physicians per 100,000 Indians.[338] Migration from rural to urban areas has been an important dynamic in India's recent history. The number of people living in urban areas grew by 31.2% between 1991 and 2001.[339] Yet, in 2001, over 70% still lived in rural areas.[340][341] The level of urbanisation increased further from 27.81% in the 2001 Census to 31.16% in the 2011 Census. The slowing down of the overall population growth rate was due to the sharp decline in the growth rate in rural areas since 1991.[342] According to the 2011 census, there are 53 million-plus urban agglomerations in India; among them Mumbai, Delhi, Kolkata, Chennai, Bangalore, Hyderabad and Ahmedabad, in decreasing order by population.[343] The literacy rate in 2011 was 74.04%: 65.46% among females and 82.14% among males.[344] The rural-urban literacy gap, which was 21.2 percentage points in 2001, dropped to 16.1 percentage points in 2011. The improvement in the rural literacy rate is twice that of urban areas.[342] Kerala is the most literate state with 93.91% literacy; while Bihar the least with 63.82%.[344]
India is home to two major language families: Indo-Aryan (spoken by about 74% of the population) and Dravidian (spoken by 24% of the population). Other languages spoken in India come from the Austroasiatic and Sino-Tibetan language families. India has no national language.[345] Hindi, with the largest number of speakers, is the official language of the government.[346][347] English is used extensively in business and administration and has the status of a "subsidiary official language";[5] it is important in education, especially as a medium of higher education. Each state and union territory has one or more official languages, and the constitution recognises in particular 22 "scheduled languages".
The 2011 census reported the religion in India with the largest number of followers was Hinduism (79.80% of the population), followed by Islam (14.23%); the remaining were Christianity (2.30%), Sikhism (1.72%), Buddhism (0.70%), Jainism (0.36%) and others[m] (0.9%).[14] India has the world's largest Hindu, Sikh, Jain, Zoroastrian, and Bahá'í populations, and has the third-largest Muslim population—the largest for a non-Muslim majority country.[348][349]
Indian cultural history spans more than 4,500 years.[350] During the Vedic period (c. 1700 – c. 500 BCE), the foundations of Hindu philosophy, mythology, theology and literature were laid, and many beliefs and practices which still exist today, such as dhárma, kárma, yóga, and mokṣa, were established.[62] India is notable for its religious diversity, with Hinduism, Buddhism, Sikhism, Islam, Christianity, and Jainism among the nation's major religions.[351] The predominant religion, Hinduism, has been shaped by various historical schools of thought, including those of the Upanishads,[352] the Yoga Sutras, the Bhakti movement,[351] and by Buddhist philosophy.[353]
Much of Indian architecture, including the Taj Mahal, other works of Mughal architecture, and South Indian architecture, blends ancient local traditions with imported styles.[354] Vernacular architecture is also regional in its flavours. Vastu shastra, literally "science of construction" or "architecture" and ascribed to Mamuni Mayan,[355] explores how the laws of nature affect human dwellings;[356] it employs precise geometry and directional alignments to reflect perceived cosmic constructs.[357] As applied in Hindu temple architecture, it is influenced by the Shilpa Shastras, a series of foundational texts whose basic mythological form is the Vastu-Purusha mandala, a square that embodied the "absolute".[358] The Taj Mahal, built in Agra between 1631 and 1648 by orders of Emperor Shah Jahan in memory of his wife, has been described in the UNESCO World Heritage List as "the jewel of Muslim art in India and one of the universally admired masterpieces of the world's heritage".[359] Indo-Saracenic Revival architecture, developed by the British in the late 19th century, drew on Indo-Islamic architecture.[360]
The earliest literature in India, composed between 1500 BCE and 1200 CE, was in the Sanskrit language.[361] Major works of Sanskrit literature include the Rigveda (c. 1500 BCE – 1200 BCE), the epics Mahābhārata (c. 400 BCE – 400 CE) and Ramayana (c. 300 BCE and later), Abhijñānaśākuntalam (The Recognition of Śakuntalā) and other dramas of Kālidāsa (c. 5th century CE), and Mahākāvya poetry.[362][363][364] In Tamil literature, the Sangam literature (c. 600 BCE – 300 BCE), consisting of 2,381 poems composed by 473 poets, is the earliest work.[365][366][367][368] From the 14th to the 18th centuries, India's literary traditions went through a period of drastic change because of the emergence of devotional poets like Kabīr, Tulsīdās, and Guru Nānak. This period was characterised by a varied and wide spectrum of thought and expression; as a consequence, medieval Indian literary works differed significantly from classical traditions.[369] In the 19th century, Indian writers took a new interest in social questions and psychological descriptions. In the 20th century, Indian literature was influenced by the works of the Bengali poet and novelist Rabindranath Tagore,[370] who was a recipient of the Nobel Prize in Literature.
Indian music ranges over various traditions and regional styles. Classical music encompasses two genres and their various folk offshoots: the northern Hindustani and southern Carnatic schools.[371] Regionalised popular forms include filmi and folk music; the syncretic tradition of the bauls is a well-known form of the latter. Indian dance also features diverse folk and classical forms. Among the better-known folk dances are: the bhangra of Punjab, the bihu of Assam, the Jhumair and chhau of Jharkhand, Odisha and West Bengal, garba and dandiya of Gujarat, ghoomar of Rajasthan, and the lavani of Maharashtra. Eight dance forms, many with narrative forms and mythological elements, have been accorded classical dance status by India's National Academy of Music, Dance, and Drama. These are: bharatanatyam of the state of Tamil Nadu, kathak of Uttar Pradesh, kathakali and mohiniyattam of Kerala, kuchipudi of Andhra Pradesh, manipuri of Manipur, odissi of Odisha, and the sattriya of Assam.[372] Theatre in India melds music, dance, and improvised or written dialogue.[373] Often based on Hindu mythology, but also borrowing from medieval romances or social and political events, Indian theatre includes: the bhavai of Gujarat, the jatra of West Bengal, the nautanki and ramlila of North India, tamasha of Maharashtra, burrakatha of Andhra Pradesh, terukkuttu of Tamil Nadu, and the yakshagana of Karnataka.[374] India has a theatre training institute, the National School of Drama (NSD), situated in New Delhi; it is an autonomous organisation under the Ministry of Culture, Government of India.[375]
The Indian film industry produces the world's most-watched cinema.[376] Established regional cinematic traditions exist in the Assamese, Bengali, Bhojpuri, Hindi, Kannada, Malayalam, Punjabi, Gujarati, Marathi, Odia, Tamil, and Telugu languages.[377] The Hindi language film industry (Bollywood) is the largest sector representing 43% of box office revenue, followed by the South Indian Telugu and Tamil film industries which represent 36% combined.[378]
Television broadcasting began in India in 1959 as a state-run medium of communication and expanded slowly for more than two decades.[379][380] The state monopoly on television broadcast ended in the 1990s. Since then, satellite channels have increasingly shaped the popular culture of Indian society.[381] Today, television is the most penetrative media in India; industry estimates indicate that as of 2012[update] there are over 554 million TV consumers, 462 million with satellite or cable connections compared to other forms of mass media such as the press (350 million), radio (156 million) or internet (37 million).[382]
Traditional Indian society is sometimes defined by social hierarchy. The Indian caste system embodies much of the social stratification and many of the social restrictions found in the Indian subcontinent. Social classes are defined by thousands of endogamous hereditary groups, often termed jātis, or "castes".[383] India declared untouchability to be illegal[384] in 1947 and has since enacted other anti-discriminatory laws and social welfare initiatives. At the workplace in urban India, and in international or leading Indian companies, caste-related identification has largely lost its importance.[385][386]
Family values are important in the Indian tradition, and multi-generational patriarchal joint families have been the norm in India, though nuclear families are becoming common in urban areas.[387] An overwhelming majority of Indians, with their consent, have their marriages arranged by their parents or other family elders.[388] Marriage is thought to be for life,[388] and the divorce rate is extremely low,[389] with less than one in a thousand marriages ending in divorce.[390] Child marriages are common, especially in rural areas; many women wed before reaching 18, which is their legal marriageable age.[391] Female infanticide in India, and lately female foeticide, have created skewed gender ratios; the number of missing women in the country quadrupled from 15 million to 63 million in the 50-year period ending in 2014, faster than the population growth during the same period, and constituting 20 percent of India's female electorate.[392] According to an Indian government study, an additional 21 million girls are unwanted and do not receive adequate care.[393] Despite a government ban on sex-selective foeticide, the practice remains commonplace in India, the result of a preference for boys in a patriarchal society.[394] The payment of dowry, although illegal, remains widespread across class lines.[395] Deaths resulting from dowry, mostly from bride burning, are on the rise, despite stringent anti-dowry laws.[396]
Many Indian festivals are religious in origin. The best known include: Diwali, Ganesh Chaturthi, Thai Pongal, Holi, Durga Puja, Eid ul-Fitr, Bakr-Id, Christmas, and Vaisakhi.[397][398]
The most widely worn traditional dress in India, for both women and men, from ancient times until the advent of modern times, was draped.[399] For women it eventually took the form of a sari, a single long piece of cloth, famously six yards long, and of width spanning the lower body.[399] The sari is tied around the waist and knotted at one end, wrapped around the lower body, and then over the shoulder.[399] In its more modern form, it has been used to cover the head, and sometimes the face, as a veil.[399] It has been combined with an underskirt, or Indian petticoat, and tucked in the waistband for more secure fastening. It is also commonly worn with an Indian blouse, or choli, which serves as the primary upper-body garment, the sari's end, passing over the shoulder, now serving to obscure the upper body's contours, and to cover the midriff.[399]
For men, a similar but shorter length of cloth, the dhoti, has served as a lower-body garment.[400] It too is tied around the waist and wrapped.[400] In south India, it is usually wrapped around the lower body, the upper end tucked in the waistband, the lower left free. In addition, in northern India, it is also wrapped once around each leg before being brought up through the legs to be tucked in at the back. Other forms of traditional apparel that involve no stitching or tailoring are the chaddar (a shawl worn by both sexes to cover the upper body during colder weather, or a large veil worn by women for framing the head, or covering it) and the pagri (a turban or a scarf worn around the head as a part of a tradition, or to keep off the sun or the cold).[400]
Until the beginning of the first millennium CE, the ordinary dress of people in India was entirely unstitched.[401] The arrival of the Kushans from Central Asia, circa 48 CE, popularised cut and sewn garments in the Central Asian style favoured by the elite in northern India.[401] However, it was not until Muslim rule was established, first with the Delhi sultanate and then the Mughal Empire, that the range of stitched clothes in India grew and their use became significantly more widespread.[401] Among the various garments gradually establishing themselves in northern India during medieval and early-modern times and now commonly worn are: the shalwars and pyjamas, both forms of trousers, as well as the tunics kurta and kameez.[401] In southern India, however, the traditional draped garments were to see much longer continuous use.[401]
Shalwars are atypically wide at the waist but narrow to a cuffed bottom. They are held up by a drawstring or elastic belt, which causes them to become pleated around the waist.[402] The pants can be wide and baggy, or they can be cut quite narrow, on the bias, in which case they are called churidars. The kameez is a long shirt or tunic.[403] The side seams are left open below the waist-line,[404] which gives the wearer greater freedom of movement. The kameez is usually cut straight and flat; older kameez use traditional cuts; modern kameez are more likely to have European-inspired set-in sleeves. The kameez may have a European-style collar, a Mandarin collar, or it may be collarless; in the latter case, its design as a women's garment is similar to a kurta.[405] At first worn by Muslim women, the use of shalwar kameez gradually spread, making them a regional style,[406][407] especially in the Punjab region.[408][409]
A kurta, which traces its roots to Central Asian nomadic tunics, has evolved stylistically in India as a garment for everyday wear as well as for formal occasions.[401] It is traditionally made of cotton or silk; it is worn plain or with embroidered decoration, such as chikan; and it can be loose or tight in the torso, typically falling either just above or somewhere below the wearer's knees.[410] The sleeves of a traditional kurta fall to the wrist without narrowing, the ends hemmed but not cuffed; the kurta can be worn by both men and women; it is traditionally collarless, though standing collars are increasingly popular; and it can be worn over ordinary pyjamas, loose shalwars, churidars, or less traditionally over jeans.[410]
In the last 50 years, fashions have changed a great deal in India. Increasingly, in urban settings in northern India, the sari is no longer the apparel of everyday wear, transformed instead into one for formal occasions.[411] The traditional shalwar kameez is rarely worn by younger women, who favour churidars or jeans.[411] The kurtas worn by young men usually fall to the shins and are seldom plain. In white-collar office settings, ubiquitous air conditioning allows men to wear sports jackets year-round.[411] For weddings and formal occasions, men in the middle- and upper classes often wear bandgala, or short Nehru jackets, with pants, with the groom and his groomsmen sporting sherwanis and churidars.[411] The dhoti, the once universal garment of Hindu India, the wearing of which in the homespun and handwoven form of khadi allowed Gandhi to bring Indian nationalism to the millions,[412] is seldom seen in the cities,[411] reduced now, with brocaded border, to the liturgical vestments of Hindu priests.
Indian cuisine consists of a wide variety of regional and traditional cuisines. Given the range of diversity in soil type, climate, culture, ethnic groups, and occupations, these cuisines vary substantially from each other, using locally available spices, herbs, vegetables, and fruit. Indian foodways have been influenced by religion, in particular Hindu cultural choices and traditions.[413] They have been also shaped by Islamic rule, particularly that of the Mughals, by the arrival of the Portuguese on India's southwestern shores, and by British rule. These three influences are reflected, respectively, in the dishes of pilaf and biryani; the vindaloo; and the tiffin and the Railway mutton curry.[414] Earlier, the Columbian exchange had brought the potato, the tomato, maize, peanuts, cashew nuts, pineapples, guavas, and most notably, chilli peppers, to India. Each became staples of use.[415] In turn, the spice trade between India and Europe was a catalyst for Europe's Age of Discovery.[416]
The cereals grown in India, their choice, times, and regions of planting, correspond strongly to the timing of India's monsoons, and the variation across regions in their associated rainfall.[417] In general, the broad division of cereal zones in India, as determined by their dependence on rain, was firmly in place before the arrival of artificial irrigation.[417] Rice, which requires a lot of water, has been grown traditionally in regions of high rainfall in the northeast and the western coast, wheat in regions of moderate rainfall, like India's northern plains, and millet in regions of low rainfall, such as on the Deccan Plateau and in Rajasthan.[418][417]
The foundation of a typical Indian meal is a cereal cooked in plain fashion, and complemented with flavourful savoury dishes.[419] The latter includes lentils, pulses and vegetables spiced commonly with ginger and garlic, but also more discerningly with a combination of spices that may include coriander, cumin, turmeric, cinnamon, cardamon and others as informed by culinary conventions.[419] In an actual meal, this mental representation takes the form of a platter, or thali, with a central place for the cooked cereal, peripheral ones, often in small bowls, for the flavourful accompaniments, and the simultaneous, rather than piecemeal, ingestion of the two in each act of eating, whether by actual mixing—for example of rice and lentils—or in the folding of one—such as bread—around the other, such as cooked vegetables.[419]
A notable feature of Indian food is the existence of a number of distinctive vegetarian cuisines, each a feature of the geographical and cultural histories of its adherents.[420] The appearance of ahimsa, or the avoidance of violence toward all forms of life in many religious orders early in Indian history, especially Upanishadic Hinduism, Buddhism and Jainism, is thought to have been a notable factor in the prevalence of vegetarianism among a segment of India's Hindu population, especially in southern India, Gujarat, and the Hindi-speaking belt of north-central India, as well as among Jains.[420] Among these groups, strong discomfort is felt at thoughts of eating meat,[421] and contributes to the low proportional consumption of meat to overall diet in India.[421] Unlike China, which has increased its per capita meat consumption substantially in its years of increased economic growth, in India the strong dietary traditions have contributed to dairy, rather than meat, becoming the preferred form of animal protein consumption accompanying higher economic growth.[422]
In the last millennium, the most significant import of cooking techniques into India occurred during the Mughal Empire. The cultivation of rice had spread much earlier from India to Central and West Asia; however, it was during Mughal rule that dishes, such as the pilaf,[418] developed in the interim during the Abbasid caliphate,[423] and cooking techniques such as the marinating of meat in yogurt, spread into northern India from regions to its northwest.[424] To the simple yogurt marinade of Persia, onions, garlic, almonds, and spices began to be added in India.[424] Rice grown to the southwest of the Mughal capital, Agra, which had become famous in the Islamic world for its fine grain, was partially cooked and layered alternately with the sauteed meat, the pot sealed tightly, and slow cooked according to another Persian cooking technique, to produce what has today become the Indian biryani,[424] a feature of festive dining in many parts of India.[425]
In food served in restaurants in urban north India, and internationally, the diversity of Indian food has been partially concealed by the dominance of Punjabi cuisine. This was caused in large part by an entrepreneurial response among people from the Punjab region who had been displaced by the 1947 partition of India, and had arrived in India as refugees.[420] The identification of Indian cuisine with the tandoori chicken—cooked in the tandoor oven, which had traditionally been used for baking bread in the rural Punjab and the Delhi region, especially among Muslims, but which is originally from Central Asia—dates to this period.[420]
In India, several traditional indigenous sports remain fairly popular, such as kabaddi, kho kho, pehlwani and gilli-danda. Some of the earliest forms of Asian martial arts, such as kalarippayattu, musti yuddha, silambam, and marma adi, originated in India. Chess, commonly held to have originated in India as chaturaṅga, is regaining widespread popularity with the rise in the number of Indian grandmasters.[426][427] Pachisi, from which parcheesi derives, was played on a giant marble court by Akbar.[428]
The improved results garnered by the Indian Davis Cup team and other Indian tennis players in the early 2010s have made tennis increasingly popular in the country.[429] India has a comparatively strong presence in shooting sports, and has won several medals at the Olympics, the World Shooting Championships, and the Commonwealth Games.[430][431] Other sports in which Indians have succeeded internationally include badminton[432] (Saina Nehwal and P V Sindhu are two of the top-ranked female badminton players in the world), boxing,[433] and wrestling.[434] Football is popular in West Bengal, Goa, Tamil Nadu, Kerala, and the north-eastern states.[435]
Cricket is the most popular sport in India.[437] Major domestic competitions include the Indian Premier League, which is the most-watched cricket league in the world and ranks sixth among all sports leagues.[438]
India has hosted or co-hosted several international sporting events: the 1951 and 1982 Asian Games; the 1987, 1996, and 2011 Cricket World Cup tournaments; the 2003 Afro-Asian Games; the 2006 ICC Champions Trophy; the 2010 Hockey World Cup; the 2010 Commonwealth Games; and the 2017 FIFA U-17 World Cup. Major international sporting events held annually in India include the Chennai Open, the Mumbai Marathon, the Delhi Half Marathon, and the Indian Masters. The first Formula 1 Indian Grand Prix was held in late 2011, but the race has been absent from the F1 season calendar since 2014.[439] India has traditionally been the dominant country at the South Asian Games. An example of this dominance is the basketball competition, in which the Indian team has won three of the four tournaments held to date.[440]
Coordinates: 21°N 78°E
en/2718.html.txt
ADDED
@@ -0,0 +1,12 @@
Elasmosaurus was a large marine reptile in the order Plesiosauria. The genus lived about 80.5 million years ago, during the Late Cretaceous. The first specimen was sent to the American paleontologist Edward Drinker Cope after its discovery in 1867 near Fort Wallace, Kansas. Only one incomplete skeleton is definitely known, consisting of a fragmentary skull, the spine, and the pectoral and pelvic girdles, and a single species, E. platyurus, is recognized today. Measuring 10.3 meters (34 ft) long, the genus had a streamlined body with paddle-like limbs or flippers, a short tail, and a small, slender, triangular head. With a neck around 7.1 meters (23 ft) long, Elasmosaurus was one of the longest-necked animals to have lived, with the largest number of neck vertebrae known, 72. It probably ate small fish and marine invertebrates, seizing them with long teeth. Elasmosaurus is known from the Pierre Shale formation, which represents marine deposits from the Western Interior Seaway. (Full article...)
July 28
Cirsium eriophorum, the woolly thistle, is a large herbaceous biennial plant in the daisy family, Asteraceae. It is native to Central and Western Europe, where it grows in grassland and open scrubland. Several parts of the plant are edible; the young leaves can be eaten raw, the young stems can be peeled and boiled, and the flower buds can be consumed in a similar way to artichokes. This picture shows a C. eriophorum flower head photographed in Kozara National Park, in Republika Srpska, Bosnia and Herzegovina.
Photograph credit: Petar Milošević
Wikipedia is hosted by the Wikimedia Foundation, a non-profit organization that also hosts a range of other projects:
This Wikipedia is written in English. Started in 2001, it currently contains 6,130,233 articles.
Many other Wikipedias are available; some of the largest are listed below.
en/2719.html.txt
ADDED
@@ -0,0 +1,167 @@
In monotheistic thought, God is conceived of as the supreme being, creator deity, and principal object of faith.[1] God is usually conceived as being omniscient (all-knowing), omnipotent (all-powerful), omnipresent (all-present) and as having an eternal and necessary existence. These attributes are used either in way of analogy or are taken literally. God is most often held to be incorporeal (immaterial).[1][2][3] Incorporeality and corporeality of God are related to conceptions of transcendence (being outside nature) and immanence (being in nature) of God, with positions of synthesis such as the "immanent transcendence".
Some religions describe God without reference to gender, while others or their translations use terminology that is gender-specific and gender-biased.
God has been conceived as either personal or impersonal. In theism, God is the creator and sustainer of the universe, while in deism, God is the creator, but not the sustainer, of the universe. In pantheism, God is the universe itself. In atheism, there is an absence of belief in God. In agnosticism, the existence of God is deemed unknown or unknowable. God has also been conceived as the source of all moral obligation, and the "greatest conceivable existent".[1] Many notable philosophers have developed arguments for and against the existence of God.[4]
Monotheists refer to their gods using names prescribed by their respective religions, with some of these names referring to certain cultural ideas about their god's identity and attributes. In the ancient Egyptian era of Atenism, possibly the earliest recorded monotheistic religion, this deity was called Aten,[5] premised on being the one "true" Supreme Being and creator of the universe.[6] In the Hebrew Bible and Judaism, Elohim, Adonai, YHWH (Hebrew: יהוה) and other names are used as the names of God. Yahweh and Jehovah, possible vocalizations of YHWH, are used in Christianity. In the Christian doctrine of the Trinity, God, coexisting in three "persons", is called the Father, the Son, and the Holy Spirit. In Islam, the name Allah is used, while Muslims also have a multitude of titular names for God. In Hinduism, Brahman is often considered a monistic concept of God.[7] In Chinese religion, Shangdi is conceived as the progenitor (first ancestor) of the universe, intrinsic to it and constantly bringing order to it. Other religions have names for the concept of God, including Baha in the Bahá'í Faith,[8] Waheguru in Sikhism,[9] Sang Hyang Widhi Wasa in Balinese Hinduism,[10] and Ahura Mazda in Zoroastrianism.[11]
The earliest written form of the Germanic word God comes from the 6th-century Christian Codex Argenteus. The English word itself is derived from the Proto-Germanic * ǥuđan. The reconstructed Proto-Indo-European form * ǵhu-tó-m was likely based on the root * ǵhau(ə)-, which meant either "to call" or "to invoke".[12] The Germanic words for God were originally neuter—applying to both genders—but during the process of the Christianization of the Germanic peoples from their indigenous Germanic paganism, the words became a masculine syntactic form.[13]
In the English language, capitalization is used for names by which a god is known, including 'God'.[14] Consequently, the capitalized form of god is not used for multiple gods (polytheism) or when used to refer to the generic idea of a deity.[15][16]
The English word God and its counterparts in other languages are normally used for any and all conceptions and, in spite of significant differences between religions, the term remains an English translation common to all. The same holds for Hebrew El, but in Judaism, God is also given a proper name, the tetragrammaton YHWH, in origin possibly the name of an Edomite or Midianite deity, Yahweh.
In many translations of the Bible, when the word LORD is in all capitals, it signifies that the word represents the tetragrammaton.[17]
Allāh (Arabic: الله) is the Arabic term with no plural used by Muslims and Arabic speaking Christians and Jews meaning "The God" (with the first letter capitalized), while "ʾilāh" (Arabic: إِلَٰه, plural “`āliha” آلِهَة) is the term used for a deity or a god in general.[18][19][20]
God may also be given a proper name in monotheistic currents of Hinduism which emphasize the personal nature of God, with early references to his name as Krishna-Vasudeva in Bhagavata or later Vishnu and Hari.[21]
Ahura Mazda is the name for God used in Zoroastrianism. "Mazda", or rather the Avestan stem-form Mazdā-, nominative Mazdå, reflects Proto-Iranian *Mazdāh (female). It is generally taken to be the proper name of the spirit, and like its Sanskrit cognate medhā, means "intelligence" or "wisdom". Both the Avestan and Sanskrit words reflect Proto-Indo-Iranian *mazdhā-, from Proto-Indo-European mn̩sdʰeh1, literally meaning "placing (dʰeh1) one's mind (*mn̩-s)", hence "wise".[22]
Waheguru (Punjabi: vāhigurū) is a term most often used in Sikhism to refer to God. It means "Wonderful Teacher" in the Punjabi language. Vāhi (a Middle Persian borrowing) means "wonderful" and guru (Sanskrit: guru) is a term denoting "teacher". Waheguru is also described by some as an experience of ecstasy which is beyond all descriptions. The most common usage of the word "Waheguru" is in the greeting Sikhs use with each other:
Waheguru Ji Ka Khalsa, Waheguru Ji Ki Fateh
Wonderful Lord's Khalsa, Victory is to the Wonderful Lord.
Baha, the "greatest" name for God in the Baha'i faith, is Arabic for "All-Glorious".
There is no clear consensus on the nature or the existence of God.[23] The Abrahamic conceptions of God include the monotheistic definition of God in Judaism, the trinitarian view of Christians, and the Islamic concept of God.
There were also various conceptions of God in the ancient Greco-Roman world, such as Aristotle's view of an unmoved mover, the Neoplatonic concept of the One and the pantheistic God of Stoic Physics.
The dharmic religions differ in their view of the divine: views of God in Hinduism vary by region, sect, and caste, ranging from monotheistic to polytheistic. Many polytheistic religions share the idea of a creator deity, although having a name other than "God" and without all of the other roles attributed to a singular God by monotheistic religions. Sikhism is sometimes seen as being pantheistic about God, see: God in Sikhism.
Śramaṇa religions are generally non-creationist, while also holding that there are divine beings (called Devas in Buddhism and Jainism) of limited power and lifespan. Jainism has generally rejected creationism, holding that soul substances (Jīva) are uncreated and that time is beginningless.[24] Depending on one's interpretation and tradition, Buddhism can be conceived as being either non-theistic, trans-theistic, pantheistic, or polytheistic. However, Buddhism has generally rejected the specific monotheistic view of a Creator God. The Buddha criticizes the theory of creationism in the early Buddhist texts.[25][26] Also, the major Indian Buddhist philosophers, such as Nagarjuna, Vasubandhu, Dharmakirti and Buddhaghosa, consistently critiqued Creator God views put forth by Hindu thinkers.[27][28][29]
Monotheists believe that there is only one god, and may also believe this god is worshipped in different religions under different names. The view that all theists actually worship the same god, whether they know it or not, is especially emphasized in the Bahá'í Faith, Hinduism[30] and Sikhism.[31]
In Christianity, the doctrine of the Trinity describes God as one God in three divine Persons (each of the three Persons is God himself). The Most Holy Trinity comprises[32] God the Father, God the Son (Jesus), and God the Holy Spirit. In past centuries, this fundamental mystery of the Christian faith was also summarized by the Latin formula Sancta Trinitas, Unus Deus (Holy Trinity, Unique God), reported in the Litaniae Lauretanae.
Islam's most fundamental concept is tawhid meaning "oneness" or "uniqueness". God is described in the Quran as: "He is Allah, the One and Only; Allah, the Eternal, Absolute; He begetteth not, nor is He begotten; And there is none like unto Him."[33][34] Muslims repudiate the Christian doctrine of the Trinity and the divinity of Jesus, comparing it to polytheism. In Islam, God is transcendent and does not resemble any of his creations in any way. Thus, Muslims are not iconodules, and are not expected to visualize God.[35]
Henotheism is the belief and worship of a single god while accepting the existence or possible existence of other deities.[36]
Theism generally holds that God exists realistically, objectively, and independently of human thought; that God created and sustains everything; that God is omnipotent and eternal; and that God is personal and interacting with the universe through, for example, religious experience and the prayers of humans.[37] Theism holds that God is both transcendent and immanent; thus, God is simultaneously infinite and, in some way, present in the affairs of the world.[38] Not all theists subscribe to all of these propositions, but each usually subscribes to some of them (see, by way of comparison, family resemblance).[37] Catholic theology holds that God is infinitely simple and is not involuntarily subject to time. Most theists hold that God is omnipotent, omniscient, and benevolent, although this belief raises questions about God's responsibility for evil and suffering in the world. Some theists ascribe to God a self-conscious or purposeful limiting of omnipotence, omniscience, or benevolence. Open Theism, by contrast, contends that, due to the nature of time, God's omniscience does not mean the deity can predict the future. Theism is sometimes used to refer in general to any belief in a god or gods, i.e., monotheism or polytheism.[39][40]
Deism holds that God is wholly transcendent: God exists, but does not intervene in the world beyond what was necessary to create it.[38] In this view, God is not anthropomorphic, and neither answers prayers nor produces miracles. Common in Deism is a belief that God has no interest in humanity and may not even be aware of humanity. Pandeism combines Deism with Pantheistic beliefs.[41][42][43] Pandeism is proposed to explain, as regards Deism, why God would create a universe and then abandon it,[44] and, as regards Pantheism, the origin and purpose of the universe.[44][45]
Pantheism holds that God is the universe and the universe is God, whereas Panentheism holds that God contains, but is not identical to, the Universe.[46] It is also the view of the Liberal Catholic Church; Theosophy; some views of Hinduism except Vaishnavism, which believes in panentheism; Sikhism; some divisions of Neopaganism and Taoism, along with many varying denominations and individuals within denominations. Kabbalah, Jewish mysticism, paints a pantheistic/panentheistic view of God—which has wide acceptance in Hasidic Judaism, particularly from their founder The Baal Shem Tov—but only as an addition to the Jewish view of a personal god, not in the original pantheistic sense that denies or limits persona to God.[citation needed]
Dystheism, which is related to theodicy, is a form of theism which holds that God is either not wholly good or is fully malevolent as a consequence of the problem of evil. One such example comes from Dostoevsky's The Brothers Karamazov, in which Ivan Karamazov rejects God on the grounds that he allows children to suffer.[47]
In modern times, some more abstract concepts have been developed, such as process theology and open theism. The contemporary French philosopher Michel Henry has, however, proposed a phenomenological approach and definition of God as the phenomenological essence of Life.[48]
God has also been conceived as being incorporeal (immaterial), a personal being, the source of all moral obligation, and the "greatest conceivable existent".[1] These attributes were all supported to varying degrees by the early Jewish, Christian and Muslim theologian philosophers, including Maimonides,[49] Augustine of Hippo,[49] and Al-Ghazali,[4] respectively.
Non-theist views about God also vary. Some non-theists avoid the concept of God, whilst accepting that it is significant to many; other non-theists understand God as a symbol of human values and aspirations. The nineteenth-century English atheist Charles Bradlaugh declared that he refused to say "There is no God", because "the word 'God' is to me a sound conveying no clear or distinct affirmation";[50] he said more specifically that he disbelieved in the Christian god. Stephen Jay Gould proposed an approach dividing the world of philosophy into what he called "non-overlapping magisteria" (NOMA). In this view, questions of the supernatural, such as those relating to the existence and nature of God, are non-empirical and are the proper domain of theology. The methods of science should then be used to answer any empirical question about the natural world, and theology should be used to answer questions about ultimate meaning and moral value. In this view, the perceived lack of any empirical footprint from the magisterium of the supernatural onto natural events makes science the sole player in the natural world.[51]
Another view, advanced by Richard Dawkins, is that the existence of God is an empirical question, on the grounds that "a universe with a god would be a completely different kind of universe from one without, and it would be a scientific difference."[52] Carl Sagan argued that the doctrine of a Creator of the Universe was difficult to prove or disprove and that the only conceivable scientific discovery that could disprove the existence of a Creator (not necessarily a God) would be the discovery that the universe is infinitely old.[53]
Stephen Hawking and co-author Leonard Mlodinow state in their book, The Grand Design, that it is reasonable to ask who or what created the universe, but if the answer is God, then the question has merely been deflected to that of who created God. Both authors claim however, that it is possible to answer these questions purely within the realm of science, and without invoking any divine beings.[54]
Agnosticism is the view that the truth values of certain claims—especially metaphysical and religious claims such as whether God, the divine or the supernatural exist—are unknown and perhaps unknowable.[55][56][57]
Atheism is, in a broad sense, the rejection of belief in the existence of deities.[58][59] In a narrower sense, atheism is specifically the position that there are no deities, although it can be defined as a lack of belief in the existence of any deities, rather than a positive belief in the nonexistence of any deities.[60]
Pascal Boyer argues that while there is a wide array of supernatural concepts found around the world, in general, supernatural beings tend to behave much like people. The construction of gods and spirits like persons is one of the best known traits of religion. He cites examples from Greek mythology, which is, in his opinion, more like a modern soap opera than other religious systems.[61]
Bertrand du Castel and Timothy Jurgensen demonstrate through formalization that Boyer's explanatory model matches physics' epistemology in positing not directly observable entities as intermediaries.[62]
Anthropologist Stewart Guthrie contends that people project human features onto non-human aspects of the world because it makes those aspects more familiar. Sigmund Freud also suggested that god concepts are projections of one's father.[63]
Likewise, Émile Durkheim was one of the earliest to suggest that gods represent an extension of human social life to include supernatural beings. In line with this reasoning, psychologist Matt Rossano contends that when humans began living in larger groups, they may have created gods as a means of enforcing morality. In small groups, morality can be enforced by social forces such as gossip or reputation. However, it is much harder to enforce morality using social forces in much larger groups. Rossano indicates that by including ever-watchful gods and spirits, humans discovered an effective strategy for restraining selfishness and building more cooperative groups.[64]
Arguments about the existence of God typically include empirical, deductive, and inductive types. Different views include that: "God does not exist" (strong atheism); "God almost certainly does not exist" (de facto atheism); "no one knows whether God exists" (agnosticism[65]); "God exists, but this cannot be proven or disproven" (de facto theism); and that "God exists and this can be proven" (strong theism).[51]
Countless arguments have been proposed to prove the existence of God.[66] Some of the most notable arguments are the Five Ways of Aquinas, the Argument from desire proposed by C.S. Lewis, and the Ontological Argument formulated both by St. Anselm and René Descartes.[67]
St. Anselm's approach was to define God as, "that than which nothing greater can be conceived". Famed pantheist philosopher Baruch Spinoza would later carry this idea to its extreme: "By God I understand a being absolutely infinite, i.e., a substance consisting of infinite attributes, of which each one expresses an eternal and infinite essence." For Spinoza, the whole of the natural universe is made of one substance, God, or its equivalent, Nature.[68] His proof for the existence of God was a variation of the Ontological argument.[69]
Scientist Isaac Newton saw the nontrinitarian God[70] as the masterful creator whose existence could not be denied in the face of the grandeur of all creation.[71] Nevertheless, he rejected polymath Leibniz' thesis that God would necessarily make a perfect world which requires no intervention from the creator. In Query 31 of the Opticks, Newton simultaneously made an argument from design and for the necessity of intervention:
For while comets move in very eccentric orbs in all manner of positions, blind fate could never make all the planets move one and the same way in orbs concentric, some inconsiderable irregularities excepted which may have arisen from the mutual actions of comets and planets on one another, and which will be apt to increase, till this system wants a reformation.[72]
St. Thomas believed that the existence of God is self-evident in itself, but not to us. "Therefore I say that this proposition, "God exists", of itself is self-evident, for the predicate is the same as the subject.... Now because we do not know the essence of God, the proposition is not self-evident to us; but needs to be demonstrated by things that are more known to us, though less known in their nature—namely, by effects."[73]
St. Thomas believed that the existence of God can be demonstrated. Briefly in the Summa theologiae and more extensively in the Summa contra Gentiles, he considered in great detail five arguments for the existence of God, widely known as the quinque viae (Five Ways).
Some theologians, such as the scientist and theologian A.E. McGrath, argue that the existence of God is not a question that can be answered using the scientific method.[75][76] Agnostic Stephen Jay Gould argues that science and religion are not in conflict and do not overlap.[77]
Some findings in the fields of cosmology, evolutionary biology and neuroscience are interpreted by some atheists (including Lawrence M. Krauss and Sam Harris) as evidence that God is an imaginary entity only, with no basis in reality.[78][79] These atheists claim that a single, omniscient God who is imagined to have created the universe and is particularly attentive to the lives of humans has been imagined, embellished and promulgated in a trans-generational manner.[80] Richard Dawkins interprets such findings not only as a lack of evidence for the material existence of such a God, but as extensive evidence to the contrary.[51] However, his views are opposed by some theologians and scientists including Alister McGrath, who argues that existence of God is compatible with science.[81]
Different religious traditions assign differing (though often similar) attributes and characteristics to God, including expansive powers and abilities, psychological characteristics, gender characteristics, and preferred nomenclature. The assignment of these attributes often differs according to the conceptions of God in the culture from which they arise. For example, attributes of God in Christianity, attributes of God in Islam, and the Thirteen Attributes of Mercy in Judaism share certain similarities arising from their common roots.
The word God is "one of the most complex and difficult in the English language." In the Judeo-Christian tradition, "the Bible has been the principal source of the conceptions of God". That the Bible "includes many different images, concepts, and ways of thinking about" God has resulted in perpetual "disagreements about how God is to be conceived and understood".[82]
Many traditions see God as incorporeal and eternal, and regard him as a point of living light like human souls, but without a physical body, as he does not enter the cycle of birth, death and rebirth. God is seen as the perfect and constant embodiment of all virtues, powers and values and that he is the unconditionally loving Father of all souls, irrespective of their religion, gender, or culture.[83]
Throughout the Hebrew and Christian Bibles there are many names for God. One of them is Elohim. Another one is El Shaddai, translated "God Almighty".[84] A third notable name is El Elyon, which means "The High God".[85] Also noted in the Hebrew and Christian Bibles is the name "I Am that I Am".[86]
God is described and referred to in the Quran and hadith by certain names or attributes, the most common being Al-Rahman, meaning "Most Compassionate", and Al-Rahim, meaning "Most Merciful" (see Names of God in Islam).[87] Many of these names are also used in the scriptures of the Bahá'í Faith.
Vaishnavism, a tradition in Hinduism, has a list of titles and names of Krishna.
The gender of God may be viewed as either a literal or an allegorical aspect of a deity who, in classical western philosophy, transcends bodily form.[88][89] Polytheistic religions commonly attribute to each of the gods a gender, allowing each to interact with any of the others, and perhaps with humans, sexually. In most monotheistic religions, God has no counterpart with which to relate sexually. Thus, in classical western philosophy the gender of this one-and-only deity is most likely to be an analogical statement of how humans and God address, and relate to, each other. Namely, God is seen as begetter of the world and revelation which corresponds to the active (as opposed to the receptive) role in sexual intercourse.[90]
Biblical sources usually refer to God using male words, except Genesis 1:26–27,[91][92] Psalm 123:2–3, and Luke 15:8–10 (female); Hosea 11:3–4, Deuteronomy 32:18, Isaiah 66:13, Isaiah 49:15, Isaiah 42:14, Psalm 131:2 (a mother); Deuteronomy 32:11–12 (a mother eagle); and Matthew 23:37 and Luke 13:34 (a mother hen).
Prayer plays a significant role among many believers. Muslims believe that the purpose of existence is to worship God.[93][94] He is viewed as a personal God and there are no intermediaries, such as clergy, to contact God. Prayer often also includes supplication and asking for forgiveness. God is often believed to be forgiving. For example, a hadith states that God would replace a sinless people with one that sinned but still sought repentance.[95] Christian theologian Alister McGrath writes that there are good reasons to suggest that a "personal god" is integral to the Christian outlook, but that one has to understand it is an analogy. "To say that God is like a person is to affirm the divine ability and willingness to relate to others. This does not imply that God is human, or located at a specific point in the universe."[96]
Adherents of different religions generally disagree as to how to best worship God and what is God's plan for mankind, if there is one. There are different approaches to reconciling the contradictory claims of monotheistic religions. One view is taken by exclusivists, who believe they are the chosen people or have exclusive access to absolute truth, generally through revelation or encounter with the Divine, which adherents of other religions do not. Another view is religious pluralism. A pluralist typically believes that his religion is the right one, but does not deny the partial truth of other religions. An example of a pluralist view in Christianity is supersessionism, i.e., the belief that one's religion is the fulfillment of previous religions. A third approach is relativistic inclusivism, where everybody is seen as equally right; an example being universalism: the doctrine that salvation is eventually available for everyone. A fourth approach is syncretism, mixing different elements from different religions. An example of syncretism is the New Age movement.
Jews and Christians believe that humans are created in the image of God, and are the center, crown and key to God's creation, stewards for God, supreme over everything else God had made (Gen 1:26); for this reason, humans are in Christianity called the "Children of God".[97]
During the early Parthian Empire, Ahura Mazda was visually represented for worship. This practice ended during the beginning of the Sassanid empire. Zoroastrian iconoclasm, which can be traced to the end of the Parthian period and the beginning of the Sassanid, eventually put an end to the use of all images of Ahura Mazda in worship. However, Ahura Mazda continued to be symbolized by a dignified male figure, standing or on horseback, which is found in Sassanian investiture.[98]
At least some Jews do not use any image for God, since God is the unimaginable Being who cannot be represented in material forms.[99]
The burning bush that was not consumed by the flames is described in Book of Exodus as a symbolic representation of God when he appeared to Moses.[100]
Early Christians believed that the words of the Gospel of John 1:18: "No man has seen God at any time" and numerous other statements were meant to apply not only to God, but to all attempts at the depiction of God.[101]
However, later depictions of God are found. Some, like the Hand of God, are depictions borrowed from Jewish art.
The beginning of the 8th century witnessed the suppression and destruction of religious icons as the period of Byzantine iconoclasm (literally image-breaking) started. The Second Council of Nicaea in 787 effectively ended the first period of Byzantine iconoclasm and restored the honouring of icons and holy images in general.[102] However, this did not immediately translate into large scale depictions of God the Father. Even supporters of the use of icons in the 8th century, such as Saint John of Damascus, drew a distinction between images of God the Father and those of Christ.
Prior to the 10th century no attempt was made to use a human to symbolize God the Father in Western art.[101] Yet, Western art eventually required some way to illustrate the presence of the Father, so through successive representations a set of artistic styles for symbolizing the Father using a man gradually emerged around the 10th century AD. A rationale for the use of a human is the belief that God created the soul of Man in the image of his own (thus allowing humans to transcend the other animals).
It appears that when early artists set out to represent God the Father, fear and awe restrained them from using the whole human figure. Typically only a small part would be used as the image, usually the hand, or sometimes the face, but rarely a whole human figure. In many images, the figure of the Son supplants the Father, so only a smaller portion of the person of the Father is depicted.[103]
By the 12th century depictions of God the Father had started to appear in French illuminated manuscripts, which as a less public form could often be more adventurous in their iconography, and in stained glass church windows in England. Initially the head or bust was usually shown in some form of frame of clouds in the top of the picture space, where the Hand of God had formerly appeared; the Baptism of Christ on the famous baptismal font in Liège of Rainer of Huy is an example from 1118 (a Hand of God is used in another scene). Gradually the amount of the human figure shown increased to a half-length figure, then a full-length figure, usually enthroned, as in Giotto's fresco of c. 1305 in Padua.[104] In the 14th century the Naples Bible carried a depiction of God the Father in the Burning bush. By the early 15th century, the Très Riches Heures du Duc de Berry has a considerable number of symbols, including an elderly but tall and elegant full-length figure walking in the Garden of Eden, which show a considerable diversity of apparent ages and dress. The "Gates of Paradise" of the Florence Baptistry by Lorenzo Ghiberti, begun in 1425, use a similar tall full-length symbol for the Father. The Rohan Book of Hours of about 1430 also included depictions of God the Father in half-length human form, which were now becoming standard, with the Hand of God becoming rarer. At the same period other works, like the large Genesis altarpiece by the Hamburg painter Meister Bertram, continued to use the old depiction of Christ as Logos in Genesis scenes. In the 15th century there was a brief fashion for depicting all three persons of the Trinity as similar or identical figures with the usual appearance of Christ.
In an early Venetian school Coronation of the Virgin by Giovanni d'Alemagna and Antonio Vivarini (c. 1443), The Father is depicted using the symbol consistently used by other artists later, namely a patriarch, with benign, yet powerful countenance and with long white hair and a beard, a depiction largely derived from, and justified by, the near-physical, but still figurative, description of the Ancient of Days.[105]
...the Ancient of Days did sit, whose garment was white as snow, and the hair of his head like the pure wool: his throne was like the fiery flame, and his wheels as burning fire. (Daniel 7:9)
In the Annunciation by Benvenuto di Giovanni in 1470, God the Father is portrayed in a red robe and a hat that resembles that of a Cardinal. However, even in the later part of the 15th century, the symbolic representation of the Father and the Holy Spirit as "hands and dove" continued, e.g. in Verrocchio's Baptism of Christ in 1472.[106]
In Renaissance paintings of the adoration of the Trinity, God may be depicted in two ways, either with emphasis on The Father, or the three elements of the Trinity. The most usual depiction of the Trinity in Renaissance art depicts God the Father using an old man, usually with a long beard and patriarchal in appearance, sometimes with a triangular halo (as a reference to the Trinity), or with a papal crown, especially in Northern Renaissance painting. In these depictions The Father may hold a globe or book (to symbolize God's knowledge and as a reference to how knowledge is deemed divine). He is behind and above Christ on the Cross in the Throne of Mercy iconography. A dove, the symbol of the Holy Spirit, may hover above. Various people from different classes of society, e.g. kings, popes or martyrs, may be present in the picture. In a Trinitarian Pietà, God the Father is often symbolized using a man wearing a papal dress and a papal crown, supporting the dead Christ in his arms. They are depicted as floating in heaven with angels who carry the instruments of the Passion.[107]
Representations of God the Father and the Trinity were attacked both by Protestants and within Catholicism, by the Jansenist and Baianist movements as well as more orthodox theologians. As with other attacks on Catholic imagery, this had the effect both of reducing Church support for the less central depictions, and strengthening it for the core ones. In the Western Church, the pressure to restrain religious imagery resulted in the highly influential decrees of the final session of the Council of Trent in 1563. The Council of Trent decrees confirmed the traditional Catholic doctrine that images only represented the person depicted, and that veneration to them was paid to the person, not the image.[108]
Artistic depictions of God the Father were uncontroversial in Catholic art thereafter, but less common depictions of the Trinity were condemned. In 1745 Pope Benedict XIV explicitly supported the Throne of Mercy depiction, referring to the "Ancient of Days", but in 1786 it was still necessary for Pope Pius VI to issue a papal bull condemning the decision of an Italian church council to remove all images of the Trinity from churches.[109]
God the Father is symbolized in several Genesis scenes in Michelangelo's Sistine Chapel ceiling, most famously The Creation of Adam (whose image of near touching hands of God and Adam is iconic of humanity, being a reminder that Man is created in the Image and Likeness of God (Gen 1:26)). God the Father is depicted as a powerful figure, floating in the clouds in Titian's Assumption of the Virgin in the Frari of Venice, long admired as a masterpiece of High Renaissance art.[110] The Church of the Gesù in Rome includes a number of 16th-century depictions of God the Father. In some of these paintings the Trinity is still alluded to in terms of three angels, but Giovanni Battista Fiammeri also depicted God the Father as a man riding on a cloud, above the scenes.[111]
In both the Last Judgment and the Coronation of the Virgin paintings by Rubens he depicted God the Father using the image that by then had become widely accepted, a bearded patriarchal figure above the fray. In the 17th century, the two Spanish artists Diego Velázquez (whose father-in-law Francisco Pacheco was in charge of the approval of new images for the Inquisition) and Bartolomé Esteban Murillo both depicted God the Father using a patriarchal figure with a white beard in a purple robe.
While representations of God the Father were growing in Italy, Spain, Germany and the Low Countries, there was resistance elsewhere in Europe, even during the 17th century. In 1632 most members of the Star Chamber court in England (except the Archbishop of York) condemned the use of the images of the Trinity in church windows, and some considered them illegal.[112] Later in the 17th century Sir Thomas Browne wrote that he considered the representation of God the Father using an old man "a dangerous act" that might lead to Egyptian symbolism.[113] In 1847, Charles Winston was still critical of such images as a "Romish trend" (a term used to refer to Roman Catholics) that he considered best avoided in England.[114]
In 1667 the 43rd chapter of the Great Moscow Council specifically included a ban on a number of symbolic depictions of God the Father and the Holy Spirit, which then also resulted in a whole range of other icons being placed on the forbidden list,[115][116] mostly affecting Western-style depictions which had been gaining ground in Orthodox icons. The Council also declared that the person of the Trinity who was the "Ancient of Days" was Christ, as Logos, not God the Father. However some icons continued to be produced in Russia, as well as Greece, Romania, and other Orthodox countries.
Muslims believe that God (Allah) is beyond all comprehension, has no equal, and does not resemble any of his creations in any way. Thus, Muslims are not iconodules, are not expected to visualize God, and instead of having pictures of Allah in their mosques, typically have religious calligraphy written on the wall.[35]
In the Kitáb-i-Íqán, the primary theological work of the Bahá’í Faith, God is described as “Him Who is the central Orb of the universe, its Essence and ultimate Purpose.” Bahá'u'lláh taught that God is directly unknowable to common mortals, but that his attributes and qualities can be indirectly known by learning from and imitating his divine Manifestations, which in Bahá'í theology are somewhat comparable to Hindu avatars or Abrahamic prophets. These Manifestations are the great prophets and teachers of many of the major religious traditions. These include Krishna, Buddha, Jesus, Zoroaster, Muhammad, Bahá'u'lláh, and others. Although the faith is strictly monotheistic, it also preaches the unity of all religions and focuses on these multiple epiphanies as necessary for meeting the needs of humanity at different points in history and for different cultures, and as part of a scheme of progressive revelation and education of humanity.
Classical theists (such as ancient Greco-Medieval philosophers, Roman Catholics, Eastern Orthodox Christians, many Jews and Muslims, and some Protestants)[a] speak of God as a divinely simple 'nothing' that is completely transcendent (totally independent of all else), and having attributes such as immutability, impassibility, and timelessness.[118] Theologians of theistic personalism (the view held by Rene Descartes, Isaac Newton, Alvin Plantinga, Richard Swinburne, William Lane Craig, and most modern evangelicals) argue that God is most generally the ground of all being, immanent in and transcendent over the whole world of reality, with immanence and transcendence being the contrapletes of personality.[119] Carl Jung equated religious ideas of God with transcendental metaphors of higher consciousness, in which God can just as easily be imagined "as an eternally flowing current of vital energy that endlessly changes shape ... as an eternally unmoved, unchangeable essence."[120]
Many philosophers developed arguments for the existence of God,[4] while attempting to comprehend the precise implications of God's attributes. Reconciling some of those attributes—particularly the attributes of the God of theistic personalism—generated important philosophical problems and debates. For example, God's omniscience may seem to imply that God knows how free agents will choose to act. If God does know this, their ostensible free will might be illusory, or foreknowledge does not imply predestination; and if God does not know it, God may not be omniscient.[121]
The last centuries of philosophy have seen vigorous questions regarding the arguments for God's existence raised by such philosophers as Immanuel Kant, David Hume and Antony Flew, although Kant held that the argument from morality was valid. The theist response has been either to contend, as does Alvin Plantinga, that faith is "properly basic", or to take, as does Richard Swinburne, the evidentialist position.[122] Some theists agree that only some of the arguments for God's existence are compelling, but argue that faith is not a product of reason, but requires risk. There would be no risk, they say, if the arguments for God's existence were as solid as the laws of logic, a position summed up by Pascal as "the heart has reasons of which reason does not know."[123]
Many religious believers allow for the existence of other, less powerful spiritual beings such as angels, saints, jinn, demons, and devas.[124][125][126][127][128]
en/272.html.txt
ADDED
@@ -0,0 +1,59 @@
Anthony Horowitz, OBE (born 5 April 1955) is an English novelist and screenwriter specialising in mystery and suspense.
His work for young adult readers includes The Diamond Brothers series, the Alex Rider series, and The Power of Five series (a.k.a. The Gatekeepers). His work for adults includes the play Mindgame (2001); two Sherlock Holmes novels, The House of Silk (2011) and Moriarty (2014); and three novels featuring his own detectives, Magpie Murders (2016), The Word Is Murder (2017), and The Sentence is Death (2018). He was also chosen to write James Bond novels by the Ian Fleming estate, starting with Trigger Mortis (2015).
He has also written for television, contributing scripts to ITV's Agatha Christie's Poirot and Midsomer Murders. He was the creator and writer of the ITV series Foyle's War, Collision and Injustice and the BBC series New Blood.
Horowitz was born in Stanmore, Middlesex, into a Jewish family, and in his early years lived an upper middle class lifestyle.[2][3][4] As an overweight and unhappy child, Horowitz enjoyed reading books from his father's library.
Horowitz started writing at the age of 8 or 9, and he instantly "knew" he would be a professional writer. This was because he was an underachiever in school, neither physically fit nor, by his own account, very intelligent, and he found his escape in books and in telling stories. In an interview Horowitz states: "I was quite certain, from my earliest memory, that I would be a professional writer and nothing but."[5]
At age 13 he went to Rugby School, a public school in Rugby, Warwickshire. Horowitz's mother introduced him to Frankenstein and Dracula. She also gave him a human skull for his 13th birthday; Horowitz said in an interview that it reminds him to get to the end of each story, since he will soon look like the skull.[6] He graduated from the University of York, where he was in Vanbrugh College, with a lower second class degree in English literature and art history in 1977.[7][8]
In at least one interview, Horowitz claims to believe that H. P. Lovecraft based his fictional Necronomicon on a real text, and to have read some of that text.[9]
Horowitz's father was associated with some of the politicians in the "circle" of prime minister Harold Wilson, including Eric Miller.[10] Facing bankruptcy, he moved his assets into Swiss numbered bank accounts. He died from cancer when his son Anthony was 22, and the family was never able to track down the missing money despite years of trying.[4]
Horowitz now lives in Central London with his wife Jill Green, whom he married in Hong Kong on 15 April 1988. Green produced Foyle's War, the series Horowitz wrote for ITV. They have two sons. He credits his family with much of his success in writing, as he says they help him with ideas and research. He is a patron of child protection charity Kidscape.[11] Politically, he considers himself to be "vaguely conservative".[12]
Anthony Horowitz's first book, The Sinister Secret of Frederick K Bower, was a humorous adventure for children, published in 1979[13] and later reissued as Enter Frederick K Bower. In 1981 his second novel, Misha, the Magician and the Mysterious Amulet was published and he moved to Paris to write his third book.[14] In 1983 the first of the Pentagram series, The Devil's Door-Bell, was released. This story saw Martin Hopkins battling an ancient evil that threatened the whole world. Only three of four remaining stories in the series were ever written: The Night of the Scorpion (1984), The Silver Citadel (1986) and Day of the Dragon (1986). In 1985, he released Myths and Legends, a collection of retold tales from around the world.
In between writing these novels, Horowitz turned his attention to legendary characters, working with Richard Carpenter on the Robin of Sherwood television series, writing five episodes of the third season. He also novelised three of Carpenter's episodes as a children's book under the title Robin Sherwood: The Hooded Man (1986). In addition, he created Crossbow (1987), a half-hour action adventure series loosely based on William Tell.
In 1988, Groosham Grange was published. This book went on to win the 1989 Lancashire Children's Book of the Year Award.[15] It was partially based on the years Horowitz spent at boarding school. Its central character is a thirteen-year-old "witch", David Eliot, gifted as the seventh son of a seventh son. Like Horowitz's, Eliot's childhood is unhappy. The Groosham Grange books are aimed at a slightly younger audience than Horowitz's previous books.
This era in Horowitz's career also saw the publication of Adventurer (1987), a thriller about a convict stuck on a prisoner ship with his sworn enemy, and Starting Out (1990), a collection of screenplays by the author himself. However, the most significant release of Horowitz's early career was The Falcon's Malteser (1986). This book was the first in the successful Diamond Brothers series, and was filmed for television in 1989 as Just Ask for Diamond, with an all-star cast that included Bill Paterson, Jimmy Nail, Roy Kinnear, Susannah York, Michael Robbins and Patricia Hodge, and featured Colin Dale and Dursley McLinden as Nick and Tim Diamond. It was followed in 1987 by Public Enemy Number Two and in 1991 by South by South East, then by The French Confection, I Know What You Did Last Wednesday, The Blurred Man and most recently The Greek Who Stole Christmas.
Horowitz wrote many stand-alone novels in the 1990s. 1994's Granny, a comedy thriller about an evil grandmother, was Horowitz's first book in three years, and it was the first of three books for an audience similar to that of Groosham Grange. The second of these was The Switch, a body swap story, first published in 1996. The third was 1997's The Devil and His Boy, which is set in the Elizabethan era and explores the rumour of Elizabeth I's secret son.
In 1999, The Unholy Grail was published as a sequel to Groosham Grange. The Unholy Grail was renamed as Return to Groosham Grange in 2003, possibly to help readers understand the connection between the books. Horowitz Horror (1999) and More Horowitz Horror (2000) saw Horowitz exploring a darker side of his writing. Each book contains several short horror stories. Many of these stories were repackaged in twos or threes as the Pocket Horowitz series.
Horowitz began his most famous and successful series in the new millennium with the Alex Rider novels. These books are about a 14-year-old boy becoming a spy, a member of the British Secret Service branch MI6. There are eleven books where Alex Rider is the protagonist, and a twelfth is connected to the Alex Rider series: Stormbreaker (2000), Point Blanc (2001), Skeleton Key (2002), Eagle Strike (2003), Scorpia (2004), Ark Angel (2005), Snakehead (2007), Crocodile Tears (2009), Scorpia Rising (2011), and the connecting novel Russian Roulette (2013).[16] Horowitz had stated that Scorpia Rising was to be the last book in the Alex Rider series prior to writing Russian Roulette about the life of Yassen Gregorovich,[17] but he has returned to the series with Never Say Die (2017) and Nightshade (2020).
In 2003, Horowitz also wrote three novels featuring the Diamond Brothers: The Blurred Man, The French Confection and I Know What You Did Last Wednesday, which were republished together as Three of Diamonds in 2004. The author information page in early editions of Scorpia and the introduction to Three of Diamonds claimed that Horowitz had travelled to Australia to research a new Diamond Brothers book, entitled Radius of the Lost Shark. However, this book has not been mentioned since, so it is doubtful it is still planned. A new Diamond Brothers "short" book entitled The Greek Who Stole Christmas! was later released. It is hinted at the end of The Greek Who Stole Christmas that Radius of the Lost Shark may turn out to be the eighth book in the series.[18] Anthony Horowitz was asked on Twitter in 2012 by a fan when this book would come out; Horowitz replied that he had not yet started on the book, so it would certainly not appear for another three years.[19] In 2015, Horowitz stated in a newspaper interview that he would write at least another six books before continuing the Diamond Brothers series.[20]
In 2004, Horowitz branched out to an adult audience with The Killing Joke, a comedy about a man who tries to track a joke to its source with disastrous consequences. Horowitz's second adult novel, Magpie Murders, is about "a whodunit writer who is murdered while he's writing his latest whodunit".[21] Having previously spoken about the book in 2005, Horowitz expected to finish it in late 2015,[22] and it was published in October 2016.[23]
In August 2005, Horowitz released a book called Raven's Gate which began another series entitled The Power of Five (The Gatekeepers in the United States). He describes it as "Alex Rider with witches and devils".[24] The second book in the series, Evil Star, was released in April 2006. The third in the series is called Nightrise, and was released on 2 April 2007. The fourth book Necropolis was released in October 2008. The fifth and last book was released in October 2012 and is named Oblivion.
In October 2008, Anthony Horowitz's play Mindgame opened Off Broadway at the SoHo Playhouse in New York City.[25] Mindgame starred Keith Carradine, Lee Godart, and Kathleen McNenny. The production was the New York stage directorial debut for Ken Russell. Also in 2008, he got into a joking dispute with Darren Shan over Shan's use of the name Antoine Horwitzer for an objectionable character. Rather than suing, Horowitz plotted a literary revenge.[26]
In March 2009 he was a guest on Private Passions, the biographical music discussion programme on BBC Radio 3.[27]
On 19 January 2011, the estate of Arthur Conan Doyle announced that Horowitz was to be the writer of a new Sherlock Holmes novel, the first such effort to receive an official endorsement from them and to be entitled The House of Silk. It was both published[28][29][30] in November 2011 and broadcast on BBC Radio 4.[31] A follow-up novel, Moriarty, was published in 2014.
In October 2014, the Ian Fleming estate commissioned Horowitz to write a James Bond novel, Trigger Mortis, which was released in 2015. It was followed by a second novel, Forever and A Day, which came out on 31 May 2018.[32]
Horowitz was appointed Officer of the Order of the British Empire (OBE) in the 2014 New Year Honours for services to literature.[33]
Horowitz began writing for television in the 1980s, contributing to the children's anthology series Dramarama, and also writing for the popular fantasy series Robin of Sherwood. His association with murder mysteries began with the adaptation of several Hercule Poirot stories for ITV's popular Agatha Christie's Poirot series during the 1990s.
Often his work has a comic edge, such as with the comic murder anthology Murder Most Horrid (BBC Two, 1991) and the comedy-drama The Last Englishman (1995), starring Jim Broadbent. From 1997, he wrote the majority of the episodes in the early series of Midsomer Murders. In 2001, he created a drama anthology series of his own for the BBC, Murder in Mind, an occasional series which deals with a different set of characters and a different murder every one-hour episode.
He is also less favourably known for the creation of two short-lived and sometimes derided science-fiction shows, Crime Traveller (1997) for BBC One and The Vanishing Man (pilot 1996, series 1998) for ITV. While Crime Traveller received favourable viewing figures, it was not renewed for a second season, which Horowitz attributes to personnel changes within the BBC at the time. In 2002, the detective series Foyle's War, set during the Second World War, was launched.
He devised the 2009 ITV crime drama Collision and co-wrote the screenplay with Michael A. Walker.
Horowitz is the writer of a feature film screenplay, The Gathering, which was released in 2003 and starred Christina Ricci. He wrote the screenplay for Alex Rider's first major motion picture, Stormbreaker.
en/2720.html.txt
ADDED
@@ -0,0 +1,216 @@
India, officially the Republic of India (Hindi: Bhārat Gaṇarājya),[23] is a country in South Asia. It is the second-most populous country, the seventh-largest country by area, and the most populous democracy in the world. Bounded by the Indian Ocean on the south, the Arabian Sea on the southwest, and the Bay of Bengal on the southeast, it shares land borders with Pakistan to the west;[f] China, Nepal, and Bhutan to the north; and Bangladesh and Myanmar to the east. In the Indian Ocean, India is in the vicinity of Sri Lanka and the Maldives; its Andaman and Nicobar Islands share a maritime border with Thailand and Indonesia.
Modern humans arrived on the Indian subcontinent from Africa no later than 55,000 years ago.[24]
Their long occupation, initially in varying forms of isolation as hunter-gatherers, has made the region highly diverse, second only to Africa in human genetic diversity.[25] Settled life emerged on the subcontinent in the western margins of the Indus river basin 9,000 years ago, evolving gradually into the Indus Valley Civilisation of the third millennium BCE.[26]
By 1200 BCE, an archaic form of Sanskrit, an Indo-European language, had diffused into India from the northwest, unfolding as the language of the Rigveda, and recording the dawning of Hinduism in India.[27]
The Dravidian languages of India were supplanted in the northern regions.[28]
By 400 BCE, stratification and exclusion by caste had emerged within Hinduism,[29]
and Buddhism and Jainism had arisen, proclaiming social orders unlinked to heredity.[30]
Early political consolidations gave rise to the loose-knit Maurya and Gupta Empires based in the Ganges Basin.[31]
Their collective era was suffused with wide-ranging creativity,[32] but also marked by the declining status of women,[33] and the incorporation of untouchability into an organised system of belief.[g][34] In South India, the Middle kingdoms exported Dravidian-language scripts and religious cultures to the kingdoms of Southeast Asia.[35]
In the early medieval era, Christianity, Islam, Judaism, and Zoroastrianism put down roots on India's southern and western coasts.[36]
Muslim armies from Central Asia intermittently overran India's northern plains,[37]
eventually establishing the Delhi Sultanate, and drawing northern India into the cosmopolitan networks of medieval Islam.[38]
In the 15th century, the Vijayanagara Empire created a long-lasting composite Hindu culture in south India.[39]
In the Punjab, Sikhism emerged, rejecting institutionalised religion.[40]
The Mughal Empire, in 1526, ushered in two centuries of relative peace,[41]
leaving a legacy of luminous architecture.[h][42]
Gradually expanding rule of the British East India Company followed, turning India into a colonial economy, but also consolidating its sovereignty.[43] British Crown rule began in 1858. The rights promised to Indians were granted slowly,[44] but technological changes were introduced, and ideas of education, modernity and the public life took root.[45]
A pioneering and influential nationalist movement emerged, which was noted for nonviolent resistance and became the major factor in ending British rule.[46] In 1947 the British Indian Empire was partitioned into two independent dominions, a Hindu-majority Dominion of India and a Muslim-majority Dominion of Pakistan, amid large-scale loss of life and an unprecedented migration.[47][48]
India has been a secular federal republic since 1950, governed in a democratic parliamentary system. It is a pluralistic, multilingual and multi-ethnic society. India's population grew from 361 million in 1951 to 1,211 million in 2011.[49]
During the same time, its nominal per capita income increased from US$64 annually to US$1,498, and its literacy rate from 16.6% to 74%. From being a comparatively destitute country in 1951,[50]
India has become a fast-growing major economy, a hub for information technology services, with an expanding middle class.[51] It has a space programme which includes several planned or completed extraterrestrial missions. Indian movies, music, and spiritual teachings play an increasing role in global culture.[52]
India has substantially reduced its rate of poverty, though at the cost of increasing economic inequality.[53]
India is a nuclear weapons state, which ranks high in military expenditure. It has disputes over Kashmir with its neighbours, Pakistan and China, unresolved since the mid-20th century.[54]
Among the socio-economic challenges India faces are gender inequality, child malnutrition,[55]
and rising levels of air pollution.[56]
India's land is megadiverse, with four biodiversity hotspots.[57] Its forest cover comprises 21.4% of its area.[58] India's wildlife, which has traditionally been viewed with tolerance in India's culture,[59] is supported among these forests, and elsewhere, in protected habitats.
According to the Oxford English Dictionary (Third Edition 2009), the name "India" is derived from the Classical Latin India, a reference to South Asia and an uncertain region to its east; and in turn derived successively from: Hellenistic Greek India ( Ἰνδία); ancient Greek Indos ( Ἰνδός); Old Persian Hindush, an eastern province of the Achaemenid empire; and ultimately its cognate, the Sanskrit Sindhu, or "river," specifically the Indus river and, by implication, its well-settled southern basin.[60][61] The ancient Greeks referred to the Indians as Indoi (Ἰνδοί), which translates as "The people of the Indus".[62]
The term Bharat (Bhārat; pronounced [ˈbʱaːɾət] (listen)), mentioned in both Indian epic poetry and the Constitution of India,[63][64] is used in its variations by many Indian languages. A modern rendering of the historical name Bharatavarsha, which applied originally to a region of the Gangetic Valley,[65][66] Bharat gained increased currency from the mid-19th century as a native name for India.[63][67]
Hindustan ([ɦɪndʊˈstaːn] (listen)) is a Middle Persian name for India, introduced during the Mughal Empire and used widely since. Its meaning has varied, referring to a region encompassing present-day northern India and Pakistan or to India in its near entirety.[63][67][68]
By 55,000 years ago, the first modern humans, or Homo sapiens, had arrived on the Indian subcontinent from Africa, where they had earlier evolved.[69][70][71]
The earliest known modern human remains in South Asia date to about 30,000 years ago.[72] After 6500 BCE, evidence for domestication of food crops and animals, construction of permanent structures, and storage of agricultural surplus appeared in Mehrgarh and other sites in what is now Balochistan.[73] These gradually developed into the Indus Valley Civilisation,[74][73] the first urban culture in South Asia,[75] which flourished during 2500–1900 BCE in what is now Pakistan and western India.[76] Centred around cities such as Mohenjo-daro, Harappa, Dholavira, and Kalibangan, and relying on varied forms of subsistence, the civilisation engaged robustly in crafts production and wide-ranging trade.[75]
During the period 2000–500 BCE, many regions of the subcontinent transitioned from the Chalcolithic cultures to the Iron Age ones.[77] The Vedas, the oldest scriptures associated with Hinduism,[78] were composed during this period,[79] and historians have analysed these to posit a Vedic culture in the Punjab region and the upper Gangetic Plain.[77] Most historians also consider this period to have encompassed several waves of Indo-Aryan migration into the subcontinent from the north-west.[78] The caste system, which created a hierarchy of priests, warriors, and free peasants, but which excluded indigenous peoples by labelling their occupations impure, arose during this period.[80] On the Deccan Plateau, archaeological evidence from this period suggests the existence of a chiefdom stage of political organisation.[77] In South India, a progression to sedentary life is indicated by the large number of megalithic monuments dating from this period,[81] as well as by nearby traces of agriculture, irrigation tanks, and craft traditions.[81]
In the late Vedic period, around the 6th century BCE, the small states and chiefdoms of the Ganges Plain and the north-western regions had consolidated into 16 major oligarchies and monarchies that were known as the mahajanapadas.[82][83] The emerging urbanisation gave rise to non-Vedic religious movements, two of which became independent religions. Jainism came into prominence during the life of its exemplar, Mahavira.[84] Buddhism, based on the teachings of Gautama Buddha, attracted followers from all social classes excepting the middle class; chronicling the life of the Buddha was central to the beginnings of recorded history in India.[85][86][87] In an age of increasing urban wealth, both religions held up renunciation as an ideal,[88] and both established long-lasting monastic traditions. Politically, by the 3rd century BCE, the kingdom of Magadha had annexed or reduced other states to emerge as the Mauryan Empire.[89] The empire was once thought to have controlled most of the subcontinent except the far south, but its core regions are now thought to have been separated by large autonomous areas.[90][91] The Mauryan kings are known as much for their empire-building and determined management of public life as for Ashoka's renunciation of militarism and far-flung advocacy of the Buddhist dhamma.[92][93]
The Sangam literature of the Tamil language reveals that, between 200 BCE and 200 CE, the southern peninsula was ruled by the Cheras, the Cholas, and the Pandyas, dynasties that traded extensively with the Roman Empire and with West and South-East Asia.[94][95] In North India, Hinduism asserted patriarchal control within the family, leading to increased subordination of women.[96][89] By the 4th and 5th centuries, the Gupta Empire had created a complex system of administration and taxation in the greater Ganges Plain; this system became a model for later Indian kingdoms.[97][98] Under the Guptas, a renewed Hinduism based on devotion, rather than the management of ritual, began to assert itself.[99] This renewal was reflected in a flowering of sculpture and architecture, which found patrons among an urban elite.[98] Classical Sanskrit literature flowered as well, and Indian science, astronomy, medicine, and mathematics made significant advances.[98]
The Indian early medieval age, 600 CE to 1200 CE, is defined by regional kingdoms and cultural diversity.[100] When Harsha of Kannauj, who ruled much of the Indo-Gangetic Plain from 606 to 647 CE, attempted to expand southwards, he was defeated by the Chalukya ruler of the Deccan.[101] When his successor attempted to expand eastwards, he was defeated by the Pala king of Bengal.[101] When the Chalukyas attempted to expand southwards, they were defeated by the Pallavas from farther south, who in turn were opposed by the Pandyas and the Cholas from still farther south.[101] No ruler of this period was able to create an empire and consistently control lands much beyond his core region.[100] During this time, pastoral peoples, whose land had been cleared to make way for the growing agricultural economy, were accommodated within caste society, as were new non-traditional ruling classes.[102] The caste system consequently began to show regional differences.[102]
In the 6th and 7th centuries, the first devotional hymns were created in the Tamil language.[103] They were imitated all over India and led to both the resurgence of Hinduism and the development of all modern languages of the subcontinent.[103] Indian royalty, big and small, and the temples they patronised drew citizens in great numbers to the capital cities, which became economic hubs as well.[104] Temple towns of various sizes began to appear everywhere as India underwent another urbanisation.[104] By the 8th and 9th centuries, the effects were felt in South-East Asia, as South Indian culture and political systems were exported to lands that became part of modern-day Myanmar, Thailand, Laos, Cambodia, Vietnam, Philippines, Malaysia, and Java.[105] Indian merchants, scholars, and sometimes armies were involved in this transmission; South-East Asians took the initiative as well, with many sojourning in Indian seminaries and translating Buddhist and Hindu texts into their languages.[105]
After the 10th century, Muslim Central Asian nomadic clans, using swift-horse cavalry and raising vast armies united by ethnicity and religion, repeatedly overran South Asia's north-western plains, leading eventually to the establishment of the Islamic Delhi Sultanate in 1206.[106] The sultanate was to control much of North India and to make many forays into South India. Although at first disruptive for the Indian elites, the sultanate largely left its vast non-Muslim subject population to its own laws and customs.[107][108] By repeatedly repulsing Mongol raiders in the 13th century, the sultanate saved India from the devastation visited on West and Central Asia, setting the scene for centuries of migration of fleeing soldiers, learned men, mystics, traders, artists, and artisans from that region into the subcontinent, thereby creating a syncretic Indo-Islamic culture in the north.[109][110] The sultanate's raiding and weakening of the regional kingdoms of South India paved the way for the indigenous Vijayanagara Empire.[111] Embracing a strong Shaivite tradition and building upon the military technology of the sultanate, the empire came to control much of peninsular India,[112] and was to influence South Indian society for long afterwards.[111]
In the early 16th century, northern India, then under mainly Muslim rulers,[113] fell again to the superior mobility and firepower of a new generation of Central Asian warriors.[114] The resulting Mughal Empire did not stamp out the local societies it came to rule. Instead, it balanced and pacified them through new administrative practices[115][116] and diverse and inclusive ruling elites,[117] leading to more systematic, centralised, and uniform rule.[118] Eschewing tribal bonds and Islamic identity, especially under Akbar, the Mughals united their far-flung realms through loyalty, expressed through a Persianised culture, to an emperor who had near-divine status.[117] The Mughal state's economic policies, deriving most revenues from agriculture[119] and mandating that taxes be paid in the well-regulated silver currency,[120] caused peasants and artisans to enter larger markets.[118] The relative peace maintained by the empire during much of the 17th century was a factor in India's economic expansion,[118] resulting in greater patronage of painting, literary forms, textiles, and architecture.[121] Newly coherent social groups in northern and western India, such as the Marathas, the Rajputs, and the Sikhs, gained military and governing ambitions during Mughal rule, which, through collaboration or adversity, gave them both recognition and military experience.[122] Expanding commerce during Mughal rule gave rise to new Indian commercial and political elites along the coasts of southern and eastern India.[122] As the empire disintegrated, many among these elites were able to seek and control their own affairs.[123]
By the early 18th century, with the lines between commercial and political dominance being increasingly blurred, a number of European trading companies, including the English East India Company, had established coastal outposts.[124][125] The East India Company's control of the seas, greater resources, and more advanced military training and technology led it to increasingly flex its military muscle and caused it to become attractive to a portion of the Indian elite; these factors were crucial in allowing the company to gain control over the Bengal region by 1765 and sideline the other European companies.[126][124][127][128] Its further access to the riches of Bengal and the subsequent increased strength and size of its army enabled it to annexe or subdue most of India by the 1820s.[129] India was then no longer exporting manufactured goods as it long had, but was instead supplying the British Empire with raw materials. Many historians consider this to be the onset of India's colonial period.[124] By this time, with its economic power severely curtailed by the British parliament and having effectively been made an arm of British administration, the company began more consciously to enter non-economic arenas like education, social reform, and culture.[130]
Historians consider India's modern age to have begun sometime between 1848 and 1885. The appointment in 1848 of Lord Dalhousie as Governor General of the East India Company set the stage for changes essential to a modern state. These included the consolidation and demarcation of sovereignty, the surveillance of the population, and the education of citizens. Technological changes—among them, railways, canals, and the telegraph—were introduced not long after their introduction in Europe.[131][132][133][134] However, disaffection with the company also grew during this time and set off the Indian Rebellion of 1857. Fed by diverse resentments and perceptions, including invasive British-style social reforms, harsh land taxes, and summary treatment of some rich landowners and princes, the rebellion rocked many regions of northern and central India and shook the foundations of Company rule.[135][136] Although the rebellion was suppressed by 1858, it led to the dissolution of the East India Company and the direct administration of India by the British government. Proclaiming a unitary state and a gradual but limited British-style parliamentary system, the new rulers also protected princes and landed gentry as a feudal safeguard against future unrest.[137][138] In the decades following, public life gradually emerged all over India, leading eventually to the founding of the Indian National Congress in 1885.[139][140][141][142]
The rush of technology and the commercialisation of agriculture in the second half of the 19th century was marked by economic setbacks; many small farmers became dependent on the whims of far-away markets.[143] There was an increase in the number of large-scale famines,[144] and, despite the risks of infrastructure development borne by Indian taxpayers, little industrial employment was generated for Indians.[145] There were also salutary effects: commercial cropping, especially in the newly canalled Punjab, led to increased food production for internal consumption.[146] The railway network provided critical famine relief,[147] notably reduced the cost of moving goods,[147] and helped nascent Indian-owned industry.[146]
After World War I, in which approximately one million Indians served,[148] a new period began. It was marked by British reforms but also repressive legislation, by more strident Indian calls for self-rule, and by the beginnings of a nonviolent movement of non-co-operation, of which Mohandas Karamchand Gandhi would become the leader and enduring symbol.[149] During the 1930s, slow legislative reform was enacted by the British; the Indian National Congress won victories in the resulting elections.[150] The next decade was beset with crises: Indian participation in World War II, the Congress's final push for non-co-operation, and an upsurge of Muslim nationalism. All were capped by the advent of independence in 1947, but tempered by the partition of India into two states: India and Pakistan.[151]
Vital to India's self-image as an independent nation was its constitution, completed in 1950, which put in place a secular and democratic republic.[152] It has remained a democracy with civil liberties, an active Supreme Court, and a largely independent press.[153] Economic liberalisation, which began in the 1990s, has created a large urban middle class, transformed India into one of the world's fastest-growing economies,[154] and increased its geopolitical clout. Indian movies, music, and spiritual teachings play an increasing role in global culture.[153] Yet, India is also shaped by seemingly unyielding poverty, both rural and urban;[153] by religious and caste-related violence;[155] by Maoist-inspired Naxalite insurgencies;[156] and by separatism in Jammu and Kashmir and in Northeast India.[157] It has unresolved territorial disputes with China[158] and with Pakistan.[158] India's sustained democratic freedoms are unique among the world's newer nations; however, in spite of its recent economic successes, freedom from want for its disadvantaged population remains a goal yet to be achieved.[159]
India accounts for the bulk of the Indian subcontinent, lying atop the Indian tectonic plate, a part of the Indo-Australian Plate.[160] India's defining geological processes began 75 million years ago when the Indian Plate, then part of the southern supercontinent Gondwana, began a north-eastward drift caused by seafloor spreading to its south-west, and later, south and south-east.[160] Simultaneously, the vast Tethyan oceanic crust, to its northeast, began to subduct under the Eurasian Plate.[160] These dual processes, driven by convection in the Earth's mantle, both created the Indian Ocean and caused the Indian continental crust eventually to under-thrust Eurasia and to uplift the Himalayas.[160] Immediately south of the emerging Himalayas, plate movement created a vast trough that rapidly filled with river-borne sediment[161] and now constitutes the Indo-Gangetic Plain.[162] Cut off from the plain by the ancient Aravalli Range lies the Thar Desert.[163]
|
69 |
+
|
70 |
+
The original Indian Plate survives as peninsular India, the oldest and geologically most stable part of India. It extends as far north as the Satpura and Vindhya ranges in central India. These parallel chains run from the Arabian Sea coast in Gujarat in the west to the coal-rich Chota Nagpur Plateau in Jharkhand in the east.[164] To the south, the remaining peninsular landmass, the Deccan Plateau, is flanked on the west and east by coastal ranges known as the Western and Eastern Ghats;[165] the plateau contains the country's oldest rock formations, some over one billion years old. Constituted in such fashion, India lies to the north of the equator between 6° 44′ and 35° 30′ north latitude[i] and 68° 7′ and 97° 25′ east longitude.[166]
|
71 |
+
|
72 |
+
India's coastline measures 7,517 kilometres (4,700 mi) in length; of this distance, 5,423 kilometres (3,400 mi) belong to peninsular India and 2,094 kilometres (1,300 mi) to the Andaman, Nicobar, and Lakshadweep island chains.[167] According to the Indian naval hydrographic charts, the mainland coastline consists of the following: 43% sandy beaches; 11% rocky shores, including cliffs; and 46% mudflats or marshy shores.[167]
|
73 |
+
|
74 |
+
Major Himalayan-origin rivers that substantially flow through India include the Ganges and the Brahmaputra, both of which drain into the Bay of Bengal.[169] Important tributaries of the Ganges include the Yamuna and the Kosi; the latter's extremely low gradient, caused by long-term silt deposition, leads to severe floods and course changes.[170][171] Major peninsular rivers, whose steeper gradients prevent their waters from flooding, include the Godavari, the Mahanadi, the Kaveri, and the Krishna, which also drain into the Bay of Bengal;[172] and the Narmada and the Tapti, which drain into the Arabian Sea.[173] Coastal features include the marshy Rann of Kutch of western India and the alluvial Sundarbans delta of eastern India; the latter is shared with Bangladesh.[174] India has two archipelagos: the Lakshadweep, coral atolls off India's south-western coast; and the Andaman and Nicobar Islands, a volcanic chain in the Andaman Sea.[175]
|
75 |
+
|
76 |
+
The Indian climate is strongly influenced by the Himalayas and the Thar Desert, both of which drive the economically and culturally pivotal summer and winter monsoons.[176] The Himalayas prevent cold Central Asian katabatic winds from blowing in, keeping the bulk of the Indian subcontinent warmer than most locations at similar latitudes.[177][178] The Thar Desert plays a crucial role in attracting the moisture-laden south-west summer monsoon winds that, between June and October, provide the majority of India's rainfall.[176] Four major climatic groupings predominate in India: tropical wet, tropical dry, subtropical humid, and montane.[179]
|
77 |
+
|
78 |
+
India is a megadiverse country, a term employed for 17 countries which display high biological diversity and contain many species exclusively indigenous, or endemic, to them.[181] India is a habitat for 8.6% of all mammal species, 13.7% of bird species, 7.9% of reptile species, 6% of amphibian species, 12.2% of fish species, and 6.0% of all flowering plant species.[182][183] Fully a third of Indian plant species are endemic.[184] India also contains four of the world's 34 biodiversity hotspots,[57] or regions that display significant habitat loss in the presence of high endemism.[j][185]
|
79 |
+
|
80 |
+
India's forest cover is 701,673 km2 (270,917 sq mi), which is 21.35% of the country's total land area. It can be subdivided further into broad categories of canopy density, or the proportion of the area of a forest covered by its tree canopy.[186] Very dense forest, whose canopy density is greater than 70%, occupies 2.61% of India's land area.[186] It predominates in the tropical moist forest of the Andaman Islands, the Western Ghats, and Northeast India.[187] Moderately dense forest, whose canopy density is between 40% and 70%, occupies 9.59% of India's land area.[186] It predominates in the temperate coniferous forest of the Himalayas, the moist deciduous sal forest of eastern India, and the dry deciduous teak forest of central and southern India.[187] Open forest, whose canopy density is between 10% and 40%, occupies 9.14% of India's land area,[186] and predominates in the babul-dominated thorn forest of the central Deccan Plateau and the western Gangetic plain.[187]
|
81 |
+
|
82 |
+
Among the Indian subcontinent's notable indigenous trees are the astringent Azadirachta indica, or neem, which is widely used in rural Indian herbal medicine,[188] and the luxuriant Ficus religiosa, or peepul,[189] which is displayed on the ancient seals of Mohenjo-daro,[190] and under which the Buddha is recorded in the Pali canon to have sought enlightenment.[191]
|
83 |
+
|
84 |
+
Many Indian species have descended from those of Gondwana, the southern supercontinent from which India separated more than 100 million years ago.[192] India's subsequent collision with Eurasia set off a mass exchange of species. However, volcanism and climatic changes later caused the extinction of many endemic Indian forms.[193] Still later, mammals entered India from Asia through two zoogeographical passes flanking the Himalayas.[187] This had the effect of lowering endemism among India's mammals, which stands at 12.6%, contrasting with 45.8% among reptiles and 55.8% among amphibians.[183] Notable endemics are the vulnerable[194] hooded leaf monkey[195] and the threatened[196] Beddome's toad[196][197] of the Western Ghats.
|
85 |
+
|
86 |
+
India contains 172 IUCN-designated threatened animal species, or 2.9% of endangered forms.[198] These include the endangered Bengal tiger and the Ganges river dolphin. Critically endangered species include: the gharial, a crocodilian; the great Indian bustard; and the Indian white-rumped vulture, which has become nearly extinct by having ingested the carrion of diclofenac-treated cattle.[199] The pervasive and ecologically devastating human encroachment of recent decades has critically endangered Indian wildlife. In response, the system of national parks and protected areas, first established in 1935, was expanded substantially. In 1972, India enacted the Wildlife Protection Act[200] and Project Tiger to safeguard crucial wilderness; the Forest Conservation Act was enacted in 1980 and amendments added in 1988.[201] India hosts more than five hundred wildlife sanctuaries and thirteen biosphere reserves,[202] four of which are part of the World Network of Biosphere Reserves; twenty-five wetlands are registered under the Ramsar Convention.[203]
|
87 |
+
|
88 |
+
India is the world's most populous democracy.[205] A parliamentary republic with a multi-party system,[206] it has eight recognised national parties, including the Indian National Congress and the Bharatiya Janata Party (BJP), and more than 40 regional parties.[207] The Congress is considered centre-left in Indian political culture,[208] and the BJP right-wing.[209][210][211] For most of the period between 1950—when India first became a republic—and the late 1980s, the Congress held a majority in the parliament. Since then, however, it has increasingly shared the political stage with the BJP,[212] as well as with powerful regional parties which have often forced the creation of multi-party coalition governments at the centre.[213]
|
89 |
+
|
90 |
+
In the Republic of India's first three general elections, in 1951, 1957, and 1962, the Jawaharlal Nehru-led Congress won easy victories. On Nehru's death in 1964, Lal Bahadur Shastri briefly became prime minister; he was succeeded, after his own unexpected death in 1966, by Nehru's daughter Indira Gandhi, who went on to lead the Congress to election victories in 1967 and 1971. Following public discontent with the state of emergency she declared in 1975, the Congress was voted out of power in 1977; the then-new Janata Party, which had opposed the emergency, was voted in. Its government lasted just over two years. Voted back into power in 1980, the Congress saw a change in leadership in 1984, when Indira Gandhi was assassinated; she was succeeded by her son Rajiv Gandhi, who won an easy victory in the general elections later that year. The Congress was voted out again in 1989 when a National Front coalition, led by the newly formed Janata Dal in alliance with the Left Front, won the elections; that government too proved relatively short-lived, lasting just under two years.[214] Elections were held again in 1991; no party won an absolute majority. The Congress, as the largest single party, was able to form a minority government led by P. V. Narasimha Rao.[215]
|
91 |
+
|
92 |
+
A two-year period of political turmoil followed the general election of 1996. Several short-lived alliances shared power at the centre. The BJP formed a government briefly in 1996; it was followed by two comparatively long-lasting United Front coalitions, which depended on external support. In 1998, the BJP was able to form a successful coalition, the National Democratic Alliance (NDA). Led by Atal Bihari Vajpayee, the NDA became the first non-Congress, coalition government to complete a five-year term.[216] Again in the 2004 Indian general elections, no party won an absolute majority, but the Congress emerged as the largest single party, forming another successful coalition: the United Progressive Alliance (UPA). It had the support of left-leaning parties and MPs who opposed the BJP. The UPA returned to power in the 2009 general election with increased numbers, and it no longer required external support from India's communist parties.[217] That year, Manmohan Singh became the first prime minister since Jawaharlal Nehru in 1957 and 1962 to be re-elected to a consecutive five-year term.[218] In the 2014 general election, the BJP became the first political party since 1984 to win a majority and govern without the support of other parties.[219] The incumbent prime minister is Narendra Modi, a former chief minister of Gujarat. On 20 July 2017, Ram Nath Kovind was elected India's 14th president and took the oath of office on 25 July 2017.[220][221][222]
|
93 |
+
|
94 |
+
India is a federation with a parliamentary system governed under the Constitution of India—the country's supreme legal document. It is a constitutional republic and representative democracy, in which "majority rule is tempered by minority rights protected by law". Federalism in India defines the power distribution between the union and the states. The Constitution of India, which came into effect on 26 January 1950,[224] originally stated India to be a "sovereign, democratic republic;" this characterisation was amended in 1971 to "a sovereign, socialist, secular, democratic republic".[225] India's form of government, traditionally described as "quasi-federal" with a strong centre and weak states,[226] has grown increasingly federal since the late 1990s as a result of political, economic, and social changes.[227][228]
|
95 |
+
|
96 |
+
The Government of India comprises three branches: the executive, the legislature, and the judiciary.[230]
|
97 |
+
|
98 |
+
India is a federal union comprising 28 states and 8 union territories.[245] All states, as well as the union territories of Jammu and Kashmir, Puducherry and the National Capital Territory of Delhi, have elected legislatures and governments following the Westminster system of governance. The remaining five union territories are directly ruled by the central government through appointed administrators. In 1956, under the States Reorganisation Act, states were reorganised on a linguistic basis.[246] There are over a quarter of a million local government bodies at city, town, block, district and village levels.[247]
|
99 |
+
|
100 |
+
In the 1950s, India strongly supported decolonisation in Africa and Asia and played a leading role in the Non-Aligned Movement.[249] After initially cordial relations with neighbouring China, India went to war with China in 1962, and was widely thought to have been humiliated. India has had tense relations with neighbouring Pakistan; the two nations have gone to war four times: in 1947, 1965, 1971, and 1999. Three of these wars were fought over the disputed territory of Kashmir, while the fourth, the 1971 war, followed from India's support for the independence of Bangladesh.[250] In the late 1980s, the Indian military twice intervened abroad at the invitation of the host country: a peace-keeping operation in Sri Lanka between 1987 and 1990; and an armed intervention to prevent a 1988 coup d'état attempt in the Maldives. After the 1965 war with Pakistan, India began to pursue close military and economic ties with the Soviet Union; by the late 1960s, the Soviet Union was its largest arms supplier.[251]
|
101 |
+
|
102 |
+
Aside from its ongoing special relationship with Russia,[252] India has wide-ranging defence relations with Israel and France. In recent years, it has played key roles in the South Asian Association for Regional Cooperation and the World Trade Organization. The nation has provided 100,000 military and police personnel to serve in 35 UN peacekeeping operations across four continents. It participates in the East Asia Summit, the G8+5, and other multilateral forums.[253] India has close economic ties with South America,[254] Asia, and Africa; it pursues a "Look East" policy that seeks to strengthen partnerships with the ASEAN nations, Japan, and South Korea that revolve around many issues, but especially those involving economic investment and regional security.[255][256]
|
103 |
+
|
104 |
+
China's nuclear test of 1964, as well as its repeated threats to intervene in support of Pakistan in the 1965 war, convinced India to develop nuclear weapons.[258] India conducted its first nuclear weapons test in 1974 and carried out additional underground testing in 1998. Despite criticism and military sanctions, India has signed neither the Comprehensive Nuclear-Test-Ban Treaty nor the Nuclear Non-Proliferation Treaty, considering both to be flawed and discriminatory.[259] India maintains a "no first use" nuclear policy and is developing a nuclear triad capability as a part of its "Minimum Credible Deterrence" doctrine.[260][261] It is developing a ballistic missile defence shield and a fifth-generation fighter jet.[262][263] Other indigenous military projects involve the design and implementation of Vikrant-class aircraft carriers and Arihant-class nuclear submarines.[264]
|
105 |
+
|
106 |
+
Since the end of the Cold War, India has increased its economic, strategic, and military co-operation with the United States and the European Union.[265] In 2008, a civilian nuclear agreement was signed between India and the United States. Although India possessed nuclear weapons at the time and was not a party to the Nuclear Non-Proliferation Treaty, it received waivers from the International Atomic Energy Agency and the Nuclear Suppliers Group, ending earlier restrictions on India's nuclear technology and commerce. As a consequence, India became the sixth de facto nuclear weapons state.[266] India subsequently signed co-operation agreements involving civilian nuclear energy with Russia,[267] France,[268] the United Kingdom,[269] and Canada.[270]
|
107 |
+
|
108 |
+
The President of India is the supreme commander of the nation's armed forces; with 1.395 million active troops, they compose the world's second-largest military, consisting of the Indian Army, the Indian Navy, the Indian Air Force, and the Indian Coast Guard.[271] The official Indian defence budget for 2011 was US$36.03 billion, or 1.83% of GDP.[272] For the fiscal year spanning 2012–2013, US$40.44 billion was budgeted.[273] According to a 2008 Stockholm International Peace Research Institute (SIPRI) report, India's annual military expenditure in terms of purchasing power stood at US$72.7 billion.[274] In 2011, the annual defence budget increased by 11.6%,[275] although this does not include funds that reach the military through other branches of government.[276] As of 2012, India is the world's largest arms importer; between 2007 and 2011, it accounted for 10% of funds spent on international arms purchases.[277] Much of the military expenditure was focused on defence against Pakistan and countering growing Chinese influence in the Indian Ocean.[275] In May 2017, the Indian Space Research Organisation launched the South Asia Satellite, a gift from India to its neighbouring SAARC countries.[278] In October 2018, India signed a US$5.43 billion (over ₹400 billion) agreement with Russia to procure four S-400 Triumf surface-to-air missile defence systems, Russia's most advanced long-range missile defence system.[279]
|
109 |
+
|
110 |
+
According to the International Monetary Fund (IMF), the Indian economy in 2019 was nominally worth $2.9 trillion; it is the fifth-largest economy by market exchange rates and, at around $11 trillion, the third-largest by purchasing power parity (PPP).[19] With its average annual GDP growth rate of 5.8% over the past two decades, and reaching 6.1% during 2011–2012,[283] India is one of the world's fastest-growing economies.[284] However, the country ranks 139th in the world in nominal GDP per capita and 118th in GDP per capita at PPP.[285] Until 1991, all Indian governments followed protectionist policies that were influenced by socialist economics. Widespread state intervention and regulation largely walled the economy off from the outside world. An acute balance of payments crisis in 1991 forced the nation to liberalise its economy;[286] since then it has moved slowly towards a free-market system[287][288] by emphasising both foreign trade and direct investment inflows.[289] India has been a member of WTO since 1 January 1995.[290]
|
111 |
+
|
112 |
+
The 513.7-million-worker Indian labour force is the world's second-largest, as of 2016.[271] The service sector makes up 55.6% of GDP, the industrial sector 26.3% and the agricultural sector 18.1%. India's foreign exchange remittances of US$70 billion in 2014, the largest in the world, were contributed to its economy by 25 million Indians working in foreign countries.[291] Major agricultural products include: rice, wheat, oilseed, cotton, jute, tea, sugarcane, and potatoes.[245] Major industries include: textiles, telecommunications, chemicals, pharmaceuticals, biotechnology, food processing, steel, transport equipment, cement, mining, petroleum, machinery, and software.[245] In 2006, the share of external trade in India's GDP stood at 24%, up from 6% in 1985.[287] In 2008, India's share of world trade was 1.68%;[292] in 2011, India was the world's tenth-largest importer and the nineteenth-largest exporter.[293] Major exports include: petroleum products, textile goods, jewellery, software, engineering goods, chemicals, and manufactured leather goods.[245] Major imports include: crude oil, machinery, gems, fertiliser, and chemicals.[245] Between 2001 and 2011, the contribution of petrochemical and engineering goods to total exports grew from 14% to 42%.[294] India was the world's second largest textile exporter after China in the 2013 calendar year.[295]
|
113 |
+
|
114 |
+
Averaging an economic growth rate of 7.5% for several years prior to 2007,[287] India has more than doubled its hourly wage rates during the first decade of the 21st century.[296] Some 431 million Indians have left poverty since 1985; India's middle classes are projected to number around 580 million by 2030.[297] Though ranking 51st in global competitiveness, as of 2010, India ranks 17th in financial market sophistication, 24th in the banking sector, 44th in business sophistication, and 39th in innovation, ahead of several advanced economies.[298] With seven of the world's top 15 information technology outsourcing companies based in India, as of 2009, the country is viewed as the second-most favourable outsourcing destination after the United States.[299] India's consumer market, the world's eleventh-largest, is expected to become fifth-largest by 2030.[297] However, barely 2% of Indians pay income taxes.[300]
|
115 |
+
|
116 |
+
Driven by growth, India's nominal GDP per capita increased steadily from US$329 in 1991, when economic liberalisation began, to US$1,265 in 2010, to an estimated US$1,723 in 2016. It is expected to grow to US$2,358 by 2020.[19] However, it has remained lower than those of other Asian developing countries such as Indonesia, Malaysia, the Philippines, Sri Lanka, and Thailand, and is expected to remain so in the near future. Its GDP per capita is higher than those of Bangladesh, Pakistan, Nepal, Afghanistan, and others.[301]
|
117 |
+
|
118 |
+
According to a 2011 PricewaterhouseCoopers (PwC) report, India's GDP at purchasing power parity could overtake that of the United States by 2045.[303] During the next four decades, Indian GDP is expected to grow at an annualised average of 8%, making it potentially the world's fastest-growing major economy until 2050.[303] The report highlights key growth factors: a young and rapidly growing working-age population; growth in the manufacturing sector because of rising education and engineering skill levels; and sustained growth of the consumer market driven by a rapidly growing middle-class.[303] The World Bank cautions that, for India to achieve its economic potential, it must continue to focus on public sector reform, transport infrastructure, agricultural and rural development, removal of labour regulations, education, energy security, and public health and nutrition.[304]
|
119 |
+
|
120 |
+
According to the Worldwide Cost of Living Report 2017 released by the Economist Intelligence Unit (EIU) which was created by comparing more than 400 individual prices across 160 products and services, four of the cheapest cities were in India: Bangalore (3rd), Mumbai (5th), Chennai (5th) and New Delhi (8th).[305]
|
121 |
+
|
122 |
+
India's telecommunication industry, the world's fastest-growing, added 227 million subscribers during the period 2010–2011,[306] and after the third quarter of 2017, India surpassed the US to become the second largest smartphone market in the world after China.[307]
|
123 |
+
|
124 |
+
The Indian automotive industry, the world's second-fastest growing, increased domestic sales by 26% during 2009–2010,[308] and exports by 36% during 2008–2009.[309] India's capacity to generate electrical power is 300 gigawatts, of which 42 gigawatts is renewable.[310] At the end of 2011, the Indian IT industry employed 2.8 million professionals, generated revenues close to US$100 billion equalling 7.5% of Indian GDP, and contributed 26% of India's merchandise exports.[311]
|
125 |
+
|
126 |
+
The pharmaceutical industry in India is among the significant emerging markets for the global pharmaceutical industry. The Indian pharmaceutical market is expected to reach $48.5 billion by 2020. India's R & D spending constitutes 60% of the biopharmaceutical industry.[312][313] India is among the top 12 biotech destinations in the world.[314][315] The Indian biotech industry grew by 15.1% in 2012–2013, increasing its revenues from ₹204.4 billion (Indian rupees) to ₹235.24 billion (US$3.94 billion at June 2013 exchange rates).[316]
|
127 |
+
|
128 |
+
Despite economic growth during recent decades, India continues to face socio-economic challenges. In 2006, India contained the largest number of people living below the World Bank's international poverty line of US$1.25 per day.[318] The proportion decreased from 60% in 1981 to 42% in 2005.[319] Under the World Bank's later revised poverty line, it was 21% in 2011.[l][321] Some 30.7% of India's children under the age of five are underweight.[322] According to a Food and Agriculture Organization report in 2015, 15% of the population is undernourished.[323][324] The Mid-Day Meal Scheme attempts to lower these rates.[325]
|
129 |
+
|
130 |
+
According to a 2016 Walk Free Foundation report, there were an estimated 18.3 million people in India, or 1.4% of the population, living in forms of modern slavery, such as bonded labour, child labour, human trafficking, and forced begging, among others.[326][327][328] According to the 2011 census, there were 10.1 million child labourers in the country, a decline of 2.6 million from 12.6 million in 2001.[329]
|
131 |
+
|
132 |
+
Since 1991, economic inequality between India's states has consistently grown: the per-capita net state domestic product of the richest states in 2007 was 3.2 times that of the poorest.[330] Corruption in India is perceived to have decreased. According to the Corruption Perceptions Index, India ranked 78th out of 180 countries in 2018 with a score of 41 out of 100, an improvement from 85th in 2014.[331][332]
|
133 |
+
|
134 |
+
With 1,210,193,422 residents reported in the 2011 provisional census report,[333] India is the world's second-most populous country. Its population grew by 17.64% from 2001 to 2011,[334] compared to 21.54% growth in the previous decade (1991–2001).[334] The human sex ratio, according to the 2011 census, is 940 females per 1,000 males.[333] The median age was 27.6 as of 2016.[271] The first post-colonial census, conducted in 1951, counted 361 million people.[335] Medical advances made in the last 50 years as well as increased agricultural productivity brought about by the "Green Revolution" have caused India's population to grow rapidly.[336]
|
135 |
+
|
136 |
+
The average life expectancy in India is 68 years—69.6 years for women, 67.3 years for men.[337] There are around 50 physicians per 100,000 Indians.[338] Migration from rural to urban areas has been an important dynamic in India's recent history. The number of people living in urban areas grew by 31.2% between 1991 and 2001.[339] Yet, in 2001, over 70% still lived in rural areas.[340][341] The level of urbanisation increased further from 27.81% in the 2001 Census to 31.16% in the 2011 Census. The slowing down of the overall population growth rate was due to the sharp decline in the growth rate in rural areas since 1991.[342] According to the 2011 census, there are 53 urban agglomerations in India with populations of more than one million; among them are Mumbai, Delhi, Kolkata, Chennai, Bangalore, Hyderabad and Ahmedabad, in decreasing order of population.[343] The literacy rate in 2011 was 74.04%: 65.46% among females and 82.14% among males.[344] The rural-urban literacy gap, which was 21.2 percentage points in 2001, dropped to 16.1 percentage points in 2011. The improvement in the rural literacy rate is twice that of urban areas.[342] Kerala is the most literate state, with 93.91% literacy, while Bihar is the least literate, with 63.82%.[344]
|
137 |
+
|
138 |
+
India is home to two major language families: Indo-Aryan (spoken by about 74% of the population) and Dravidian (spoken by 24% of the population). Other languages spoken in India come from the Austroasiatic and Sino-Tibetan language families. India has no national language.[345] Hindi, with the largest number of speakers, is the official language of the government.[346][347] English is used extensively in business and administration and has the status of a "subsidiary official language";[5] it is important in education, especially as a medium of higher education. Each state and union territory has one or more official languages, and the constitution recognises in particular 22 "scheduled languages".
|
139 |
+
|
140 |
+
The 2011 census reported the religion in India with the largest number of followers was Hinduism (79.80% of the population), followed by Islam (14.23%); the remaining were Christianity (2.30%), Sikhism (1.72%), Buddhism (0.70%), Jainism (0.36%) and others[m] (0.9%).[14] India has the world's largest Hindu, Sikh, Jain, Zoroastrian, and Bahá'í populations, and has the third-largest Muslim population—the largest for a non-Muslim majority country.[348][349]
|
141 |
+
|
142 |
+
Indian cultural history spans more than 4,500 years.[350] During the Vedic period (c. 1700 – c. 500 BCE), the foundations of Hindu philosophy, mythology, theology and literature were laid, and many beliefs and practices which still exist today, such as dhárma, kárma, yóga, and mokṣa, were established.[62] India is notable for its religious diversity, with Hinduism, Buddhism, Sikhism, Islam, Christianity, and Jainism among the nation's major religions.[351] The predominant religion, Hinduism, has been shaped by various historical schools of thought, including those of the Upanishads,[352] the Yoga Sutras, the Bhakti movement,[351] and by Buddhist philosophy.[353]
|
143 |
+
|
144 |
+
Much of Indian architecture, including the Taj Mahal, other works of Mughal architecture, and South Indian architecture, blends ancient local traditions with imported styles.[354] Vernacular architecture is also regional in its flavours. Vastu shastra, literally "science of construction" or "architecture" and ascribed to Mamuni Mayan,[355] explores how the laws of nature affect human dwellings;[356] it employs precise geometry and directional alignments to reflect perceived cosmic constructs.[357] As applied in Hindu temple architecture, it is influenced by the Shilpa Shastras, a series of foundational texts whose basic mythological form is the Vastu-Purusha mandala, a square that embodied the "absolute".[358] The Taj Mahal, built in Agra between 1631 and 1648 by orders of Emperor Shah Jahan in memory of his wife, has been described in the UNESCO World Heritage List as "the jewel of Muslim art in India and one of the universally admired masterpieces of the world's heritage".[359] Indo-Saracenic Revival architecture, developed by the British in the late 19th century, drew on Indo-Islamic architecture.[360]
|
145 |
+
|
146 |
+
The earliest literature in India, composed between 1500 BCE and 1200 CE, was in the Sanskrit language.[361] Major works of Sanskrit literature include the Rigveda (c. 1500 BCE – 1200 BCE); the epics: Mahābhārata (c. 400 BCE – 400 CE) and the Ramayana (c. 300 BCE and later); Abhijñānaśākuntalam (The Recognition of Śakuntalā) and other dramas of Kālidāsa (c. 5th century CE); and Mahākāvya poetry.[362][363][364] In Tamil literature, the Sangam literature (c. 600 BCE – 300 BCE), consisting of 2,381 poems composed by 473 poets, is the earliest work.[365][366][367][368] From the 14th to the 18th centuries, India's literary traditions went through a period of drastic change because of the emergence of devotional poets like Kabīr, Tulsīdās, and Guru Nānak. This period was characterised by a varied and wide spectrum of thought and expression; as a consequence, medieval Indian literary works differed significantly from classical traditions.[369] In the 19th century, Indian writers took a new interest in social questions and psychological descriptions. In the 20th century, Indian literature was influenced by the works of the Bengali poet and novelist Rabindranath Tagore,[370] who was a recipient of the Nobel Prize in Literature.
|
147 |
+
|
148 |
+
Indian music ranges over various traditions and regional styles. Classical music encompasses two genres and their various folk offshoots: the northern Hindustani and southern Carnatic schools.[371] Regionalised popular forms include filmi and folk music; the syncretic tradition of the bauls is a well-known form of the latter. Indian dance also features diverse folk and classical forms. Among the better-known folk dances are: the bhangra of Punjab, the bihu of Assam, the Jhumair and chhau of Jharkhand, Odisha and West Bengal, garba and dandiya of Gujarat, ghoomar of Rajasthan, and the lavani of Maharashtra. Eight dance forms, many with narrative forms and mythological elements, have been accorded classical dance status by India's National Academy of Music, Dance, and Drama. These are: bharatanatyam of the state of Tamil Nadu, kathak of Uttar Pradesh, kathakali and mohiniyattam of Kerala, kuchipudi of Andhra Pradesh, manipuri of Manipur, odissi of Odisha, and the sattriya of Assam.[372] Theatre in India melds music, dance, and improvised or written dialogue.[373] Often based on Hindu mythology, but also borrowing from medieval romances or social and political events, Indian theatre includes: the bhavai of Gujarat, the jatra of West Bengal, the nautanki and ramlila of North India, tamasha of Maharashtra, burrakatha of Andhra Pradesh, terukkuttu of Tamil Nadu, and the yakshagana of Karnataka.[374] India has a theatre training institute, the National School of Drama (NSD), situated in New Delhi; it is an autonomous organisation under the Ministry of Culture, Government of India.[375]
|
149 |
+
The Indian film industry produces the world's most-watched cinema.[376] Established regional cinematic traditions exist in the Assamese, Bengali, Bhojpuri, Hindi, Kannada, Malayalam, Punjabi, Gujarati, Marathi, Odia, Tamil, and Telugu languages.[377] The Hindi language film industry (Bollywood) is the largest sector representing 43% of box office revenue, followed by the South Indian Telugu and Tamil film industries which represent 36% combined.[378]
|
150 |
+
|
151 |
+
Television broadcasting began in India in 1959 as a state-run medium of communication and expanded slowly for more than two decades.[379][380] The state monopoly on television broadcast ended in the 1990s. Since then, satellite channels have increasingly shaped the popular culture of Indian society.[381] Today, television is the most penetrative medium in India; industry estimates indicate that as of 2012 there are over 554 million TV consumers, 462 million with satellite or cable connections, compared to other forms of mass media such as the press (350 million), radio (156 million) or internet (37 million).[382]
|
152 |
+
|
153 |
+
Traditional Indian society is sometimes defined by social hierarchy. The Indian caste system embodies much of the social stratification and many of the social restrictions found in the Indian subcontinent. Social classes are defined by thousands of endogamous hereditary groups, often termed jātis, or "castes".[383] India declared untouchability to be illegal[384] in 1947 and has since enacted other anti-discriminatory laws and social welfare initiatives. At the workplace in urban India, and in international or leading Indian companies, caste-related identification has largely lost its importance.[385][386]
|
154 |
+
|
155 |
+
Family values are important in the Indian tradition, and multi-generational patriarchal joint families have been the norm in India, though nuclear families are becoming common in urban areas.[387] An overwhelming majority of Indians, with their consent, have their marriages arranged by their parents or other family elders.[388] Marriage is thought to be for life,[388] and the divorce rate is extremely low,[389] with less than one in a thousand marriages ending in divorce.[390] Child marriages are common, especially in rural areas; many women wed before reaching 18, which is their legal marriageable age.[391] Female infanticide in India, and lately female foeticide, have created skewed gender ratios; the number of missing women in the country quadrupled from 15 million to 63 million in the 50-year period ending in 2014, faster than the population growth during the same period, and constituting 20 percent of India's female electorate.[392] According to an Indian government study, an additional 21 million girls are unwanted and do not receive adequate care.[393] Despite a government ban on sex-selective foeticide, the practice remains commonplace in India, the result of a preference for boys in a patriarchal society.[394] The payment of dowry, although illegal, remains widespread across class lines.[395] Deaths resulting from dowry, mostly from bride burning, are on the rise, despite stringent anti-dowry laws.[396]
|
156 |
+
|
157 |
+
Many Indian festivals are religious in origin. The best known include: Diwali, Ganesh Chaturthi, Thai Pongal, Holi, Durga Puja, Eid ul-Fitr, Bakr-Id, Christmas, and Vaisakhi.[397][398]
|
158 |
+
|
159 |
+
The most widely worn traditional dress in India, for both women and men, from ancient times until the advent of modern times, was draped.[399] For women it eventually took the form of a sari, a single long piece of cloth, famously six yards long, and of width spanning the lower body.[399] The sari is tied around the waist and knotted at one end, wrapped around the lower body, and then over the shoulder.[399] In its more modern form, it has been used to cover the head, and sometimes the face, as a veil.[399] It has been combined with an underskirt, or Indian petticoat, and tucked in the waistband for more secure fastening. It is also commonly worn with an Indian blouse, or choli, which serves as the primary upper-body garment, with the sari's end, passing over the shoulder, serving to obscure the upper body's contours and to cover the midriff.[399]
|
160 |
+
|
161 |
+
For men, a similar but shorter length of cloth, the dhoti, has served as a lower-body garment.[400] It too is tied around the waist and wrapped.[400] In south India, it is usually wrapped around the lower body, the upper end tucked in the waistband, the lower end left free. In northern India, it is also wrapped once around each leg before being brought up through the legs to be tucked in at the back. Other forms of traditional apparel that involve no stitching or tailoring are the chaddar (a shawl worn by both sexes to cover the upper body during colder weather, or a large veil worn by women for framing the head, or covering it) and the pagri (a turban or a scarf worn around the head as a part of a tradition, or to keep off the sun or the cold).[400]
|
162 |
+
|
163 |
+
Until the beginning of the first millennium CE, the ordinary dress of people in India was entirely unstitched.[401] The arrival of the Kushans from Central Asia, circa 48 CE, popularised cut and sewn garments in the Central Asian style favoured by the elite in northern India.[401] However, it was not until Muslim rule was established, first with the Delhi sultanate and then the Mughal Empire, that the range of stitched clothes in India grew and their use became significantly more widespread.[401] Among the various garments gradually establishing themselves in northern India during medieval and early-modern times and now commonly worn are the shalwars and pyjamas, both forms of trousers, as well as the tunics kurta and kameez.[401] In southern India, however, the traditional draped garments were to see much longer continuous use.[401]
|
164 |
+
|
165 |
+
Shalwars are atypically wide at the waist but narrow to a cuffed bottom. They are held up by a drawstring or elastic belt, which causes them to become pleated around the waist.[402] The pants can be wide and baggy, or they can be cut quite narrow, on the bias, in which case they are called churidars. The kameez is a long shirt or tunic.[403] The side seams are left open below the waist-line,[404] which gives the wearer greater freedom of movement. The kameez is usually cut straight and flat; older kameez use traditional cuts; modern kameez are more likely to have European-inspired set-in sleeves. The kameez may have a European-style collar, a Mandarin-collar, or it may be collarless; in the latter case, its design as a women's garment is similar to a kurta.[405] At first worn by Muslim women, the use of shalwar kameez gradually spread, making them a regional style,[406][407] especially in the Punjab region.[408]
|
166 |
+
[409]
|
167 |
+
|
168 |
+
A kurta, which traces its roots to Central Asian nomadic tunics, has evolved stylistically in India as a garment for everyday wear as well as for formal occasions.[401] It is traditionally made of cotton or silk; it is worn plain or with embroidered decoration, such as chikan; and it can be loose or tight in the torso, typically falling either just above or somewhere below the wearer's knees.[410] The sleeves of a traditional kurta fall to the wrist without narrowing, the ends hemmed but not cuffed; the kurta can be worn by both men and women; it is traditionally collarless, though standing collars are increasingly popular; and it can be worn over ordinary pyjamas, loose shalwars, churidars, or less traditionally over jeans.[410]
|
169 |
+
|
170 |
+
In the last 50 years, fashions have changed a great deal in India. Increasingly, in urban settings in northern India, the sari is no longer the apparel of everyday wear, transformed instead into one for formal occasions.[411] The traditional shalwar kameez is rarely worn by younger women, who favour churidars or jeans.[411] The kurtas worn by young men usually fall to the shins and are seldom plain. In white-collar office settings, ubiquitous air conditioning allows men to wear sports jackets year-round.[411] For weddings and formal occasions, men in the middle- and upper classes often wear bandgala, or short Nehru jackets, with pants, with the groom and his groomsmen sporting sherwanis and churidars.[411] The dhoti, the once universal garment of Hindu India, the wearing of which in the homespun and handwoven form of khadi allowed Gandhi to bring Indian nationalism to the millions,[412]
|
171 |
+
is seldom seen in the cities,[411] reduced now, with brocaded border, to the liturgical vestments of Hindu priests.
|
172 |
+
|
173 |
+
Indian cuisine consists of a wide variety of regional and traditional cuisines. Given the range of diversity in soil type, climate, culture, ethnic groups, and occupations, these cuisines vary substantially from each other, using locally available spices, herbs, vegetables, and fruit. Indian foodways have been influenced by religion, in particular Hindu cultural choices and traditions.[413] They have also been shaped by Islamic rule, particularly that of the Mughals, by the arrival of the Portuguese on India's southwestern shores, and by British rule. These three influences are reflected, respectively, in the dishes of pilaf and biryani; the vindaloo; and the tiffin and the Railway mutton curry.[414] Earlier, the Columbian exchange had brought the potato, the tomato, maize, peanuts, cashew nuts, pineapples, guavas, and most notably, chilli peppers, to India. Each became a staple.[415] In turn, the spice trade between India and Europe was a catalyst for Europe's Age of Discovery.[416]
|
174 |
+
|
175 |
+
The cereals grown in India, their choice, times, and regions of planting, correspond strongly to the timing of India's monsoons, and the variation across regions in their associated rainfall.[417] In general, the broad division of cereal zones in India, as determined by their dependence on rain, was firmly in place before the arrival of artificial irrigation.[417] Rice, which requires a lot of water, has been grown traditionally in regions of high rainfall in the northeast and the western coast, wheat in regions of moderate rainfall, like India's northern plains, and millet in regions of low rainfall, such as on the Deccan Plateau and in Rajasthan.[418][417]
|
176 |
+
|
177 |
+
The foundation of a typical Indian meal is a cereal cooked in plain fashion, and complemented with flavourful savoury dishes.[419] The latter includes lentils, pulses and vegetables spiced commonly with ginger and garlic, but also more discerningly with a combination of spices that may include coriander, cumin, turmeric, cinnamon, cardamon and others as informed by culinary conventions.[419] In an actual meal, this mental representation takes the form of a platter, or thali, with a central place for the cooked cereal, peripheral ones, often in small bowls, for the flavourful accompaniments, and the simultaneous, rather than piecemeal, ingestion of the two in each act of eating, whether by actual mixing—for example of rice and lentils—or in the folding of one—such as bread—around the other, such as cooked vegetables.[419]
|
178 |
+
|
179 |
+
A notable feature of Indian food is the existence of a number of distinctive vegetarian cuisines, each a feature of the geographical and cultural histories of its adherents.[420] The appearance of ahimsa, or the avoidance of violence toward all forms of life in many religious orders early in Indian history, especially Upanishadic Hinduism, Buddhism and Jainism, is thought to have been a notable factor in the prevalence of vegetarianism among a segment of India's Hindu population, especially in southern India, Gujarat, and the Hindi-speaking belt of north-central India, as well as among Jains.[420] Among these groups, strong discomfort is felt at thoughts of eating meat,[421] and contributes to the low proportional consumption of meat to overall diet in India.[421] Unlike China, which has increased its per capita meat consumption substantially in its years of increased economic growth, in India the strong dietary traditions have contributed to dairy, rather than meat, becoming the preferred form of animal protein consumption accompanying higher economic growth.[422]
|
180 |
+
|
181 |
+
In the last millennium, the most significant import of cooking techniques into India occurred during the Mughal Empire. The cultivation of rice had spread much earlier from India to Central and West Asia; however, it was during Mughal rule that dishes, such as the pilaf,[418] developed in the interim during the Abbasid caliphate,[423] and cooking techniques such as the marinating of meat in yogurt, spread into northern India from regions to its northwest.[424] To the simple yogurt marinade of Persia, onions, garlic, almonds, and spices began to be added in India.[424] Rice grown to the southwest of the Mughal capital, Agra, which had become famous in the Islamic world for its fine grain, was partially cooked and layered alternately with the sauteed meat, the pot sealed tightly, and slow cooked according to another Persian cooking technique, to produce what has today become the Indian biryani,[424] a feature of festive dining in many parts of India.[425]
|
182 |
+
In food served in restaurants in urban north India, and internationally, the diversity of Indian food has been partially concealed by the dominance of Punjabi cuisine. This was caused in large part by an entrepreneurial response among people from the Punjab region who had been displaced by the 1947 partition of India, and had arrived in India as refugees.[420] The identification of Indian cuisine with the tandoori chicken—cooked in the tandoor oven, which had traditionally been used for baking bread in the rural Punjab and the Delhi region, especially among Muslims, but which is originally from Central Asia—dates to this period.[420]
|
183 |
+
|
184 |
+
In India, several traditional indigenous sports remain fairly popular, such as kabaddi, kho kho, pehlwani and gilli-danda. Some of the earliest forms of Asian martial arts, such as kalarippayattu, musti yuddha, silambam, and marma adi, originated in India. Chess, commonly held to have originated in India as chaturaṅga, is regaining widespread popularity with the rise in the number of Indian grandmasters.[426][427] Pachisi, from which parcheesi derives, was played on a giant marble court by Akbar.[428]
|
185 |
+
|
186 |
+
The improved results garnered by the Indian Davis Cup team and other Indian tennis players in the early 2010s have made tennis increasingly popular in the country.[429] India has a comparatively strong presence in shooting sports, and has won several medals at the Olympics, the World Shooting Championships, and the Commonwealth Games.[430][431] Other sports in which Indians have succeeded internationally include badminton[432] (Saina Nehwal and P V Sindhu are two of the top-ranked female badminton players in the world), boxing,[433] and wrestling.[434] Football is popular in West Bengal, Goa, Tamil Nadu, Kerala, and the north-eastern states.[435]
|
187 |
+
|
188 |
+
Cricket is the most popular sport in India.[437] Major domestic competitions include the Indian Premier League, which is the most-watched cricket league in the world and ranks sixth among all sports leagues.[438]
|
189 |
+
|
190 |
+
India has hosted or co-hosted several international sporting events: the 1951 and 1982 Asian Games; the 1987, 1996, and 2011 Cricket World Cup tournaments; the 2003 Afro-Asian Games; the 2006 ICC Champions Trophy; the 2010 Hockey World Cup; the 2010 Commonwealth Games; and the 2017 FIFA U-17 World Cup. Major international sporting events held annually in India include the Chennai Open, the Mumbai Marathon, the Delhi Half Marathon, and the Indian Masters. The first Formula 1 Indian Grand Prix was held in late 2011, but the race has been absent from the F1 season calendar since 2014.[439] India has traditionally been the dominant country at the South Asian Games. An example of this dominance is the basketball competition, in which the Indian team has won three of the four tournaments to date.[440]
|
191 |
+
|
192 |
+
Overview
|
193 |
+
|
194 |
+
Etymology
|
195 |
+
|
196 |
+
History
|
197 |
+
|
198 |
+
Geography
|
199 |
+
|
200 |
+
Biodiversity
|
201 |
+
|
202 |
+
Politics
|
203 |
+
|
204 |
+
Foreign relations and military
|
205 |
+
|
206 |
+
Economy
|
207 |
+
|
208 |
+
Demographics
|
209 |
+
|
210 |
+
Culture
|
211 |
+
|
212 |
+
Government
|
213 |
+
|
214 |
+
General information
|
215 |
+
|
216 |
+
Coordinates: 21°N 78°E
|
en/2721.html.txt
ADDED
@@ -0,0 +1,252 @@
1 |
+
|
2 |
+
|
3 |
+
Coordinates: 40°N 86°W
|
4 |
+
|
5 |
+
Indiana (/ˌɪndiˈænə/) is a U.S. state in the Midwestern and Great Lakes regions of North America. It is the 38th-largest by area and the 17th-most populous of the 50 United States. Its capital and largest city is Indianapolis. Indiana was admitted to the United States as the 19th state on December 11, 1816. It borders Lake Michigan to the northwest, Michigan to the north, Ohio to the east, the Ohio River and Kentucky to the south and southeast, and the Wabash River and Illinois to the west.
|
6 |
+
|
7 |
+
Before becoming a territory, various indigenous peoples inhabited Indiana for thousands of years. Since its founding as a territory, settlement patterns in Indiana have reflected regional cultural segmentation present in the Eastern United States; the state's northernmost tier was settled primarily by people from New England and New York, Central Indiana by migrants from the Mid-Atlantic states and adjacent Ohio, and Southern Indiana by settlers from the Southern states, particularly Kentucky and Tennessee.[6]
|
8 |
+
|
9 |
+
Indiana has a diverse economy with a gross state product of $359.12 billion in 2017.[7] It has several metropolitan areas with populations greater than 100,000 and a number of smaller industrial cities and towns. Indiana is home to professional sports teams, including the NFL's Indianapolis Colts and the NBA's Indiana Pacers, and hosts several notable athletic events, including the Indianapolis 500.
|
10 |
+
|
11 |
+
Indiana's name means "Land of the Indians", or simply "Indian Land".[8] It also stems from Indiana's territorial history. On May 7, 1800, the United States Congress passed legislation to divide the Northwest Territory into two areas and named the western section the Indiana Territory. In 1816, when Congress passed an Enabling Act to begin the process of establishing statehood for Indiana, a part of this territorial land became the geographic area for the new state.[9][10][11]
|
12 |
+
|
13 |
+
A resident of Indiana is officially known as a Hoosier.[12] The etymology of this word is disputed, but the leading theory, advanced by the Indiana Historical Bureau and the Indiana Historical Society, has its origin in Virginia, the Carolinas, and Tennessee (the Upland South) as a term for a backwoodsman, a rough countryman, or a country bumpkin.[13][14]
|
14 |
+
|
15 |
+
The first inhabitants in what is now Indiana were the Paleo-Indians, who arrived about 8000 BCE after the melting of the glaciers at the end of the Ice Age. Divided into small groups, the Paleo-Indians were nomads who hunted large game such as mastodons. They created stone tools made out of chert by chipping, knapping and flaking.[15]
|
16 |
+
|
17 |
+
The Archaic period, which began between 5000 and 4000 BC, covered the next phase of indigenous culture. The people developed new tools as well as techniques to cook food, an important step in civilization. These new tools included different types of spear points and knives, with various forms of notches. They made ground-stone tools such as stone axes, woodworking tools and grinding stones. During the latter part of the period, they built earthwork mounds and middens, which showed settlements were becoming more permanent. The Archaic period ended at about 1500 BC, although some Archaic people lived until 700 BC.[15]
|
18 |
+
|
19 |
+
The Woodland period began around 1500 BC, when new cultural attributes appeared. The people created ceramics and pottery, and extended their cultivation of plants. An early Woodland period group named the Adena people had elegant burial rituals, featuring log tombs beneath earth mounds. In the middle of the Woodland period, the Hopewell people began to develop long-range trade of goods. Nearing the end of the stage, the people developed highly productive cultivation and adaptation of agriculture, growing such crops as corn and squash. The Woodland period ended around 1000 AD.[15]
|
20 |
+
|
21 |
+
The Mississippian culture emerged, lasting from 1000 AD until the 15th century, shortly before the arrival of Europeans. During this stage, the people created large urban settlements designed according to their cosmology, with large mounds and plazas defining ceremonial and public spaces. The concentrated settlements depended on the agricultural surpluses. One such complex was the Angel Mounds. They had large public areas such as plazas and platform mounds, where leaders lived or conducted rituals. Mississippian civilization collapsed in Indiana during the mid-15th century for reasons that remain unclear.[15]
|
22 |
+
|
23 |
+
The historic Native American tribes in the area at the time of European encounter spoke different languages of the Algonquian family. They included the Shawnee, Miami, and Illini. Refugee tribes from eastern regions, including the Delaware who settled in the White and Whitewater River Valleys, later joined them.
|
24 |
+
|
25 |
+
In 1679, French explorer René-Robert Cavelier, Sieur de La Salle was the first European to cross into Indiana after reaching present-day South Bend at the Saint Joseph River.[16] He returned the following year to learn about the region. French-Canadian fur traders soon arrived, bringing blankets, jewelry, tools, whiskey and weapons to trade for skins with the Native Americans.
|
26 |
+
|
27 |
+
By 1702, Sieur Juchereau established the first trading post near Vincennes. In 1715, Sieur de Vincennes built Fort Miami at Kekionga, now Fort Wayne. In 1717, another Canadian, Picote de Beletre, built Fort Ouiatenon on the Wabash River, to try to control Native American trade routes from Lake Erie to the Mississippi River.
|
28 |
+
|
29 |
+
In 1732, Sieur de Vincennes built a second fur trading post at Vincennes. French Canadian settlers, who had left the earlier post because of hostilities, returned in larger numbers. In a period of a few years, British colonists arrived from the East and contended against the Canadians for control of the lucrative fur trade. Fighting between the French and British colonists occurred throughout the 1750s as a result.
|
30 |
+
|
31 |
+
The Native American tribes of Indiana sided with the French Canadians during the French and Indian War (also known as the Seven Years' War). With British victory in 1763, the French were forced to cede to the British crown all their lands in North America east of the Mississippi River and north and west of the colonies.
The tribes in Indiana did not give up: they captured Fort Ouiatenon and Fort Miami during Pontiac's Rebellion. The British royal proclamation of 1763 designated the land west of the Appalachians for Native American use, and excluded British colonists from the area, which the Crown called "Indian Territory".
In 1775, the American Revolutionary War began as the colonists sought self-government and independence from the British. The majority of the fighting took place near the East Coast, but the Patriot military officer George Rogers Clark called for an army to help fight the British in the west.[17] Clark's army won significant battles and took over Vincennes and Fort Sackville on February 25, 1779.[18]
During the war, Clark managed to cut off British troops, who were attacking the eastern colonists from the west. His success is often credited for changing the course of the American Revolutionary War.[19] At the end of the war, through the Treaty of Paris, the British crown ceded their claims to the land south of the Great Lakes to the newly formed United States, including Native American lands.
In 1787, the US defined the Northwest Territory which included the area of present-day Indiana. In 1800, Congress separated Ohio from the Northwest Territory, designating the rest of the land as the Indiana Territory.[20] President Thomas Jefferson chose William Henry Harrison as the governor of the territory, and Vincennes was established as the capital.[21] After the Michigan Territory was separated and the Illinois Territory was formed, Indiana was reduced to its current size and geography.[20]
Starting with the Battle of Fallen Timbers in 1794 and the Treaty of Greenville in 1795, Native American titles to Indiana lands were extinguished by usurpation, purchase, or war and treaty. About half the state was acquired in the Treaty of St. Mary's from the Miami in 1818. Purchases were not completed until the Treaty of Mississinewa in 1826 acquired the last of the reserved Native American lands in the northeast.
A portrait of the Indiana frontier about 1810: The frontier was defined by the Treaty of Fort Wayne in 1809, adding much of the southwestern lands around Vincennes and southeastern lands adjacent to Cincinnati, to areas along the Ohio River as part of U.S. territory. Settlements were military outposts such as Fort Ouiatenon in the northwest and Fort Miami (later Fort Wayne) in the northeast, Fort Knox and Vincennes settlement on the lower Wabash. Other settlements included Clarksville (across from Louisville), Vevay, and Corydon along the Ohio River, the Quaker Colony in Richmond on the eastern border, and Conner's Post (later Connersville) on the east central frontier. Indianapolis would not be populated for 15 more years, and central and northern Indiana Territory remained wilderness populated primarily by Indigenous communities. Only two counties in the extreme southeast, Clark and Dearborn, had been organized by European settlers. Land titles issued out of Cincinnati were sparse. Settler migration was chiefly via flatboat on the Ohio River westerly, and by wagon trails up the Wabash/White River Valleys (west) and Whitewater River Valleys (east).
In 1810, the Shawnee tribal chief Tecumseh and his brother Tenskwatawa encouraged other indigenous tribes in the territory to resist European settlement. Tensions rose and the US authorized Harrison to launch a preemptive expedition against Tecumseh's Confederacy; the US gained victory at the Battle of Tippecanoe on November 7, 1811. Tecumseh was killed in 1813 during the Battle of the Thames. After his death, armed resistance to United States control ended in the region. Most Native American tribes in the state were later removed to west of the Mississippi River in the 1820s and 1830s after US negotiations and the purchase of their lands.[22]
Corydon, a town in the far southern part of Indiana, was named the second capital of the Indiana Territory in May 1813 in order to decrease the threat of Native American raids following the Battle of Tippecanoe.[20] Two years later, a petition for statehood was approved by the territorial general assembly and sent to Congress. An Enabling Act was passed to provide for an election of delegates to write a constitution for Indiana. On June 10, 1816, delegates assembled at Corydon to write the constitution, which was completed in 19 days. Jonathan Jennings was elected the fledgling state's first governor in August 1816. President James Madison approved Indiana's admission into the union as the nineteenth state on December 11, 1816.[18] In 1825, the state capital was moved from Corydon to Indianapolis.[20]
Many European immigrants went west to settle in Indiana in the early 19th century. The largest immigrant group to settle in Indiana was the Germans, though many immigrants also arrived from Ireland and England. Americans who were primarily ethnically English migrated from the Northern Tier of New York and New England, as well as from the mid-Atlantic state of Pennsylvania.[24][25] The arrival of steamboats on the Ohio River in 1811, and the National Road at Richmond in 1829, greatly facilitated settlement of northern and western Indiana.
Following statehood, the new government worked to transform Indiana from a frontier into a developed, well-populated, and thriving state, beginning significant demographic and economic changes. In 1836, the state's founders initiated a program, the Indiana Mammoth Internal Improvement Act, that led to the construction of roads, canals, railroads and state-funded public schools. The plans bankrupted the state and were a financial disaster, but increased land and produce value more than fourfold.[26] In response to the crisis and in order to avert another, in 1851, a second constitution was adopted. Among its provisions were a prohibition on public debt, as well as the extension of suffrage to African-Americans.
During the American Civil War, Indiana became politically influential and played an important role in the affairs of the nation. Indiana was the first western state to mobilize for the United States in the war, and soldiers from Indiana participated in all the war's major engagements. The state provided 126 infantry regiments, 26 batteries of artillery and 13 regiments of cavalry to the Union.[27]
In 1861, Indiana was assigned a quota of 7,500 men to join the Union Army.[28] So many volunteered in the first call that thousands had to be turned away. Before the war ended, Indiana had contributed 208,367 men. Casualties were over 35% among these men: 24,416 lost their lives and over 50,000 more were wounded.[29] The only Civil War conflicts fought in Indiana were the Newburgh Raid, a bloodless capture of the city; and the Battle of Corydon, which occurred during Morgan's Raid leaving 15 dead, 40 wounded, and 355 captured.[30]
After the war, Indiana remained a largely agricultural state. Post-war industries included mining, notably limestone extraction; meatpacking; food processing, such as milling grain and distilling it into alcohol; and the building of wagons, buggies, farm machinery, and hardware.[31] However, the discovery of natural gas in the 1880s in northern Indiana led to an economic boom: the abundant and cheap fuel attracted heavy industry; the availability of jobs in turn attracted new settlers from other parts of the country as well as from Europe.[32] This led to the rapid expansion of cities such as South Bend, Gary, Indianapolis, and Fort Wayne.[31]
With the onset of the industrial revolution, Indiana industry began to grow at an accelerated rate across the northern part of the state. With industrialization, workers organized labor unions and the women's suffrage movement gained momentum.[32] In the early 20th century, Indiana developed into a strong manufacturing state with ties to the new auto industry.[24] Haynes-Apperson, the nation's first commercially successful auto company, operated in Kokomo until 1925. The construction of the Indianapolis Motor Speedway and the growth of auto-related industries accompanied this boom.[33]
During the 1930s, Indiana, like the rest of the nation, was affected by the Great Depression. The economic downturn had a wide-ranging negative impact on Indiana, such as the decline of urbanization. The Dust Bowl further to the west led many migrants to flee to the more industrialized Midwest. Governor Paul V. McNutt's administration struggled to build a state-funded welfare system to help overwhelmed private charities. During his administration, spending and taxes were both cut drastically in response to the Depression, and the state government was completely reorganized. McNutt ended Prohibition in the state and enacted the state's first income tax. On several occasions, he declared martial law to put an end to worker strikes.[34] World War II helped lift the economy in Indiana, as the war required steel, food and other goods that were produced in the state.[35] Roughly 10 percent of Indiana's population joined the armed forces, while hundreds of industries earned war production contracts and began making war material.[36] Indiana manufactured 4.5 percent of total United States military armaments produced during World War II, ranking eighth among the 48 states.[37] The expansion of industry to meet war demands helped end the Great Depression.[35]
With the conclusion of World War II, Indiana rebounded to pre-Depression levels of production. Industry became the primary employer, a trend that continued into the 1960s. Urbanization during the 1950s and 1960s led to substantial growth in the state's cities. The auto, steel and pharmaceutical industries topped Indiana's major businesses. Indiana's population continued to grow after the war, exceeding five million by the 1970 census.[38] In the 1960s the administration of Matthew E. Welsh adopted the state's first sales tax, of two percent.[39] Indiana schools were desegregated in 1949. In 1950, the Census Bureau reported Indiana's population as 95.5% white and 4.4% black.[40] Governor Welsh also worked with the General Assembly to pass the Indiana Civil Rights Bill, granting equal protection to minorities in seeking employment.[41]
On December 8, 1964, a B-58 carrying nuclear weapons slid off an icy runway and caught fire during a training drill. The five nuclear weapons on board were burned, causing radioactive contamination of the crash area.[42]
Beginning in 1970, a series of amendments to the state constitution was proposed. With their adoption, the Indiana Court of Appeals was created and the procedure for appointing justices to the courts was adjusted.[43]
The 1973 oil crisis created a recession that hurt the automotive industry in Indiana. Companies such as Delco Electronics and Delphi began a long series of downsizing that contributed to high unemployment rates in manufacturing in Anderson, Muncie, and Kokomo. The restructuring and deindustrialization trend continued until the 1980s, when the national and state economy began to diversify and recover.[44]
With a total area (land and water) of 36,418 square miles (94,320 km2), Indiana ranks as the 38th largest state in size.[45] The state has a maximum dimension north to south of 250 miles (400 km) and a maximum east to west dimension of 145 miles (233 km).[46] The state's geographic center (39° 53.7'N, 86° 16.0'W) is in Marion County.[47]
Located in the Midwestern United States, Indiana is one of eight states that make up the Great Lakes Region.[48] Indiana is bordered on the north by Michigan, on the east by Ohio, and on the west by Illinois, partially separated by the Wabash River.[49] Lake Michigan borders Indiana on the northwest and the Ohio River separates Indiana from Kentucky on the south.[47][50]
The average altitude of Indiana is about 760 feet (230 m) above sea level.[51] The highest point in the state is Hoosier Hill in Wayne County at 1,257 feet (383 m) above sea level.[45][52] The lowest point at 320 feet (98 m) above sea level is in Posey County, where the Wabash River meets the Ohio River.[45][47] Only 2,850 square miles (7,400 km2) have an altitude greater than 1,000 feet (300 m) and this area is enclosed within 14 counties. About 4,700 square miles (12,000 km2) have an elevation of less than 500 feet (150 m), mostly concentrated along the Ohio and lower Wabash Valleys, from Tell City and Terre Haute to Evansville and Mount Vernon.[53]
The state includes two natural regions of the United States: the Central Lowlands and the Interior Low Plateaus.[54] The till plains make up the northern and central regions of Indiana. Much of its appearance is a result of elements left behind by glaciers. Central Indiana is mainly flat with some low rolling hills (except where rivers cut deep valleys through the plain, like at the Wabash River and Sugar Creek) and soil composed of glacial sands, gravel and clay, which results in exceptional farmland.[49] Northern Indiana is similar, except for the presence of higher and hillier terminal moraines and hundreds of kettle lakes.
In northwest Indiana there are various sand ridges and dunes, some reaching nearly 200 feet in height. These are along the Lake Michigan shoreline and also inland to the Kankakee Outwash Plain. Southern Indiana is characterized by valleys and rugged, hilly terrain, contrasting with much of the state. Here, bedrock is exposed at the surface and is not buried in glacial till as it is farther north. Because of the prevalent Indiana limestone, the area has many caves, caverns, and quarries.
Major river systems in Indiana include the Whitewater, White, Blue, Wabash, St. Joseph, and Maumee rivers.[55] According to the Indiana Department of Natural Resources, there were 65 rivers, streams, and creeks of environmental interest or scenic beauty, which included only a portion of an estimated 24,000 total river miles within the state.[56]
The Wabash River, which is the longest free-flowing river east of the Mississippi River, is the official river of Indiana.[57][58] At 475 miles (764 kilometers) in length, the river bisects the state from northeast to southwest, forming part of the state's border with Illinois, before converging with the Ohio River. The river has been the subject of several songs, such as On the Banks of the Wabash, The Wabash Cannonball and Back Home Again, In Indiana.[59][60]
There are about 900 lakes listed by the Indiana Department of Natural Resources.[61] To the northwest, Indiana borders Lake Michigan, one of the five lakes that make up the Great Lakes, the largest group of freshwater lakes in the world. Tippecanoe Lake, the deepest lake in the state, reaches depths of nearly 120 feet (37 m), while Lake Wawasee is the largest natural lake in Indiana.[62] At 10,750 acres (summer pool level), the Lake Monroe reservoir is the largest lake in Indiana.
In the past, almost all of Indiana was classified as having a humid continental climate, with cold winters and hot, wet summers,[63] with only the extreme southern portion of the state lying within the humid subtropical climate, which receives more precipitation than other parts of Indiana.[49] However, as of the 2016 update, about half of the state is now classified as humid subtropical. Temperatures generally diverge from the north and south sections of the state. In the middle of the winter, average high/low temperatures range from around 30 °F/15 °F (−1 °C/−10 °C) in the far north to 41 °F/24 °F (5 °C/−4 °C) in the far south.[64]
In the middle of summer there is generally a little less variation across the state, as average high/low temperatures range from around 84 °F/64 °F (29 °C/18 °C) in the far north to 90 °F/69 °F (32 °C/21 °C) in the far south.[64] The record high temperature for the state was 116 °F (47 °C) set on July 14, 1936 at Collegeville. The record low was −36 °F (−38 °C) on January 19, 1994 at New Whiteland. The growing season typically spans from 155 days in the north to 185 days in the south.[citation needed]
While droughts occasionally occur in the state, rainfall totals are distributed relatively equally throughout the year. Precipitation totals range from 35 inches (89 cm) near Lake Michigan in northwest Indiana to 45 inches (110 cm) along the Ohio River in the south, while the state's average is 40 inches (100 cm). Annual snowfall in Indiana varies widely across the state, ranging from 80 inches (200 cm) in the northwest along Lake Michigan to 14 inches (36 cm) in the far south. Lake effect snow accounts for roughly half of the snowfall in northwest and north central Indiana due to the effects of the moisture and relative warmth of Lake Michigan upwind. The mean wind speed is 8 miles per hour (13 km/h).[65]
In a 2012 report, Indiana was ranked eighth in a list of the top 20 tornado-prone states based on National Weather Service data from 1950 through 2011.[66] A 2011 report ranked South Bend 15th among the top 20 tornado-prone cities in the United States,[67] while another report from 2011 ranked Indianapolis eighth.[68][69][70] Despite its vulnerability, Indiana is not a part of tornado alley.[71]
Indiana is one of thirteen U.S. states that are divided into more than one time zone. Indiana's time zones have fluctuated over the past century. At present most of the state observes Eastern Time; six counties near Chicago and six near Evansville observe Central Time. Debate continues on the matter.
Before 2006, most of Indiana did not observe daylight saving time (DST). Some counties within this area, particularly Floyd, Clark, and Harrison counties near Louisville, Kentucky, and Ohio and Dearborn counties near Cincinnati, Ohio, unofficially observed DST by local custom. Since April 2006 the entire state observes DST.
Indiana is divided into 92 counties. As of 2010[update], the state includes 16 metropolitan and 25 micropolitan statistical areas, 117 incorporated cities, 450 towns, and several other smaller divisions and statistical areas.[74][75] Marion County and Indianapolis have a consolidated city-county government.[74]
Indianapolis is the capital of Indiana and its largest city.[74][76] Indiana's four largest metropolitan areas are Indianapolis, Fort Wayne, Evansville, and South Bend.[77] The table below lists the state's twenty largest municipalities based on the 2019 United States Census Estimate.[78]
The United States Census Bureau estimates Indiana's population was 6,732,219 on July 1, 2019, a 3.83% increase since the 2010 United States Census.[3]
The state's population density was 181.0 persons per square mile, the 16th-highest in the United States.[74] As of the 2010 U.S. Census, Indiana's population center is northwest of Sheridan, in Hamilton County (+40.149246, −086.259514).[74][81][82]
In 2005, 77.7% of Indiana residents lived in metropolitan counties, 16.5% lived in micropolitan counties and 5.9% lived in non-core counties.[83]
The racial makeup of the state (based on the 2019 population estimate) was:
Hispanics or Latinos of any race made up 7.3% of the population.[84] The Hispanic population is Indiana's fastest-growing ethnic minority.[85] 28.2% of Indiana's children under the age of 1 belonged to minority groups (note: children born to white Hispanics are counted as part of a minority group).[86]
German is the largest ancestry reported in Indiana, with 22.7% of the population reporting that ancestry in the Census. Persons citing American (12.0%) and English ancestry (8.9%) are also numerous, as are Irish (10.8%) and Polish (3.0%).[90] Most of those citing American ancestry are actually of English descent, but have family that has been in North America for so long, in many cases since the early colonial era, that they identify simply as American.[91][92][93][94] In the 1980 census 1,776,144 people claimed German ancestry, 1,356,135 claimed English ancestry and 1,017,944 claimed Irish ancestry out of a total population of 4,241,975 making the state 42% German, 32% English and 24% Irish.[95]
Population growth since 1990 has been concentrated in the counties surrounding Indianapolis, with four of the five fastest-growing counties in that area: Hamilton, Hendricks, Johnson, and Hancock. The other county is Dearborn County, which is near Cincinnati, Ohio. Hamilton County has also grown faster than any county in the states bordering Indiana (Illinois, Michigan, Ohio and Kentucky), and is the 20th-fastest growing county in the country.[96]
With a population of 829,817, Indianapolis is the largest city in Indiana and the 12th-largest in the United States, according to the 2010 Census. Three other cities in Indiana have a population greater than 100,000: Fort Wayne (253,617), Evansville (117,429) and South Bend (101,168).[97] Since 2000, Fishers has seen the largest population rise amongst the state's 20 largest cities with an increase of 100 percent.[98]
Gary and Hammond have seen the largest population declines among the state's 20 largest cities since 2000, with decreases of 21.0 and 6.8 percent respectively.[98] Other cities that have seen extensive growth since 2000 are Noblesville (39.4 percent), Greenwood (81 percent), Carmel (21.4 percent) and Lawrence (9.3 percent). Meanwhile, Evansville (−4.2 percent), Anderson (−4 percent) and Muncie (−3.9 percent) are cities that have seen the steepest decline in population in the state.[99]
Indianapolis has the largest population of the state's metropolitan areas and the 33rd-largest in the country.[100] The Indianapolis metropolitan area encompasses Marion County and nine surrounding counties in central Indiana.
Note: Births in table don't add up, because Hispanics are counted both by their ethnicity and by their race, giving a higher overall number.
Based on population estimates for 2011, 6.6% of the state's population is under the age of five, 24.5% is under the age of 18, and 13.2% is 65 years of age or older.[84] From the 2010 U.S. Census demographic data for Indiana, the median age is 37.[107]
As of the 2010 census, Indiana's median household income was $44,616, ranking it 36th among the United States and the District of Columbia.[109] In 2005, the median household income for Indiana residents was $43,993. Nearly 498,700 Indiana households had incomes between $50,000 and $75,000, accounting for 20% of all households.[110]
Hamilton County's median household income is nearly $35,000 higher than the Indiana average. At $78,932, it ranks seventh in the country among counties with fewer than 250,000 people. The next highest median incomes in Indiana are also found in the Indianapolis suburbs; Hendricks County has a median of $57,538, followed by Johnson County at $56,251.[110]
Although the largest single religious denomination in the state is Catholic (747,706 members), most of the population are members of various Protestant denominations. The largest Protestant denomination by number of adherents in 2010 was the United Methodist Church with 355,043.[112] A study by the Graduate Center at the City University of New York found 20 percent are Roman Catholic, 14 percent belong to different Baptist churches, 10 percent are other Christians, nine percent are Methodist, and six percent are Lutheran. The study found 16 percent of Indiana is affiliated with no religion.[113]
Indiana is home to the Benedictine St. Meinrad Archabbey, one of two Catholic archabbeys in the United States and one of 11 in the world. The Lutheran Church–Missouri Synod has one of its two seminaries in Fort Wayne. Two conservative denominations, the Free Methodist Church and the Wesleyan Church, have their headquarters in Indianapolis as does the Christian Church.[114][115]
The Fellowship of Grace Brethren Churches maintains offices and publishing work in Winona Lake.[116] Huntington serves as the home to the Church of the United Brethren in Christ.[117] Anderson is home to the headquarters of the Church of God.[118] The headquarters of the Missionary Church is in Fort Wayne.[119]
The Friends United Meeting of the Religious Society of Friends, the largest branch of American Quakerism, is based in Richmond,[120] which also houses the oldest Quaker seminary in the United States, the Earlham School of Religion.[121] The Islamic Society of North America is headquartered in Plainfield.[122]
Indiana has a constitutional democratic republican form of government with three branches: the executive, including an elected governor and lieutenant governor; the legislative, consisting of an elected bicameral General Assembly; and the judicial, the Supreme Court of Indiana, the Indiana Court of Appeals and circuit courts.
The Governor of Indiana serves as the state's chief executive and has the authority to manage the government as established in the Constitution of Indiana. The governor and the lieutenant governor are jointly elected to four-year terms, with gubernatorial elections running concurrent with United States presidential elections (1996, 2000, 2004, 2008, etc.).[123] The governor may not serve more than two consecutive terms.[123] The governor works with the Indiana General Assembly and the Indiana Supreme Court to govern the state and has the authority to adjust the other branches. The governor can call special sessions of the General Assembly and select and remove leaders of nearly all state departments, boards and commissions. Other notable powers include calling out the Indiana Guard Reserve or the Indiana National Guard in times of emergency or disaster, issuing pardons or commuting the sentence of any criminal offenders except in cases of treason or impeachment and possessing an abundant amount of statutory authority.[123][124][125]
The lieutenant governor serves as the President of the Senate and ensures that senators abide by the Senate's rules. The lieutenant governor votes only when needed to break ties. If the governor dies in office, becomes permanently incapacitated, resigns or is impeached, the lieutenant governor becomes governor. If both the governor and lieutenant governor positions are unoccupied, the Senate President pro tempore becomes governor.[126]
The Indiana General Assembly is composed of a 50-member Senate and 100-member House of Representatives. The Senate is the upper house of the General Assembly and the House of Representatives is the lower house.[123] The General Assembly has exclusive legislative authority within the state government. Both the Senate and the House can introduce legislation, with the exception that the Senate is not authorized to initiate legislation that will affect revenue. Bills are debated and passed separately in each house, but both houses must pass them before they can be submitted to the Governor.[127] The legislature can nullify a veto from the governor with a majority vote of full membership in the Senate and House of Representatives.[123] Each law passed by the General Assembly must apply uniformly to the entire state. The General Assembly has no authority to create legislation that targets a particular community.[127][128] The General Assembly can manage the state's judiciary system by arranging the size of the courts and the bounds of their districts. It also can oversee the activities of the executive branch of the state government, has restricted power to regulate the county governments within the state, and has exclusive power to initiate the method to alter the Indiana Constitution.[127][129]
The Indiana Supreme Court is made up of five judges with a Court of Appeals composed of 15 judges. The governor selects judges for the supreme and appeals courts from a group of applicants chosen by a special commission. After serving for two years, the judges must acquire the support of the electorate to serve for a 10-year term.[123] In nearly all cases, the Supreme Court does not have original jurisdiction and can only hear cases appealed to it after being heard in lower courts. Most cases begin in local circuit courts, where trials are held and juries decide the outcome. The Supreme Court has original and sole jurisdiction in certain areas including the practice of law, discipline or disbarment of judges appointed to the lower state courts, and supervision over the exercise of jurisdiction by the other lower courts of the state.[130][131]
The state is divided into 92 counties, which are led by a board of county commissioners. 90 counties in Indiana have their own circuit court with a judge elected for a six-year term. The remaining two counties, Dearborn and Ohio, are combined into one circuit. Many counties operate superior courts in addition to the circuit court. In densely populated counties where the caseload is traditionally greater, separate courts have been established to solely hear either juvenile, criminal, probate or small claims cases. The establishment, frequency and jurisdiction of these additional courts varies greatly from county to county. There are 85 city and town courts in Indiana municipalities, created by local ordinance, typically handling minor offenses and not considered courts of record. County officials elected to four-year terms include an auditor, recorder, treasurer, sheriff, coroner and clerk of the circuit court. All incorporated cities in Indiana have a mayor and council form of municipal government. Towns are governed by a town council and townships are governed by a township trustee and advisory board.[123][132]
U.S. News & World Report ranked Indiana first in the publication's inaugural 2017 Best States for Government listing. Among individual categories, Indiana ranked above average in budget transparency (#1), government digitization (#6), and fiscal stability (#8), and ranked average in state integrity (#25).[133]
From 1880 to 1924, a resident of Indiana was included in all but one presidential election. Indiana Representative William Hayden English was nominated for Vice President and ran with Winfield Scott Hancock in the 1880 election.[134] Former Indiana Governor Thomas A. Hendricks was elected Vice President in 1884. He served until his death on November 25, 1885, under President Grover Cleveland.[135] In 1888, former Senator from Indiana Benjamin Harrison was elected President and served one term. He remains the only President from Indiana. Indiana Senator Charles W. Fairbanks was elected Vice President in 1904, serving under President Theodore Roosevelt until 1909.[136] Fairbanks made another run for Vice President with Charles Evans Hughes in 1916, but they both lost to Woodrow Wilson and former Indiana Governor Thomas R. Marshall, who served as Vice President from 1913 until 1921.[137] Not until 1988 did another presidential election involve a native of Indiana, when Senator Dan Quayle was elected Vice President and served one term with George H. W. Bush.[49] Governor Mike Pence was elected Vice President in 2016, to serve with Donald Trump.
Indiana has long been considered a Republican stronghold,[138][139] particularly in Presidential races. The Cook Partisan Voting Index (CPVI) now rates Indiana as R+9. Indiana was one of only ten states to support Republican Wendell Willkie in 1940.[49] On 14 occasions the Republican candidate has defeated the Democrat by a double-digit margin in the state, including six times where a Republican won the state by more than twenty percentage points.[140] In 2000 and 2004 George W. Bush won the state by a wide margin while the election was much closer overall. The state has supported a Democrat for president only five times since 1900. In 1912, Woodrow Wilson became the first Democrat to win the state in the twentieth century, with 43% of the vote. Twenty years later, Franklin D. Roosevelt won the state with 55% of the vote over incumbent Republican Herbert Hoover. Roosevelt won the state again in 1936. In 1964, 56% of voters supported Democrat Lyndon B. Johnson over Republican Barry Goldwater. Forty-four years later, Democrat Barack Obama narrowly won the state against John McCain 50% to 49%.[141] In the following election, Republican Mitt Romney won back the state for the Republican Party with 54% of the vote over the incumbent President Obama who won 43%.[142]
While only five Democratic presidential nominees have carried Indiana since 1900, 11 Democrats were elected governor during that time. Before Mitch Daniels became governor in 2005, Democrats had held the office for 16 consecutive years. Indiana elects two senators and nine representatives to Congress. The state has 11 electoral votes in presidential elections.[140] Seven of the districts favor the Republican Party according to the CPVI rankings; there are seven Republicans serving as representatives and two Democrats. Historically, Republicans have been strongest in the eastern and central portions of the state, while Democrats have been strongest in the northwestern part of the state. Occasionally, certain counties in the southern part of the state will vote Democratic. Marion County, Indiana's most populous county, supported the Republican candidates from 1968 to 2000, before backing the Democrats in the 2004, 2008, 2012, and 2016 elections. Indiana's second-most populous county, Lake County, strongly supports the Democratic party and has not voted for a Republican since 1972.[140] In 2005, the Bay Area Center for Voting Research rated the most liberal and conservative cities in the United States on voting statistics in the 2004 presidential election, based on 237 cities with populations of more than 100,000. Five Indiana cities were mentioned in the study. On the liberal side, Gary was ranked second and South Bend came in at 83. Among conservative cities, Fort Wayne was 44th, Evansville was 60th and Indianapolis was 82nd on the list.[143]
Indiana is home to several current and former military installations. The largest of these is the Naval Surface Warfare Center Crane Division, approximately 25 miles southwest of Bloomington, which is the third largest naval installation in the world, comprising approximately 108 square miles of territory.
Other active installations include Air National Guard fighter units at the Fort Wayne and Terre Haute airports (to be consolidated at Fort Wayne under the 2005 BRAC proposal, with the Terre Haute facility remaining open as a non-flying installation). The Army National Guard conducts operations at Camp Atterbury in Edinburgh, Indiana, helicopter operations out of Shelbyville Airport, and urban training at Muscatatuck Urban Training Center. The Army's Newport Chemical Depot has closed and is being converted into a coal-purification plant.
Indiana was formerly home to two major military installations: Grissom Air Force Base near Peru (realigned to an Air Force Reserve installation in 1994) and Fort Benjamin Harrison near Indianapolis, now closed, though the Department of Defense continues to operate a large finance center there (the Defense Finance and Accounting Service).
Indiana has an extensive history with auto racing. Indianapolis hosts the Indianapolis 500 mile race over Memorial Day weekend at the Indianapolis Motor Speedway every May. The name of the race is usually shortened to "Indy 500" and it also goes by the nickname "The Greatest Spectacle in Racing". The race attracts more than 250,000 people every year, making it the largest single-day sporting event in the world. The track also hosts the Brickyard 400 (NASCAR) and the Red Bull Indianapolis Grand Prix. From 2000 to 2007, it hosted the United States Grand Prix (Formula One). Indiana features the world's largest and most prestigious drag race, the NHRA Mac Tools U.S. Nationals, held each Labor Day weekend at Lucas Oil Raceway at Indianapolis in Clermont, Indiana. Indiana also hosts a major race on the H1 Unlimited hydroplane powerboat racing circuit, the Madison Regatta, in Madison, Indiana.
As of 2013[update] Indiana has produced more National Basketball Association (NBA) players per capita than any other state. Muncie has produced the most per capita of any American city, with two other Indiana cities in the top ten.[144] It has a rich basketball heritage that reaches back to the sport's formative years. The NBA's Indiana Pacers play their home games at Bankers Life Fieldhouse; they began play in 1967 in the American Basketball Association (ABA) and joined the NBA when the leagues merged in 1976. Although James Naismith developed basketball in Springfield, Massachusetts in 1891, high school basketball was born in Indiana. In 1925, Naismith visited an Indiana basketball state finals game along with 15,000 screaming fans and later wrote "Basketball really had its origin in Indiana, which remains the center of the sport." The 1986 film Hoosiers is inspired by the story of the 1954 Indiana state champions Milan High School. Professional basketball player Larry Bird was born in West Baden Springs and was raised in French Lick. He went on to lead the Boston Celtics to the NBA championship in 1981, 1984, and 1986.[145]
Indianapolis is home to the Indianapolis Colts. The Colts are members of the South Division of the American Football Conference. The Colts have roots back to 1913 as the Dayton Triangles. They became an official team after moving to Baltimore, MD, in 1953. In 1984, the Colts relocated to Indianapolis, leading to an eventual rivalry with the Baltimore Ravens. After calling the RCA Dome home for 25 years, the Colts play their home games at Lucas Oil Stadium in Indianapolis. While in Baltimore, the Colts won the 1970 Super Bowl. In Indianapolis, the Colts won Super Bowl XLI, bringing the franchise total to two. In recent years the Colts have regularly competed in the NFL playoffs.
Indiana was home to two charter member teams of the National Football League, the Hammond Pros and the Muncie Flyers. Another early NFL franchise, the Evansville Crimson Giants, spent two seasons in the league before folding.
The following table shows the professional sports teams in Indiana. Teams in italic are in major professional leagues.
The following is a table of sports venues in Indiana that have a capacity in excess of 30,000:
Indiana has had great sports success at the collegiate level.
In men's basketball, the Indiana Hoosiers have won five NCAA national championships and 22 Big Ten Conference championships. The Purdue Boilermakers were selected as the national champions in 1932 before the creation of the tournament, and have won 23 Big Ten championships. The Boilermakers along with the Notre Dame Fighting Irish have both won a national championship in women's basketball.
In college football, the Notre Dame Fighting Irish have won 11 consensus national championships, as well as the Rose Bowl Game, Cotton Bowl Classic, Orange Bowl and Sugar Bowl. Meanwhile, the Purdue Boilermakers have won 10 Big Ten championships and have won the Rose Bowl and Peach Bowl.
Schools fielding NCAA Division I athletic programs include:
In 2017, Indiana had a civilian labor force of nearly 3.4 million, the 15th largest in the U.S. Indiana has an unemployment rate of 3.4 percent, lower than the national average.[147] The total gross state product in 2016 was $347.2 billion.[148] A high percentage of Indiana's income is from manufacturing.[149] According to the Bureau of Labor Statistics, nearly 17 percent of the state's non-farm workforce is employed in manufacturing, the highest of any state in the U.S.[150] The state's five leading exports were motor vehicles and auto parts, pharmaceutical products, industrial machinery, optical and medical equipment, and electric machinery.[151]
Despite its reliance on manufacturing, Indiana has been less affected by declines in traditional Rust Belt manufactures than many of its neighbors. The explanation appears to be certain factors in the labor market. First, much of the heavy manufacturing, such as industrial machinery and steel, requires highly skilled labor, and firms are often willing to locate where hard-to-train skills already exist. Second, Indiana's labor force is primarily in medium-sized and smaller cities rather than in very large and expensive metropolises. This makes it possible for firms to offer somewhat lower wages for these skills than would normally be paid. Firms often see in Indiana a chance to obtain higher than average skills at lower than average wages.[152]
In 2016, Indiana was home to seven Fortune 500 companies with a combined $142.5 billion in revenue.[153] Columbus-based Cummins, Inc. and Indianapolis-based Eli Lilly and Company and Simon Property Group were recognized in Fortune publication's "2017 World's Most Admired Companies List", ranking in each of their respective industries.[154]
Northwest Indiana has been the largest steel producing center in the U.S. since 1975 and accounted for 27 percent of American-made steel in 2016.[155]
Indiana is home to the international headquarters and research facilities of pharmaceutical company Eli Lilly in Indianapolis, the state's largest corporation, as well as the world headquarters of Mead Johnson Nutritionals in Evansville.[156] Overall, Indiana ranks fifth among all U.S. states in total sales and shipments of pharmaceutical products and second highest in the number of biopharmaceutical related jobs.[157]
Indiana is within the U.S. Corn Belt and Grain Belt. The state has a feedlot-style system raising corn to fatten hogs and cattle. Along with corn, soybeans are also a major cash crop. Its proximity to large urban centers, such as Indianapolis and Chicago, ensures that dairying, egg production, and specialty horticulture also occur.
Other crops include melons, tomatoes, grapes, mint, popping corn, and tobacco in the southern counties.[158] Most of the original land was not prairie and had to be cleared of deciduous trees. Many parcels of woodland remain and support a furniture-making sector in the southern portion of the state.
In 2011 Indiana was ranked first in the Midwest and sixth in the country for best places to do business according to CEO magazine.[159]
Tax is collected by the Indiana Department of Revenue.[160]
Indiana has a flat state income tax rate of 3.23%. Many of the state's counties also collect income tax. The state sales tax rate is 7% with exemptions for food, prescription medications and over-the-counter medications.[161] In some jurisdictions, an additional Food and Beverage Tax is charged, at a rate of 1% (Marion County's rate is 2%), on sales of prepared meals and beverages.[162]
Property taxes are imposed on both real and personal property in Indiana and are administered by the Department of Local Government Finance. Property is subject to taxation by a variety of taxing units (schools, counties, townships, municipalities, and libraries), making the total tax rate the sum of the tax rates imposed by all taxing units in which a property is located. However, a "circuit breaker" law enacted on March 19, 2008 limits property taxes to 1% of assessed value for homeowners, 2% for rental properties and farmland, and 3% for businesses.
Indiana does not have a legal requirement to balance the state budget either in law or its constitution. Instead, it has a constitutional ban on assuming debt. The state has a Rainy Day Fund and maintains healthy reserves proportional to spending. Indiana is one of six US states that do not allow a line-item veto.[163]
In fiscal year 2011, Indiana reported one of the largest surpluses among U.S. states, with an extra $1.2 billion in its accounts. Governor Mitch Daniels authorized bonus payments of up to $1,000 for state employees on Friday, July 15, 2011. An employee who "met expectations" received $500, those who "exceeded expectations" received $750, and "outstanding workers" received an extra $1,000 in their August paychecks.[164] Since 2010, Indiana has been one of a few states to hold AAA bond credit ratings with the Big Three credit rating agencies, the highest possible rating.[165]
Indiana's power production relies chiefly on fossil fuels, mainly coal. It has 24 coal power plants, including the country's largest coal power plant, Gibson Generating Station, across the Wabash River from Mount Carmel, Illinois. Indiana is also home to the coal-fired plant with the highest sulfur dioxide emissions in the United States, the Gallagher power plant, just west of New Albany.[167]
In 2010, Indiana had estimated coal reserves of 57 billion tons, and state mining operations produced 35 million tons of coal annually.[168] Indiana also has at least 900 million barrels of petroleum reserves in the Trenton Field, though they are not easily recoverable. While Indiana has made commitments to increasing use of renewable resources such as wind, hydroelectric, biomass, or solar power, progress has been very slow, mainly because of the continued abundance of coal in southern Indiana. Most of the new plants in the state have been coal gasification plants. Another source is hydroelectric power.
Wind power has also been developed in the state. Estimates in 2006 raised Indiana's potential wind capacity from 30 MW at a 50 m turbine height to 40,000 MW at 70 m, and a 2010 estimate put it at 130,000 MW at 100 m, the height of newer turbines.[169] By the end of 2011, Indiana had installed 1,340 MW of wind turbines.[170]
Indianapolis International Airport serves the greater Indianapolis area. Its current facility opened in November 2008 and offers a midfield passenger terminal, concourses, an air traffic control tower, a parking garage, and airfield and apron improvements.[171]
Other major airports include Evansville Regional Airport, Fort Wayne International Airport (which houses the 122d Fighter Wing of the Air National Guard), and South Bend International Airport. A long-standing proposal to turn Gary Chicago International Airport into Chicago's third major airport received a boost in early 2006 with the approval of $48 million in federal funding over the next ten years.[172]
No airlines operate out of Terre Haute Regional Airport, but it is used for private planes. The 181st Fighter Wing of the Indiana Air National Guard had been stationed there since 1954, but the Base Realignment and Closure (BRAC) Proposal of 2005 stated the 181st would lose its fighter mission and F-16 aircraft, leaving Terre Haute a general-aviation-only facility.
Louisville International Airport, across the Ohio River in Louisville, Kentucky, serves southern Indiana, as does Cincinnati/Northern Kentucky International Airport in Hebron, Kentucky. Many residents of Northwest Indiana, which is primarily in the Chicago Metropolitan Area, use Chicago's airports, O'Hare International Airport and Chicago Midway International Airport.[citation needed]
The major U.S. Interstate highways in Indiana are I-64, I-65, I-265, I-465, I-865, I-69, I-469, I-70, I-74, I-80, I-90, I-94, and I-275. The various highways intersecting in and around Indianapolis, along with its historical status as a major railroad hub, and the canals that once crossed Indiana, are the source of the state's motto, the Crossroads of America. There are also many U.S. routes and state highways maintained by the Indiana Department of Transportation. These are numbered according to the same convention as U.S. Highways. Indiana allows highways of different classifications to have the same number. For example, I-64 and Indiana State Road 64 both exist (rather close to each other) in Indiana, but are two distinct roads with no relation to one another.
A $3 billion project extending I-69 is underway. The project was divided into six sections, with the first five sections (linking Evansville to Martinsville) now complete. The sixth and final phase to Indianapolis is in planning. When complete, I-69 will traverse an additional 142 miles (229 km) through the state.[173]
Most Indiana counties use a grid-based system to identify county roads; this system replaced the older arbitrary system of road numbers and names, and (among other things) makes it much easier to identify the sources of calls placed to the 9-1-1 system. Such systems are easier to implement in the glacially flattened northern and central portions of the state. Rural counties in the southern third of the state are less likely to have grids and more likely to rely on unsystematic road names (for example, Crawford, Harrison, Perry, Scott, and Washington Counties).
There are also counties in the northern portions of the state that have never implemented a grid, or have only partially implemented one. Some counties are laid out in an almost diamond-like pattern (e.g., Clark, Floyd, Gibson, and Knox Counties), where a conventional grid system is of little use. Knox County once operated two different grid systems for county roads because the county was laid out using two different survey grids, but has since decided to use road names and combine roads instead.
Notably, the county road grid system of St. Joseph County, whose major city is South Bend, uses tree names (e.g., Ash, Hickory, Ironwood) in alphabetical order for north-south roads and presidential and other noteworthy names (e.g., Adams, Edison, Lincoln Way) in alphabetical order for east-west roads. There are exceptions to this rule in downtown South Bend and Mishawaka. Hamilton County simply continues the numbered street system from downtown Indianapolis, running from 96th Street at the Marion County line to 296th Street at the Tipton County line.
Indiana has more than 4,255 railroad route miles, of which 91 percent are operated by Class I railroads, principally CSX Transportation and the Norfolk Southern Railway. Other Class I railroads in Indiana include the Canadian National Railway and Soo Line Railroad, a Canadian Pacific Railway subsidiary, as well as Amtrak. The remaining miles are operated by 37 regional, local, and switching and terminal railroads. The South Shore Line is one of the country's most notable commuter rail systems, extending from Chicago to South Bend. Indiana is implementing an extensive rail plan prepared in 2002 by the Parsons Corporation.[174] Many recreational trails, such as the Monon Trail and Cardinal Greenway, have been created from abandoned rail routes.
Indiana ships over 70 million tons of cargo by water each year, which ranks 14th among all U.S. states.[citation needed] More than half of Indiana's border is water, which includes 400 miles (640 km) of direct access to two major freight transportation arteries: the Great Lakes/St. Lawrence Seaway (via Lake Michigan) and the Inland Waterway System (via the Ohio River). The Ports of Indiana manages three major ports: Burns Harbor, Jeffersonville, and Mount Vernon.[175]
In Evansville, three public and several private port facilities receive year-round service from five major barge lines operating on the Ohio River. Evansville has been a U.S. Customs Port of Entry for more than 125 years. Because of this, it is possible to have international cargo shipped to Evansville in bond. The international cargo can then clear Customs in Evansville rather than a coastal port.[citation needed]
Indiana's 1816 constitution was the first in the country to implement a state-funded public school system. It also allotted one township for a public university.[176] However, the plan turned out to be far too idealistic for a pioneer society, as tax money was not accessible for its organization. In the 1840s, Caleb Mills pressed the need for tax-supported schools, and in 1851 his advice was included in the new state constitution. In 1843 the Legislature ruled that African Americans could not attend the public schools, leading to the foundation of Union Literary Institute and other schools for them, funded by donations or the students themselves.
Although the growth of the public school system was held up by legal entanglements, many public elementary schools were in use by 1870. Most children in Indiana attend public schools, but nearly 10 percent attend private schools and parochial schools. About one-half of all college students in Indiana are enrolled in state-supported four-year schools.
Indiana's public schools have gone through several changes throughout the state's history. Modern public school standards have been implemented throughout the state; the current standards were adopted in April 2014. Their overall goal is to ensure Indiana students have the skills and knowledge needed to enter college or the workforce upon high school graduation.[177] State standards exist for nearly every major subject taught in Indiana public schools, with mathematics, English/language arts, science, and social studies among the top priorities. In 2017, the Indiana Department of Education reported that the state's overall graduation rates were 87.19% for waivered graduations and 80.10% for non-waiver graduations.[178]
The largest educational institution is Indiana University, the flagship campus of which was founded as Indiana Seminary in 1820. Indiana State University was established as the state's Normal School in 1865; Purdue University was chartered as a land-grant college in 1869. The three other independent state universities are Vincennes University (founded in 1801 by the Indiana Territory), Ball State University (1918) and the University of Southern Indiana (1965 as ISU – Evansville).
Many of Indiana's private colleges and universities are affiliated with religious groups. The University of Notre Dame, Marian University, and the University of Saint Francis are popular Roman Catholic schools. Universities affiliated with Protestant denominations include Anderson University, Butler University, Huntington University, Manchester University, Indiana Wesleyan University, Taylor University, Franklin College, Hanover College, DePauw University, Earlham College, Valparaiso University, University of Indianapolis,[123] and University of Evansville.[179]
The state's community college system, Ivy Tech Community College of Indiana, serves nearly 200,000 students annually, making it the state's largest public post-secondary educational institution and the nation's largest singly accredited statewide community college system.[180] In 2008, the Indiana University system agreed to shift most of its associate (2-year) degrees to the Ivy Tech Community College System.[181]
The state has several universities ranked among the best in 2013 rankings of the U.S. News & World Report. The University of Notre Dame is ranked among the top 20, with Indiana University Bloomington and Purdue University ranking in the top 100. Indiana University – Purdue University Indianapolis (IUPUI) has recently made it into the top 200 U.S. News & World Report rankings. Butler, Valparaiso, and the University of Evansville are ranked among the top ten in the Regional University Midwest Rankings. Purdue's engineering programs are ranked eighth in the country. In addition, Taylor University is ranked first in the Regional College Midwest Rankings and Rose-Hulman Institute of Technology has been considered the top Undergraduate Engineering school (where a doctorate is not offered) for 15 consecutive years.[182][183][184][185]
|
en/2722.html.txt
ADDED
@@ -0,0 +1,116 @@
1 |
+
|
2 |
+
|
3 |
+
Indiana Jones is an American media franchise based on the adventures of Dr. Henry Walton "Indiana" Jones, Jr., a fictional professor of archaeology. The franchise began in 1981 with the film Raiders of the Lost Ark. In 1984, a prequel, Indiana Jones and the Temple of Doom, was released, followed in 1989 by a sequel, Indiana Jones and the Last Crusade. A fourth film, Indiana Jones and the Kingdom of the Crystal Skull, followed in 2008. A fifth film is in development and is provisionally scheduled for release in 2022. The series was created by George Lucas and stars Harrison Ford as Indiana Jones. The first four films were directed by Steven Spielberg.
|
4 |
+
|
5 |
+
In 1992, the franchise expanded to a television series with The Young Indiana Jones Chronicles, portraying the character in his childhood and youth, and including adventures with his father. Marvel Comics began publishing The Further Adventures of Indiana Jones in 1983, and Dark Horse Comics gained the comic book rights to the character in 1991. Novelizations of the films have been published, as well as many novels with original adventures, including a series of German novels by Wolfgang Hohlbein, twelve novels set before the films published by Bantam Books, and a series set during the character's childhood inspired by the television show. Numerous Indiana Jones video games have been released since 1982.
|
6 |
+
|
7 |
+
During 1973, George Lucas wrote The Adventures of Indiana Smith.[1] Like Star Wars, it was an opportunity to create a modern version of the movie serials of the 1930s and 1940s.[2] Lucas discussed the concept with Philip Kaufman, who worked with him for several weeks and decided upon the Ark of the Covenant as the MacGuffin. The project was stalled when Clint Eastwood hired Kaufman to write The Outlaw Josey Wales.[3] In May 1977, Lucas was in Maui, trying to escape the enormous success of Star Wars. His friend and colleague Steven Spielberg was also there, on vacation from work on Close Encounters of the Third Kind. Spielberg told Lucas he was interested in making a James Bond film, but Lucas told him of an idea "better than James Bond", outlining the plot of Raiders of the Lost Ark. Spielberg loved it, calling it "a James Bond film without the hardware",[4] and had the character's surname changed to Jones.[2] Spielberg and Lucas made a deal with Paramount Pictures for five Indiana Jones films.[4]
|
8 |
+
|
9 |
+
Spielberg and Lucas aimed to make Indiana Jones and the Temple of Doom much darker, because of their personal moods following their respective breakups and divorces. Lucas made the film a prequel as he did not want the Nazis to be the villains again. He had ideas regarding the Monkey King and a haunted castle, but eventually created the Sankara Stones.[5] He hired Willard Huyck and Gloria Katz to write the script as he knew of their interest in Indian culture.[6] The major scenes that were dropped from Raiders of the Lost Ark were included in this film: an escape using a giant rolling gong as a shield, a fall out of a plane in a raft, and a mine cart chase.[2] For the third film, Spielberg revisited the Monkey King and haunted castle concepts, before Lucas suggested the Holy Grail. Spielberg had previously rejected this as too ethereal, but then devised a father-son story and decided that "The Grail that everybody seeks could be a metaphor for a son seeking reconciliation with a father and a father seeking reconciliation with a son."[7]
|
10 |
+
|
11 |
+
Following the 1989 release of Indiana Jones and the Last Crusade, Lucas let the series end as he felt he could not think of a good plot device to drive the next installment, and chose instead to produce The Young Indiana Jones Chronicles, which explored the character in his early years. Ford played Indiana in one episode, narrating his adventures in 1920 Chicago. When Lucas shot Ford's role in December 1992, he realized that the scene opened up the possibility of a film with an older Indiana set in the 1950s. The film could reflect a science fiction 1950s B-movie, with aliens as the plot device.[8] Ford disliked the new angle, telling Lucas: "No way am I being in a Steve Spielberg movie like that."[9] Spielberg himself, who depicted aliens in Close Encounters of the Third Kind and E.T. the Extra-Terrestrial, resisted it. Lucas devised a story, which Jeb Stuart turned into a script from October 1993 to May 1994.[8] Lucas wanted Indiana to get married, which would allow Henry Jones Sr. to return, expressing concern over whether his son is happy with what he has accomplished. After learning that Joseph Stalin was interested in psychic warfare, Lucas decided to have Russians as the villains and the aliens to have psychic powers.[10] Following Stuart's next draft, Lucas hired Last Crusade writer Jeffrey Boam to write the next three versions, the last of which was completed in March 1996. Three months later, Independence Day was released, and Spielberg told Lucas he would not make another alien invasion film (or at least not until War of the Worlds in 2005). Lucas decided to focus on the Star Wars prequels instead.[8]
|
12 |
+
|
13 |
+
In 2000, Spielberg's son asked when the next Indiana Jones film would be released, which made him interested in reviving the project.[11] The same year, Ford, Lucas, Spielberg, Frank Marshall, and Kathleen Kennedy met during the American Film Institute's tribute to Ford, and decided they wanted to enjoy the experience of making an Indiana Jones film again. Spielberg also found returning to the series a respite from his many dark films during this period.[12] Spielberg and Lucas discussed the central idea of a B-movie involving aliens, and Lucas suggested using crystal skulls to ground the idea. Lucas found these artifacts as fascinating as the Ark,[13] and had intended to feature them for a Young Indiana Jones episode before the show's cancellation.[8] M. Night Shyamalan was hired to write for an intended 2002 shoot,[11] but he was overwhelmed by the task, and claimed it was difficult to get Ford, Spielberg, and Lucas to focus.[14] Stephen Gaghan and Tom Stoppard were also approached.[11]
|
14 |
+
|
15 |
+
Frank Darabont, who wrote various Young Indiana Jones episodes, was hired to write in May 2002.[15] His script, titled Indiana Jones and the City of Gods,[8] was set in the 1950s, with ex-Nazis pursuing Jones.[16] Spielberg conceived the idea because of real-life figures such as Juan Perón in Argentina, who allegedly protected Nazi war criminals.[8] Darabont claimed Spielberg loved the script, but Lucas had issues with it, and decided to take over writing himself.[8] Lucas and Spielberg acknowledged that the 1950s setting could not ignore the Cold War, and the Russians were more plausible villains. Spielberg decided he could not satirize the Nazis after directing Schindler's List,[17] while Ford felt "We plum[b] wore the Nazis out."[9] Darabont's main contribution was reintroducing Marion Ravenwood as Indiana's love interest, but he gave them a 13-year-old daughter, which Spielberg decided was too similar to The Lost World: Jurassic Park.[8]
|
16 |
+
|
17 |
+
Jeff Nathanson met with Spielberg and Lucas in August 2004, and turned in the next drafts in October and November 2005, titled The Atomic Ants. David Koepp continued on from there, giving his script the subtitle Destroyer of Worlds,[8] based on the Robert Oppenheimer quote. It was changed to Kingdom of the Crystal Skull, as Spielberg found this a more inviting title which actually named the plot device.[18] Koepp wanted to depict the character of Mutt as a nerd, but Lucas refused, explaining he had to resemble Marlon Brando in The Wild One; "he needs to be what Indiana Jones' father thought of [him] – the curse returns in the form of his own son – he's everything a father can't stand".[8] Koepp collaborated with Lawrence Kasdan on the film's "love dialogue".[19]
|
18 |
+
|
19 |
+
The Walt Disney Company has owned the Indiana Jones intellectual property since its acquisition of Lucasfilm, the series' production company, in 2012, when Lucas sold it for $4 billion.[20] Walt Disney Studios owns the distribution and marketing rights to future Indiana Jones films since 2013, with Paramount retaining the distribution rights to the first four films and receiving "financial participation" from any additional films.[21][22][23]
|
20 |
+
|
21 |
+
The first film is set in 1936. Indiana Jones (Harrison Ford) is hired by government agents to locate the Ark of the Covenant before the Nazis. The Nazis have teams searching for religious artefacts, including the Ark, which is rumored to make an army that carries the Ark before it invincible.[24] The Nazis are being helped by Indiana's nemesis René Belloq (Paul Freeman). With the help of his old flame Marion Ravenwood (Karen Allen) and Sallah (John Rhys-Davies), Indiana manages to recover the Ark in Egypt. The Nazis steal the Ark and capture Indiana and Marion. Belloq and the Nazis perform a ceremony to open the Ark, but when they do so, they are all killed gruesomely by the Ark's wrath. Indiana and Marion, who survived by closing their eyes, manage to get the Ark to the United States, where it is stored in a secret government warehouse.
|
22 |
+
|
23 |
+
The second film is set in 1935, a year before Raiders of the Lost Ark. Indiana escapes Chinese gangsters with the help of singer/actress Willie Scott (Kate Capshaw) and his twelve-year-old sidekick Short Round (Jonathan Ke Quan). The trio crash-land in India, where they come across a village whose children have been kidnapped. The Thuggee led by Mola Ram (Amrish Puri) has also taken the holy Sankara Stones, which they will use to take over the world. Indiana manages to overcome Mola Ram's evil power, rescues the children and returns the stones to their rightful place, overcoming his own mercenary nature. The film has been noted as an outlier in the franchise, as it does not feature Indy's university or any antagonistic political entity, and is less focused on archaeology, being presented as a dark movie with gross-out elements, human sacrifice and torture.
|
24 |
+
|
25 |
+
The third film is set in 1938. Indiana and his friend Marcus Brody (Denholm Elliott) are assigned by American businessman Walter Donovan (Julian Glover) to find the Holy Grail. They are teamed up with Dr. Elsa Schneider (Alison Doody), following on from where Indiana's estranged father Henry (Sean Connery) left off before he disappeared. It transpires that Donovan and Elsa are in league with the Nazis, who captured Henry Jones in order to get Indiana to help them find the Grail. However, Indiana recovers his father's diary filled with his research, and manages to rescue him before finding the location of the Grail. Both Donovan and Elsa fall to the temptation of the Grail, while Indiana and Henry realize that their relationship with each other is more important than finding the relic.
|
26 |
+
|
27 |
+
The fourth film is set in 1957, nineteen years after The Last Crusade. Indiana is having a quiet life teaching before being thrust into a new adventure. He races against agents of the Soviet Union, led by Irina Spalko (Cate Blanchett) for a crystal skull. His journey takes him across Nevada, Connecticut, Peru, and the Amazon rainforest in Brazil. Indiana is faced with betrayal by one of his best friends, Mac (Ray Winstone), is introduced to a greaser named Mutt Williams (Shia LaBeouf), who turns out to be his son (his real name revealed to be Henry Jones III), and is reunited with, and eventually marries, Marion Ravenwood, who was introduced in the first movie.
|
28 |
+
|
29 |
+
A fifth Indiana Jones film is in development under Disney with James Mangold directing, Spielberg, Marshall, and Kathleen Kennedy producing,[25] Ford returning to play the titular character,[26] Lucas returning to executive produce,[27] and John Williams returning to compose the score.[28] It is scheduled for release on July 29, 2022.[29] Frank Marshall has affirmed that the film will be a sequel,[26] and in May 2020, said that writing had "just started",[25] despite multiple drafts having been worked on by different writers. Disney CEO Bob Iger has indicated that the film will not be the conclusion of the franchise as a whole.[30]
|
30 |
+
|
31 |
+
Ford said he would return for a fifth film if it does not take another twenty years to develop.[31] In 2008, Lucas suggested that he might "make Shia LaBeouf the lead character next time and have Harrison Ford come back like Sean Connery did in the last movie",[32] but later said this would not be the case.[33][a] In August 2008, Lucas was researching potential plot devices, and stated that Spielberg was open to the idea of the fifth film.[34][b]
|
32 |
+
In November 2010, Ford said that he and Spielberg were waiting for Lucas to present an idea to them.[36] In March 2011, Karen Allen said, "What I know is that there's a story that they like, which is a huge step forward."[37] In July 2012, Frank Marshall disclosed that "It's not on until there is a writer on the project."[38]
|
33 |
+
|
34 |
+
In October 2012, The Walt Disney Company acquired Lucasfilm, thereby granting Disney ownership rights to the Indiana Jones intellectual property.[39][40] In December 2013, Walt Disney Studios purchased the distribution and marketing rights to future Indiana Jones films, with Paramount Pictures receiving "financial participation" from any additional films.[21][22][23] In December 2013, studio chairman Alan Horn said that a fifth Indiana Jones film would not be ready for at least two to three years.[41] In a May 2015 interview with Vanity Fair, Kathleen Kennedy confirmed plans for a fifth film, stating that another film "will one day be made inside this company. ... We haven't started working on a script yet, but we are talking about it."[42]
|
35 |
+
|
36 |
+
On March 15, 2016, Disney announced that the fifth film would be released on July 19, 2019, with Ford reprising his role, Spielberg directing, Koepp writing, and Kennedy and Marshall acting as producers. In June, Spielberg confirmed that Lucas would return as executive producer, despite Deadline Hollywood having reported otherwise.[27][43] Spielberg also announced that John Williams would return to compose the score.[28] On April 25, 2017, the official Star Wars website updated the film's release date to July 10, 2020.[44] In September 2017, Bob Iger said that the future of the franchise with Ford was unknown, but that the film "won't be just a one-off". Spielberg promised that Indiana would not be killed off,[30] and Koepp stated that Mutt would not return in the movie.[45] In January 2018, Deadline Hollywood reported that Spielberg was eyeing the film as his next project following the completion of Ready Player One.[46][c]
|
37 |
+
|
38 |
+
In June 2018, it was reported that Jonathan Kasdan had replaced Koepp as scriptwriter, and that the film would miss its 2020 release date.[48][49] Shortly thereafter, Disney postponed the film's release date to July 9, 2021.[50] A few months later, Marshall stated, "I dunno if you'd call it a writers room, but a lot of people that we trust pitch ideas and things."[51] In May 2019, it was reported that Kasdan had written his script from scratch, but that his work was now being replaced by Dan Fogelman, whose screenplay used "an entirely different premise".[52] Two months later, Ford said that the film "should be starting to shoot sometime next year".[53] Later reports narrowed the beginning of filming down to April 2020,[54] suggesting principal photography was to take place at the Iver-based Pinewood Studios.[55] Speaking in September 2019, Koepp said that he was working on the project again, and that they had "got a good idea this time".[56][d]
|
39 |
+
|
40 |
+
In February 2020, Spielberg stepped down as director, stating that he wanted to "pass along Indy's whip to a new generation to bring their perspective to the story".[58] James Mangold will direct the film,[25] while Spielberg will remain attached as a "hands-on" producer.[58] In April 2020, it was reported that the film's release date was delayed to July 29, 2022, because of the COVID-19 pandemic.[29] The next month, Marshall said that work had "just started" on the script,[25] which, according to SyFy Wire, is being written by Jonathan Kasdan.[59] In June, Koepp confirmed that he was no longer involved with the project.[60]
|
41 |
+
|
42 |
+
A television series titled The Young Indiana Jones Chronicles (1992–1996) featured three incarnations of the character: Sean Patrick Flanery played Indiana aged 16–21; Corey Carrier played an 8- to 10-year-old version in several episodes; and George Hall narrated the show as the 93-year-old Jones, who bookended each episode. Lucas began developing the series in 1990 as "edutainment" that would be more cerebral than the films. The show was his first collaboration with producer Rick McCallum, and he wrote the stories for each episode. Writers and directors on the show included Carrie Fisher, Frank Darabont, Vic Armstrong, Ben Burtt, Terry Jones, Nicolas Roeg, Mike Newell and Joe Johnston. In the Chronicles, Jones crosses paths with many historical figures, played by stars such as Daniel Craig, Christopher Lee, Bob Peck, Jeffrey Wright, Marc Warren, Catherine Zeta-Jones, Elizabeth Hurley, Anne Heche, Vanessa Redgrave, Julian Fellowes, Timothy Spall and Harrison Ford as a 50-year-old Indiana in one episode (taking the usual place of Hall).[61][62][63]
|
43 |
+
|
44 |
+
The show was filmed in over 25 countries for over 150 weeks. Season one was shot from March 1991 to March 1992; the second season began two months later and wrapped in April 1993.[64] The ABC network was unsure of Lucas's cerebral approach, and attempted to advertise the series as an action-adventure like the films. Ratings were good if unspectacular, and ABC was nervous enough to put the show on hiatus after six episodes until September 1992.[61] With only four episodes left of the second season to air, ABC eventually sold the show to the Family Channel, who changed the format from 50-minute episodes to 90-minute TV movies. Filming for the final four episodes took place from January 1994 to May 1996.[64] The Young Indiana Jones Chronicles received a mixed reception from fans, although it won 10 Emmy Awards out of 23 nominations, as well as a 1994 Golden Globe nomination for Best Drama series. It was also an experimentation ground in digital effects for Lucasfilm.[61]
|
45 |
+
|
46 |
+
The original broadcast versions of some episodes were briefly released in Japan on laserdisc in 1993 and on VHS in 1994. However, Lucas drastically re-edited and restructured the show for its worldwide home video release. Major structural changes were made, including the complete removal of the 'bookend' sections narrated by the 93-year-old Jones, and the editing of all the one-hour episodes together into two-hour episodes. Approximately half of the series was released on VHS in various markets around the world in 1999, but the entire series was not released until its DVD debut, in a series of three boxsets released from 2007 to 2008 to tie in with the theatrical debut of Kingdom of the Crystal Skull. Among other extras, the DVDs include approximately 100 new historical featurettes.
|
47 |
+
|
48 |
+
This is a list of characters who have appeared in the Indiana Jones film franchise.
|
49 |
+
|
50 |
+
A novelization of Raiders of the Lost Ark was written by Campbell Black and published by Ballantine Books in April 1981.[89] It was followed by Indiana Jones and the Temple of Doom, written by James Kahn and published by Ballantine in May 1984.[90] Finally, Indiana Jones and the Last Crusade was published in May 1989, and was the first Indiana Jones book by Rob MacGregor.[91] A fan of the first two films, MacGregor admitted that writing the novelization made him "somewhat disappointed" with the third film, as he had expanded the script whereas Steven Spielberg had cut scenes to tighten the story.[92]
|
51 |
+
|
52 |
+
George Lucas asked MacGregor to continue writing original novels for Bantam Books. These were geared toward an adult or young adult audience, and were prequels set in the 1920s or early 1930s after Jones graduates from college. Of the film characters, Lucas only permitted Marcus Brody to appear.[92] He asked MacGregor to base the books on real myths, but except for the deletion of a sex scene, the writer was given total creative freedom. His six books – Indiana Jones and the Peril at Delphi, Indiana Jones and the Dance of the Giants, Indiana Jones and the Seven Veils, Indiana Jones and the Genesis Deluge, Indiana Jones and the Unicorn's Legacy, and Indiana Jones and the Interior World – were published from February 1991 to November 1992. The Genesis Deluge, published in February 1992 and featuring Noah's Ark, was the best-selling novel; MacGregor felt this was because it "had a strong following among religious-oriented people [...] because they tend to take the Noah's Ark story to heart and think of it as history and archaeological fact, rather than myth." MacGregor's favorite book was The Seven Veils,[92] which featured real-life explorer Percy Fawcett and the death of Indiana's wife, Deirdre Campbell.[93][94][95][96][97][98]
|
53 |
+
|
54 |
+
Martin Caidin wrote the next two novels in Bantam's series, Indiana Jones and the Sky Pirates and Indiana Jones and the White Witch. These feature Gale Parker as Indiana's sidekick; they introduced afterwords to the series, regarding each novel's historical context.[99][100]
|
55 |
+
|
56 |
+
Caidin became ill, so Max McCoy took over in 1995 and wrote the final four novels: Indiana Jones and the Philosopher's Stone, Indiana Jones and the Dinosaur Eggs, Indiana Jones and the Hollow Earth, and Indiana Jones and the Secret of the Sphinx. McCoy set his books closer in time to the events of Raiders of the Lost Ark, which led to his characterizing Indiana as "a bit darker". The prologue of his first book featured a crystal skull,[101] and this became a recurring story element, concluding when Jones gives it up in the final novel. Lucas' involvement with McCoy's novels was limited, although Lucasfilm censored sexual or outlandish elements in order to make the books appeal to younger readers;[102] they also rejected the theme of time travel in the final book.[101] Sallah, Lao Che, Rene Belloq, and the Nazis made appearances, and McCoy also pitted Jones against Benito Mussolini's fascists and the Japanese. Jones also has a doomed romance with Alecia Dunstin, a librarian at the British Museum.[103][104][105][106] A novel involving the Spear of Destiny was dropped because Dark Horse Comics, and later DC Comics, developed the idea.[101]
|
57 |
+
|
58 |
+
The books were only published in paperback, as the series editor felt readers would not be prepared to pay the hardback price for an adventure novel.[107]
|
59 |
+
|
60 |
+
In February 2008, the novelizations of the first three films were published in one edition;[108] James Rollins' Kingdom of the Crystal Skull novelization arrived the following May.[109] Children's novelizations of all four films were published by Scholastic in 2008.[110]
|
61 |
+
|
62 |
+
MacGregor was said to be writing new books for Ballantine for early 2009, but none have been published.[111]
|
63 |
+
|
64 |
+
A new adult adventure, Indiana Jones and the Army of the Dead by Steve Perry, was released in September 2009.[112]
|
65 |
+
|
66 |
+
A novel based on the video game Indiana Jones and the Staff of Kings, written by MacGregor to coincide with the release of the game, was canceled due to problems around the game's production.[113]
|
67 |
+
|
68 |
+
Additionally, German author Wolfgang Hohlbein wrote eight Indiana Jones novels in the early 1990s, which were never translated to English.
|
69 |
+
|
70 |
+
All of the following were published by Bantam Books, with the exception of Army of the Dead, which was published by Del Rey.
|
71 |
+
|
72 |
+
Indiana Jones novels by Wolfgang Hohlbein:
|
73 |
+
|
74 |
+
Ballantine Books published a number of Indiana Jones books in the Find Your Fate line, written by various authors. These books were similar to the Choose Your Own Adventure series, allowing the reader to select from options that change the outcome of the story. Indiana Jones books comprised 11 of the 17 releases in the line, which was initially titled Find Your Fate Adventure.[114]
|
75 |
+
|
76 |
+
In 2008, Scholastic released a series of middle-grade novels based on the stories and screenplays. Each book of this edition included several pages of color stills from filming.
|
77 |
+
|
78 |
+
In May 2009, two new middle-grade books were to begin a new series of Untold Adventures, though no further books appeared.[115]
|
79 |
+
|
80 |
+
In the early 1990s, different book series featured childhood and young adult adventures of Indiana Jones in the early decades of the century. Not all were directly tied to the Young Indiana Jones Chronicles TV series.
|
81 |
+
|
82 |
+
The following books are set in Indy's mid- to late-teen years.
|
83 |
+
|
84 |
+
These books were novelizations of episodes of the TV series. Some feature Indy around age 8; others have him age 16-18.
|
85 |
+
|
86 |
+
These are labeled Choose Your Own Adventure books. Like the TV series, some feature Indy around age 8, others age 16-18.
|
87 |
+
|
88 |
+
The Young Indiana Jones Chronicles:
|
89 |
+
|
90 |
+
Young Indiana Jones:
|
91 |
+
|
92 |
+
Since the release of the original film, there have been a number of video games based on the Indiana Jones series. These include both games based on (or derived from) the films, as well as those featuring the characters in new storylines.
|
93 |
+
|
94 |
+
Prior to Disney's acquisition, George Lucas collaborated with Walt Disney Imagineering on several occasions to create Indiana Jones attractions for Walt Disney Parks and Resorts worldwide. Indiana Jones-themed attractions and appearances at Disney theme parks include:
|
95 |
+
|
96 |
+
For the holiday season following the June 1981 debut of Raiders of the Lost Ark, Kenner produced a 12-inch-tall "Authentically styled Action Figure" of Indiana Jones. The next spring they delivered nine smaller-scale (3¾") action figures, three playsets, and replicas of the German desert convoy truck and Jones' horse, all derived from the Raiders movie.[124] They also offered a Raiders board game.[125]
|
97 |
+
|
98 |
+
In conjunction with the theatrical release of The Temple of Doom in 1984, TSR, Inc. released miniature metal versions of twelve characters from both films for a role-playing game. LJN Toys Ltd. also released action figures of Jones, Mola Ram, and the Giant Thuggee.
|
99 |
+
|
100 |
+
No toys were produced to tie in with The Last Crusade in 1989.
|
101 |
+
|
102 |
+
Hasbro released toys based on Raiders of the Lost Ark and Kingdom of the Crystal Skull in 2008. Further figures, including characters from The Temple of Doom and The Last Crusade, followed later in the year,[126] but were distributed on a very limited basis. This line of toys included 3¾-inch and 12-inch figures, vehicles, a playset, and a series of "Adventure Heroes" aimed at young children.[127] Hasbro announced the cancellation of the line in the fall of 2008, due to decreasing sales, although some figures continued to be released up until the 2011 San Diego Comic Convention.
|
103 |
+
|
104 |
+
Sideshow Collectibles, Gentle Giant, Diamond Select Toys, and Kotobukiya[128] also earned Indiana Jones licensing rights in 2008.[129][130][131][132] Lego released eight play sets to coincide with the fourth film, based on Raiders and The Last Crusade as well as on Kingdom of the Crystal Skull.[133][134]
|
105 |
+
|
106 |
+
Merchandise featuring franchise cross-overs include a Mr. Potato Head "Taters Of The Lost Ark" set by Hasbro,[135] Mickey Mouse as Indiana Jones,[136] and a Muppets-branded Adventure Kermit action figure, produced by Palisades Toys and based on the frog's appearance in the Disney World stunt show as seen in The Muppets at Walt Disney World.[137]
|
107 |
+
|
108 |
+
Disney Vinylmation introduced a series based on Indiana Jones characters in 2014.[138]
|
109 |
+
|
110 |
+
There have been two publications of role-playing games based on the Indiana Jones franchise. The Adventures of Indiana Jones Role-Playing Game was designed and published by TSR, Inc. under license in 1984.[139] Ten years later, West End Games acquired the rights to publish their own version, The World of Indiana Jones.
|
111 |
+
|
112 |
+
A pinball machine based on the first three films was released in 1993. Stern Pinball released a new edition in 2008, which featured all four movies.[140]
|
113 |
+
|
114 |
+
Footnotes
|
115 |
+
|
116 |
+
Citations
|
en/2723.html.txt
ADDED
@@ -0,0 +1,280 @@
1 |
+
|
2 |
+
|
3 |
+
Indianapolis (/ˌɪndiəˈnæpəlɪs/),[10][11][12] often shortened to Indy, is the state capital and most populous city of the U.S. state of Indiana and the seat of Marion County. According to 2019 estimates from the U.S. Census Bureau, the consolidated population of Indianapolis and Marion County was 886,220.[13] The "balance" population, which excludes semi-autonomous municipalities in Marion County, was 876,384.[14] It is the 17th most populous city in the U.S., the third-most populous city in the Midwest after Chicago, Illinois, and Columbus, Ohio, and the fourth-most populous state capital after Phoenix, Arizona; Austin, Texas; and Columbus. The Indianapolis metropolitan area is the 34th most populous metropolitan statistical area in the U.S., with 2,048,703 residents.[15] Its combined statistical area ranks 28th, with a population of 2,431,361.[16] Indianapolis covers 368 square miles (950 km2), making it the 16th largest city by land area in the U.S.
|
4 |
+
|
5 |
+
Indigenous peoples inhabited the area dating to approximately 2000 BC. In 1818, the Delaware relinquished their tribal lands in the Treaty of St. Mary's.[17] In 1821, Indianapolis was founded as a planned city for the new seat of Indiana's state government. The city was platted by Alexander Ralston and Elias Pym Fordham on a 1-square-mile (2.6 km2) grid next to the White River. Completion of the National and Michigan roads and arrival of rail later solidified the city's position as a manufacturing and transportation hub.[18] Two of the city's nicknames reflect its historical ties to transportation—the "Crossroads of America" and "Railroad City".[19][20][1] Since the 1970 city-county consolidation, known as Unigov, local government administration operates under the direction of an elected 25-member city-county council headed by the mayor.
|
6 |
+
|
7 |
+
Indianapolis anchors the 27th largest economic region in the U.S., based primarily on the sectors of finance and insurance, manufacturing, professional and business services, education and health care, government, and wholesale trade.[21] The city has notable niche markets in amateur sports and auto racing.[22][23] The Fortune 500 companies of Anthem, Eli Lilly and Company, and Simon Property Group are headquartered in Indianapolis.[24] The city has hosted many international multi-sport events, such as the 1987 Pan American Games and 2001 World Police and Fire Games, but is perhaps best known for annually hosting the world's largest single-day sporting event, the Indianapolis 500.[25]
|
8 |
+
|
9 |
+
Indianapolis is home to two major league sports clubs, the Indiana Pacers of the National Basketball Association (NBA) and the Indianapolis Colts of the National Football League (NFL). It is home to a number of educational institutions, such as the University of Indianapolis, Butler University, Marian University, and Indiana University – Purdue University Indianapolis (IUPUI). The city's robust philanthropic community has supported several cultural assets, including the world's largest children's museum, one of the nation's largest privately funded zoos, historic buildings and sites, and public art.[26][27][28][29] The city is home to the largest collection of monuments dedicated to veterans and war casualties in the U.S. outside of Washington, D.C.[30][31]
|
10 |
+
|
11 |
+
The name Indianapolis is derived from the state's name, Indiana (meaning "Land of the Indians", or simply "Indian Land"[32]), and polis, the Greek word for "city." Jeremiah Sullivan, justice of the Indiana Supreme Court, is credited with coining the name.[33] Other names considered were Concord, Suwarrow, and Tecumseh.[34]
|
12 |
+
|
13 |
+
In 1816, the year Indiana gained statehood, the U.S. Congress donated four sections of federal land to establish a permanent seat of state government.[35] Two years later, under the Treaty of St. Mary's (1818), the Delaware relinquished title to their tribal lands in central Indiana, agreeing to leave the area by 1821.[17] This tract of land, which was called the New Purchase, included the site selected for the new state capital in 1820.[36] The indigenous people of the land prior to systematic removal are the Miami Nation of Indiana (Miami Nation of Oklahoma) and Indianapolis makes up part of Cession 99; the primary treaty between the indigenous population and the United States was the Treaty of St. Mary's (1818).[37]
|
14 |
+
|
15 |
+
The availability of new federal lands for purchase in central Indiana attracted settlers, many of them descendants of families from northwestern Europe. Although many of these first European and American settlers were Protestants, a large proportion of the early Irish and German immigrants were Catholics. Few African Americans lived in central Indiana before 1840.[38] The first European Americans to permanently settle in the area that became Indianapolis were either the McCormick or Pogue families. The McCormicks are generally considered to be the first permanent settlers; however, some historians believe George Pogue and family may have arrived first, on March 2, 1819, and settled in a log cabin along the creek that was later called Pogue's Run. Other historians have argued as early as 1822 that John Wesley McCormick, his family, and employees became the area's first European American settlers, settling near the White River in February 1820.[39]
|
16 |
+
|
17 |
+
On January 11, 1820, the Indiana General Assembly authorized a committee to select a site in central Indiana for the new state capital.[40] The state legislature approved the site, adopting the name Indianapolis on January 6, 1821.[2] In April, Alexander Ralston and Elias Pym Fordham were appointed to survey and design a town plan for the new settlement.[41] Indianapolis became a seat of county government on December 31, 1821, when Marion County was established. A combined county and town government continued until 1832, when Indianapolis incorporated as a town. Indianapolis became an incorporated city effective March 30, 1847. Samuel Henderson, the city's first mayor, led the new city government, which included a seven-member city council. In 1853, voters approved a new city charter that provided for an elected mayor and a fourteen-member city council. The city charter continued to be revised as Indianapolis expanded.[42] Effective January 1, 1825, the seat of state government moved to Indianapolis from Corydon, Indiana. In addition to state government offices, a U.S. district court was established at Indianapolis in 1825.[43]
|
18 |
+
|
19 |
+
Growth occurred with the opening of the National Road through the town in 1827, the first major federally funded highway in the United States.[44] A small segment of the ultimately failed Indiana Central Canal was opened in 1839.[45] The first railroad to serve Indianapolis, the Jeffersonville, Madison and Indianapolis Railroad, began operation in 1847, and subsequent railroad connections fostered growth.[46] Indianapolis Union Station was the first of its kind in the world when it opened in 1853.[47]
|
20 |
+
|
21 |
+
During the American Civil War, Indianapolis was mostly loyal to the Union cause. Governor Oliver P. Morton, a major supporter of President Abraham Lincoln, quickly made Indianapolis a rallying place for Union army troops. On February 11, 1861, president-elect Lincoln arrived in the city, en route to Washington, D.C. for his presidential inauguration, marking the first visit from a president-elect in the city's history.[48] On April 16, 1861, the first orders were issued to form Indiana's first regiments and establish Indianapolis as a headquarters for the state's volunteer soldiers.[49][50] Within a week, more than 12,000 recruits signed up to fight for the Union.[51]
|
22 |
+
|
23 |
+
Indianapolis became a major logistics hub during the war, establishing the city as a crucial military base.[52][53] Between 1860 and 1870, the city's population more than doubled.[46] An estimated 4,000 men from Indianapolis served in 39 regiments, and an estimated 700 died during the war.[54] On May 20, 1863, Union soldiers attempted to disrupt a statewide Democratic convention at Indianapolis, forcing the proceedings to be adjourned, sarcastically referred to as the Battle of Pogue's Run.[55] Fear turned to panic in July 1863, during Morgan's Raid into southern Indiana, but Confederate forces turned east toward Ohio, never reaching Indianapolis.[56] On April 30, 1865, Lincoln's funeral train made a stop at Indianapolis, where an estimated crowd of more than 100,000 people passed the assassinated president's bier at the Indiana Statehouse.[53][57]
|
24 |
+
|
25 |
+
Following the Civil War—and in the wake of the Second Industrial Revolution—Indianapolis experienced tremendous growth and prosperity. In 1880, Indianapolis was the world's third largest pork packing city, after Chicago and Cincinnati, and the second largest railroad center in the United States by 1888.[58][59] By 1890, the city's population surpassed 100,000.[46] Some of the city's most notable businesses were founded during this period of growth and innovation, including L. S. Ayres (1872), Eli Lilly and Company (1876), Madam C. J. Walker Manufacturing Company (1910), and Allison Transmission (1915). Once home to 60 automakers, Indianapolis rivaled Detroit as a center of automobile manufacturing.[60] The city was an early focus of labor organization.[46] The Indianapolis Street Car Strike of 1913 and subsequent police mutiny and riots led to the creation of the state's earliest labor-protection laws, including a minimum wage, regular work weeks, and improved working conditions.[61] The International Typographical Union and United Mine Workers of America were among several influential labor unions based in the city.[46]
|
26 |
+
|
27 |
+
Some of the city's most prominent architectural features and best known historical events date from the turn of the 20th century. The Soldiers' and Sailors' Monument, dedicated on May 15, 1902, would later become the city's unofficial symbol.[62] Ray Harroun won the inaugural running of the Indianapolis 500, held May 30, 1911, at Indianapolis Motor Speedway. Indianapolis was one of the hardest hit cities in the Great Flood of 1913, resulting in five known deaths[63][64][65] and the displacement of 7,000 families.[66]
|
28 |
+
|
29 |
+
As a stop on the Underground Railroad, Indianapolis had a higher black population than any other city in the Northern States, until the Great Migration.[67] Led by D. C. Stephenson, the Indiana Klan became the most powerful political and social organization in Indianapolis from 1921 through 1928, controlling City Council and the Board of School Commissioners, among others. At its height, more than 40% of native-born white males in Indianapolis claimed membership in the Klan. While campaigning in the city in 1968, Robert F. Kennedy delivered one of the most lauded speeches in 20th century American history, following the assassination of civil rights leader Martin Luther King Jr.[68][69][70] As in most U.S. cities during the Civil Rights Movement, the city experienced strained race relations. A 1971 federal court decision forcing Indianapolis Public Schools to implement desegregation busing proved controversial.[71]
|
30 |
+
|
31 |
+
Under the mayoral administration of Richard Lugar, the city and county governments restructured, consolidating most public services into a new entity called Unigov. The plan removed bureaucratic redundancies, captured increasingly suburbanizing tax revenue, and created a Republican political machine that dominated Indianapolis politics until the 2000s decade.[72][73] Unigov went into effect on January 1, 1970, increasing the city's land area by 308.2 square miles (798 km2) and population by 268,366 people.[74][75] It was the first major city-county consolidation to occur in the United States without a referendum since the creation of the City of Greater New York in 1898.[76]
|
32 |
+
|
33 |
+
Amid the changes in government and growth, the city invested in an aggressive strategy to brand Indianapolis as a sports tourism destination. Under the administration of the city's longest-serving mayor, William Hudnut (1976–1992), millions of dollars were poured into sport facilities.[23] Throughout the 1980s, $122 million in public and private funding built the Indianapolis Tennis Center, Major Taylor Velodrome, Indiana University Natatorium, Carroll Track and Soccer Stadium, and Hoosier Dome.[23] The latter project secured the 1984 relocation of the NFL Baltimore Colts and the 1987 Pan American Games.[23] The economic development strategy succeeded in revitalizing the central business district through the 1990s, with the openings of the Indianapolis Zoo, Canal Walk,[45] Circle Centre Mall, Victory Field, and Bankers Life Fieldhouse.
|
34 |
+
|
35 |
+
During the 2000s, the city continued investing heavily in infrastructure projects, including two of the largest building projects in the city's history: the $1.1 billion Col. H. Weir Cook Terminal and $720 million Lucas Oil Stadium, both opened in 2008.[77][78] A $275 million expansion of the Indiana Convention Center was completed in 2011.[79] Construction began that year on DigIndy, a $1.9 billion project to correct the city's combined sewer overflows (CSOs) by 2025.[80]
|
36 |
+
|
37 |
+
Indianapolis is in the East North Central region of the Midwestern United States, in central Indiana. According to the U.S. Census Bureau, the Indianapolis balance encompasses a total area of 368.2 square miles (954 km2), of which 361.5 square miles (936 km2) is land and 6.7 square miles (17 km2) is water. The consolidated city boundaries are coterminous with Marion County, with the exception of the autonomous municipalities of Beech Grove, Lawrence, Southport, and Speedway.[46][81] Indianapolis is the 16th largest city by land area in the U.S.
|
38 |
+
|
39 |
+
Indianapolis is within the Tipton Till Plain, a flat to gently sloping terrain underlain by glacial deposits known as till.[82] The lowest point in the city is about 650 feet (198 m) above mean sea level, with the highest natural elevation at about 900 feet (274 m) above sea level.[82] A few hills or short ridges, known as kames, rise about 100 feet (30 m) to 130 feet (40 m) above the surrounding terrain.[82] The city lies just north of the Indiana Uplands, a region characterized by rolling hills and high limestone content. The city is also within the EPA's Eastern Corn Belt Plains ecoregion, an area of the U.S. known for its fertile agricultural land.[83]
|
40 |
+
|
41 |
+
Topographic relief slopes gently toward the White River and its two primary tributaries, Fall and Eagle creeks. In total, there are about 35 streams in the city, including Indian Creek and Pogue's Run.[84] Major bodies of water include Indian Lake, Geist Reservoir, and Eagle Creek Reservoir.
|
42 |
+
|
43 |
+
Indianapolis is a planned city. On January 11, 1820, the Indiana General Assembly authorized a committee to select a site in central Indiana for the new state capital, appointing Alexander Ralston and Elias Pym Fordham to survey and design a town plan for Indianapolis. Ralston had been a surveyor for the French architect Pierre L'Enfant, assisting him with the plan for Washington, D.C. Ralston's original plan for Indianapolis called for a town of 1 square mile (2.6 km2), near the confluence of the White River and Fall Creek.[85]
|
44 |
+
|
45 |
+
The plan, known as the Mile Square, is bounded by East, West, North, and South streets, centered on a traffic circle, called Monument Circle (originally Governor's Circle), from which Indianapolis's "Circle City" nickname originated.[86] Four diagonal streets radiated a block from Monument Circle: Massachusetts, Virginia, Kentucky, and Indiana avenues.[87] The city's address numbering system begins at the intersection of Washington and Meridian streets.[88] Before its submersion into a sanitary tunnel, Pogue's Run was incorporated into the plan, disrupting the rectilinear street grid to the southeast.
|
46 |
+
|
47 |
+
Noted as one of the finest examples of the City Beautiful movement design in the United States, the Indiana World War Memorial Plaza Historic District began construction in 1921 in downtown Indianapolis.[89][90] The district, a National Historic Landmark, encompasses several examples of neoclassical architecture, including the American Legion, Central Library, and Birch Bayh Federal Building and United States Courthouse. The district is also home to several sculptures and memorials, Depew Memorial Fountain, and open space, hosting many annual civic events.[90]
|
48 |
+
|
49 |
+
After completion of the Soldiers' and Sailors' Monument, an ordinance was passed in 1905 restricting building heights on the traffic circle to 86 ft (26 m) to protect views of the 284 ft (87 m) monument.[91] The ordinance was revised in 1922, permitting buildings to rise to 108 ft (33 m), with an additional 42 ft (13 m) allowable with a series of setbacks.[91] A citywide height restriction ordinance was instituted in 1912, barring structures over 200 ft (61 m).[92] Completed in 1962, the City-County Building was the first skyscraper in the city, surpassing the Soldiers' and Sailors' Monument in height by nearly 100 ft (30 m).[93] A building boom, lasting from 1982 to 1990, saw the construction of six of the city's ten tallest buildings.[94][95] The tallest is Salesforce Tower, completed in 1990 at 811 ft (247 m).[96] Indiana limestone is the signature building material in Indianapolis, widely included in the city's many monuments, churches, academic, government, and civic buildings.[94]
|
50 |
+
|
51 |
+
Compared with similar-sized American cities, Indianapolis is unique in that it contains some 200 farms covering thousands of acres of agricultural land within its municipal boundaries.[97] Equestrian farms and corn and soybean fields interspersed with suburban development are commonplace on the city's periphery, especially in Franklin Township. The stark contrast between Indianapolis's urban neighborhoods and rural villages is a result of the 1970 city-county consolidation, which expanded the city's incorporated boundary to be coterminous with Marion County.[98]
|
52 |
+
|
53 |
+
The city is divided into 99 community areas for statistical purposes, though many smaller neighborhoods exist within them.[99]
|
54 |
+
Indianapolis's neighborhoods are often difficult to define because the city lacks historical ethnic divisions, as in Chicago, or physical boundaries, seen in Pittsburgh and Cincinnati.[100] Instead, most neighborhoods are subtle in their distinctions.[100] The Indianapolis Historic Preservation Commission recognizes several neighborhoods as historic districts, including: Central Court, Chatham Arch, Golden Hill, Herron-Morton Place, Lockerbie Square, Old Northside, Old Southside and Oliver Johnson's Woods. Expansion of the interurban system at the turn of the 20th century facilitated growth of several streetcar suburbs, including Broad Ripple, Irvington, University Heights, and Woodruff Place.[100]
|
55 |
+
|
56 |
+
The post–World War II economic expansion and subsequent suburbanization had a profound impact on the physical development of the city's neighborhoods. From 1950 to 1970, 97,000 housing units were built in Marion County.[100] Most of this new construction occurred outside Center Township, expediting out-migration from the city's urban neighborhoods to suburban areas, such as Castleton, Eagledale, and Nora. Between 1950 and 1990, over 155,000 residents left Center Township, resulting in urban blight and disinvestment.[100] Since the 2000s, Downtown Indianapolis and surrounding neighborhoods have seen increased reinvestment attributed to nationwide demographic trends, driven by empty nesters and millennials.[101] By 2020, Downtown is projected to have 30,000 residential units, compared to 18,300 in 2010.[102]
|
57 |
+
|
58 |
+
Renewed interest in urban living has been met with some dispute regarding gentrification and affordable housing.[103][104][105] According to a Center for Community Progress report, neighborhoods like Cottage Home and Fall Creek Place have experienced measurable gentrification since 2000.[106] The North Meridian Street Historic District is among the most affluent urban neighborhoods in the U.S., with a mean household income of $102,599 in 2017.[107]
|
59 |
+
|
60 |
+
Indianapolis has a humid continental climate (Köppen climate classification Dfa), but can be considered a borderline humid subtropical climate (Köppen: Cfa) using the −3 °C (27 °F) isotherm. It experiences four distinct seasons.[108] The city is in USDA hardiness zone 6a.[109]
|
61 |
+
|
62 |
+
Typically, summers are hot, humid and wet. Winters are generally cold with moderate snowfall. The July daily average temperature is 75.4 °F (24.1 °C). High temperatures reach or exceed 90 °F (32 °C) an average of 18 days each year,[110] and occasionally exceed 95 °F (35 °C). Spring and autumn are usually pleasant, if at times unpredictable; midday temperature drops exceeding 30 °F or 17 °C are common during March and April, and instances of very warm days (80 °F or 27 °C) followed within 36 hours by snowfall are not unusual during these months. Winters are cold, with an average January temperature of 28.1 °F (−2.2 °C). Temperatures dip to 0 °F (−18 °C) or below an average of 4.7 nights per year.[110]
|
63 |
+
|
64 |
+
The rainiest months occur in the spring and summer, with slightly higher averages during May, June, and July. May is typically the wettest, with an average of 5.05 inches (12.8 cm) of precipitation.[110] Most rain is derived from thunderstorm activity; there is no distinct dry season, although occasional droughts occur. Severe weather is not uncommon, particularly in the spring and summer months; the city experiences an average of 20 thunderstorm days annually.[111]
|
65 |
+
|
66 |
+
The city's average annual precipitation is 42.4 inches (108 cm), with snowfall averaging 25.9 inches (66 cm) per season. Official temperature extremes range from 106 °F (41 °C), set on July 14, 1936,[112] to −27 °F (−33 °C), set on January 19, 1994.[112][113]
|
67 |
+
|
68 |
+
The U.S. Census Bureau considers Indianapolis as two entities: the consolidated city and the city's remainder, or balance. The consolidated city is coterminous with Marion County, except the independent municipalities of Beech Grove, Lawrence, Southport, and Speedway.[121] The city's balance excludes the populations of ten semi-autonomous municipalities that are included in totals for the consolidated city.[81] These are Clermont, Crows Nest, Homecroft, Meridian Hills, North Crows Nest, Rocky Ripple, Spring Hill, Warren Park, Williams Creek, and Wynnedale.[121][3] An eleventh town, Cumberland, is partially included.[122][123] As of 2018[update], the city's estimated consolidated population was 876,862 and its balance was 867,125.[13][124] As of 2010[update], the city's population density was 2,270 people per square mile (880/km2).[125] Indianapolis is the most populous city in Indiana, containing nearly 13% of the state's total population.[81]
|
69 |
+
|
70 |
+
The Indianapolis metropolitan area, officially the Indianapolis–Carmel–Anderson metropolitan statistical area (MSA), consists of Marion County and the surrounding counties of Boone, Brown, Hamilton, Hancock, Hendricks, Johnson, Madison, Morgan, Putnam, and Shelby. As of 2018[update], the metropolitan area's population was 2,048,703, the most populous in Indiana and home to 30% of the state's residents.[15][126] With a population of 2,431,361, the larger Indianapolis–Carmel–Muncie combined statistical area (CSA) covers 18 counties, home to 36% of Indiana residents.[16][127] Indianapolis is also situated within the Great Lakes Megalopolis, the largest of 11 megaregions in the U.S.
|
71 |
+
|
72 |
+
According to the U.S. Census of 2010, 97.2% of the Indianapolis population was reported as one race: 61.8% White, 27.5% Black or African American, 2.1% Asian (0.4% Burmese, 0.4% Indian, 0.3% Chinese, 0.3% Filipino, 0.1% Korean, 0.1% Vietnamese, 0.1% Japanese, 0.1% Thai, 0.1% other Asian); 0.3% American Indian, and 5.5% as other. The remaining 2.8% of the population was reported as multiracial (two or more races).[128] The city's Hispanic or Latino community comprised 9.4% of the city's population in the 2010 U.S. Census: 6.9% Mexican, 0.4% Puerto Rican, 0.1% Cuban, and 2% as other.[128]
|
73 |
+
|
74 |
+
As of 2010[update], the median age for Indianapolis was 33.7 years. Age distribution for the city's inhabitants was 25% under the age of 18; 4.4% were between 18 and 21; 16.3% were age 21 to 65; and 13.1% were age 65 or older.[128] For every 100 females, there were 93 males. For every 100 females age 18 and over, there were 90 males.[129]
|
75 |
+
|
76 |
+
The U.S. Census for 2010 reported 332,199 households in Indianapolis, with an average household size of 2.42 and an average family size of 3.08.[128] Of the total households, 59.3% were family households, with 28.2% of these including the family's own children under the age of 18; 36.5% were husband-wife families; 17.2% had a female householder (with no husband present) and 5.6% had a male householder (with no wife present). The remaining 40.7% were non-family households.[128] As of 2010[update], 32% of the non-family households included individuals living alone, 8.3% of these households included individuals age 65 years of age or older.[128]
|
77 |
+
|
78 |
+
The U.S. Census Bureau's 2007–2011 American Community Survey indicated the median household income for Indianapolis city was $42,704, and the median family income was $53,161.[130] Median income for males working full-time, year-round, was $42,101, compared to $34,788 for females. Per capita income for the city was $24,430, with 14.7% of families and 18.9% of the city's total population living below the poverty line (28.3% were under the age of 18 and 9.2% were age 65 or older).[130]
|
79 |
+
|
80 |
+
As of 2015[update], the Indianapolis metropolitan area had the 18th highest percentage of LGBT residents in the U.S., with 4.2% of residents identifying as gay, lesbian, bisexual, or transgender.[131]
|
81 |
+
|
82 |
+
Of the 42.42% of the city's residents who identify as religious, Roman Catholics make up the largest group, at 11.31%.[132] Baptists form the second-largest group, at 10.31%, followed by Methodists at 4.97%. Presbyterians make up 2.13% of the city's religiously affiliated population, followed by Pentecostals and Lutherans. Another 8.57% are affiliated with other Christian faiths.[132] Among the religiously affiliated, 0.32% identified with Eastern religions, 0.68% as Jewish, and 0.29% as Muslim.[132] According to the nonpartisan and nonprofit Public Religion Research Institute's American Values Atlas, 22% of residents identify as religiously "unaffiliated," consistent with the national average of 22.7%.[133]
|
83 |
+
|
84 |
+
Indianapolis is the seat of the Roman Catholic Archdiocese of Indianapolis. Joseph W. Tobin, C.Ss.R., served as archbishop from 2012 to 2017 and was elevated to cardinal in November 2016. On June 13, 2017, Pope Francis announced Charles C. Thompson would replace Tobin, who was reassigned to the Roman Catholic Archdiocese of Newark in January 2017.[134] Thompson is the youngest American archbishop.[135] The archdiocese also operates Bishop Simon Bruté College Seminary, affiliated with Marian University, while the Christian Theological Seminary is affiliated with the Christian Church (Disciples of Christ).
|
85 |
+
|
86 |
+
Indianapolis is the seat of the Episcopal Diocese of Indianapolis, based at Christ Church Cathedral. The Indiana-Kentucky Synod of the Evangelical Lutheran Church in America and the Indiana Conference of the United Methodist Church are also based in the city.
|
87 |
+
|
88 |
+
In 2015, the Indianapolis metropolitan area had a gross domestic product (GDP) of $134 billion. The top five industries were: finance, insurance, real estate, rental, and leasing ($30.7B), manufacturing ($30.1B), professional and business services ($14.3B), educational services, health care, and social assistance ($10.8B), and wholesale trade ($8.1B). Government, if it had been a private industry, would have ranked fifth, generating $10.2 billion.[21] Indianapolis is considered a "sufficiency" world city.[137]
|
89 |
+
|
90 |
+
Compared to Indiana as a whole, the Indianapolis metropolitan area has a lower proportion of manufacturing jobs and a higher concentration of jobs in wholesale trade; administrative, support, and waste management; professional, scientific, and technical services; and transportation and warehousing.[138] The city's major exports include pharmaceuticals, motor vehicle parts, medical equipment and supplies, engine and power equipment, and aircraft products and parts.[19] According to the Bureau of Labor Statistics, the region's unemployment rate was 2.8 percent in May 2019.[139]
|
91 |
+
|
92 |
+
As of 2020[update], three Fortune 500 companies were based in the city: health insurance company Anthem Inc. (33);[140] pharmaceutical company Eli Lilly (123);[141] and Simon Property Group (496), the largest real estate investment trust in the U.S.[142] Columbus, Indiana-based Cummins (128) opened its Global Distribution Headquarters in downtown Indianapolis in 2017.[143][144] The city is home to three Fortune 1000 companies: hydrocarbon manufacturer Calumet Specialty Products Partners (604); automotive transmission manufacturer Allison Transmission (890); and retailer Finish Line (972). Other companies based in the Indianapolis metropolitan area include: real estate investment trust Duke Realty;[145] media conglomerate Emmis Communications;[146] retailer Lids;[147] financial services holding company OneAmerica;[148] airline holding company Republic Airways;[149] contract research corporation Envigo; and fast food chains Noble Roman's and Steak 'n Shake.
|
93 |
+
|
94 |
+
As in many Midwestern cities, recent deindustrialization trends have had a significant impact on the local economy. Once home to 60 automakers, Indianapolis rivaled Detroit as a center of automobile manufacturing in the early 20th century.[60] Between 1990 and 2012, approximately 26,900 manufacturing jobs were lost in the city, including the automotive plant closures of Chrysler, Ford, and General Motors.[150] In 2016, Carrier Corporation announced the closure of its Indianapolis plant, moving 1,400 manufacturing jobs to Mexico.[151] Since 1915, Rolls-Royce Holdings has had operations in Indianapolis.[152] It is the third largest manufacturing employer and thirteenth largest employer overall in the city, with a workforce of 4,300 in aircraft engine development and manufacturing.[153]
|
95 |
+
|
96 |
+
Biotechnology, life sciences and health care are major sectors of Indianapolis's economy. As of 2016[update], Eli Lilly and Company was the largest private employer in the city, with more than 11,000 workers.[154] The North American headquarters for Roche Diagnostics and Dow AgroSciences are also in the city.[155] A 2014 report by the Battelle Memorial Institute and Biotechnology Industry Organization indicated that the Indianapolis–Carmel–Anderson MSA was the only U.S. metropolitan area to have specialized employment concentrations in all five bioscience sectors evaluated in the study: agricultural feedstock and chemicals; bioscience-related distribution; drugs and pharmaceuticals; medical devices and equipment; and research, testing, and medical laboratories.[156] The regional health care providers of Community Health Network, Eskenazi Health, Franciscan Health, Indiana University Health, and St. Vincent Health have a combined workforce of 43,700.[157]
|
97 |
+
|
98 |
+
The city's central location and extensive highway and rail infrastructure have positioned Indianapolis as an important logistics center, home to 1,500 distribution firms employing some 100,000 workers.[158][159][160] As home to the second largest FedEx Express hub in the world, Indianapolis International Airport ranks as the sixth busiest U.S. airport in terms of air cargo transport, handling over 1 million tons and employing 6,600 in 2015.[161][162] Indianapolis is a hub for CSX Transportation, home to its division headquarters, an intermodal terminal, and classification yard (in the suburb of Avon).[163] Amtrak's Beech Grove Shops, in the enclave of Beech Grove, serve as its primary heavy maintenance and overhaul facility, while the Indianapolis Distribution Center is the company's largest material and supply terminal.[164][165]
|
99 |
+
|
100 |
+
The hospitality industry is an increasingly vital sector of the Indianapolis economy. According to Visit Indy, 28.8 million visitors generated $5.4 billion in 2017, the seventh straight year of record growth.[166] Indianapolis has long been a sports tourism destination, but has more recently relied on conventions.[167] The Indiana Convention Center (ICC) and Lucas Oil Stadium are considered mega convention center facilities, with a combined 750,000 square feet (70,000 m2) of exhibition space.[168] The ICC is connected to 12 hotels containing 4,700 hotel rooms, the most of any U.S. convention center.[169] In 2008, the facility hosted 42 national conventions with an attendance of 317,815; in 2014, it hosted 106 for an attendance of 635,701.[167] Since 2003, Indianapolis has hosted Gen Con, one of the largest gaming conventions in North America.[170]
|
101 |
+
|
102 |
+
According to real estate tracking firm CBRE Group, Indianapolis ranks among the fastest high-tech job growth areas in the U.S.[171][172] The metropolitan area is home to 28,500 information technology-related jobs at such companies as Angie's List, Appirio, Formstack, Genesys, Hubstaff,[173] Infosys,[174] Ingram Micro, and Salesforce Marketing Cloud.[175][176]
|
103 |
+
|
104 |
+
Major shopping malls in the city include Castleton Square, Circle Centre, The Fashion Mall at Keystone, Glendale Town Center, Lafayette Square, and Washington Square.
|
105 |
+
|
106 |
+
Seven cultural districts have been designated to capitalize on cultural institutions within historically significant neighborhoods unique to the city's heritage. These include Broad Ripple Village, Canal and White River State Park, Fountain Square, Indiana Avenue, Market East, Mass Ave, and Wholesale.[178][179]
|
107 |
+
|
108 |
+
After 12 years of planning and six years of construction, the Indianapolis Cultural Trail officially opened in 2013.[180] The $62.5 million public-private partnership, spurred by an initial donation of $15 million by philanthropists Gene B. Glick and Marilyn Glick, resulted in 8 miles (13 km) of urban bike and pedestrian corridors linking the city's cultural districts with neighborhoods, IUPUI, and every significant arts, cultural, heritage, sports and entertainment venue downtown.[181][182][183][184][185]
|
109 |
+
|
110 |
+
Indianapolis is home to dozens of annual festivals and events showcasing local culture. Notable events include the "Month of May" (a series of celebrations leading to the Indianapolis 500), Indiana Black Expo, Indiana State Fair, Indy Pride Festival, and Historic Irvington Halloween Festival.
|
111 |
+
|
112 |
+
Founded in 1883, the Indianapolis Museum of Art (IMA) is the ninth oldest[186][note 1] and eighth largest encyclopedic art museum in the U.S.[188][note 2] The permanent collection has over 54,000 works, including African, American, Asian, and European pieces.[189] In addition to its collections, the Newfields campus consists of The Virginia B. Fairbanks Art & Nature Park: 100 Acres; Oldfields, a restored house museum and estate once owned by Josiah K. Lilly, Jr.; and restored gardens and grounds originally designed by Percival Gallagher of the Olmsted Brothers firm.[190] The IMA also owns the Miller House, a Mid-century modern home designed by Eero Saarinen in Columbus, Indiana.[191] The museum's holdings demonstrate the institution's emphasis on the connections among art, design, and the natural environment.[187]
|
113 |
+
|
114 |
+
The Indianapolis Art Center, in Broad Ripple Village, was founded in 1934 under the Works Progress Administration. The center opened its Michael Graves-designed building in 1996, which includes three public art galleries, 11 studios, a library, and an auditorium. Opened in 2005, the center's ARTSPARK sculpture garden covers 12.5 acres (5.1 ha) along the White River.[192] The Eiteljorg Museum of American Indians and Western Art opened in 1989 at White River State Park as the only Native American art museum in the Midwest.[193] Indiana University – Purdue University Indianapolis (IUPUI) contains the Herron School of Art and Design. Established in 1902, the school's first core faculty included Impressionist painters of the Hoosier Group: T. C. Steele, J. Ottis Adams, William Forsyth, Richard Gruelle, and Otto Stark. The university's public art collection is extensive, with more than 30 works. Other public works can be found in the Eskenazi Health Art Collection and the Indiana Statehouse Public Art Collection.
|
115 |
+
|
116 |
+
Most of Indianapolis's notable performing arts venues are in the Mass Ave cultural district and other locations in the downtown area. The Indiana Theatre opened as a movie palace on Washington Street in 1927 and houses the Indiana Repertory Theatre, a regional repertory theatre. Located on Monument Circle since 1916, the 1,786-seat Hilbert Circle Theatre is the home of the Indianapolis Symphony Orchestra (ISO). Founded in 1930, the ISO performed 180 concerts to over 275,000 guests during the 2015–2016 season, generating a record $8.5 million in ticket sales.[194] The Indianapolis Opera, founded in 1975, maintains a collaborative relationship with the ISO. The nonprofit Phoenix Theatre, which opened a new Cultural Centre in 2018, focuses on contemporary theatrical productions.[195]
|
117 |
+
|
118 |
+
In 1927, the Madam Walker Legacy Center opened in the heart of the city's African-American neighborhood on Indiana Avenue.[197] The theater is named for Sarah Breedlove, or Madam C. J. Walker, an African American entrepreneur, philanthropist, and activist who began her beauty empire in Indianapolis. Indiana Avenue was home to a notable jazz scene from the 1920s through the 1960s, producing greats such as David Baker, Slide Hampton, Freddie Hubbard, J. J. Johnson, James Spaulding, and the Montgomery Brothers (Buddy, Monk, and Wes).[198] Wes Montgomery is considered one of the most influential jazz guitarists of all time,[198][199] and is credited with popularizing the "Naptown Sound."[200]
|
119 |
+
|
120 |
+
Mass Ave is home to the Old National Centre and the Athenæum (Das Deutsche Haus). Old National Centre at the Murat Shrine is the oldest stage house in Indianapolis, opened in 1909.[201] The building is a prime example of Moorish Revival architecture and features a 2,600-seat performing arts theatre, 1,800-seat concert hall, and 600-seat multi-functional room, hosting approximately 300 public and private events throughout the year.[201] The Athenæum houses the American Cabaret Theater and the Young Actors Theater.
|
121 |
+
|
122 |
+
Other notable venues include the Indianapolis Artsgarden, a performing arts center suspended over the intersection of Washington and Illinois streets, Clowes Memorial Hall on the Butler University campus, Melody Inn in Butler-Tarkington, Rivoli Theater, The Vogue in Broad Ripple, and The Emerson Theater in Little Flower.
|
123 |
+
|
124 |
+
Indianapolis is home to Bands of America (BOA), a nationwide organization of high school marching, concert, and jazz bands, and the headquarters for Drum Corps International (DCI), a professional drum and bugle corps association.[202] Annual music events include the International Violin Competition of Indianapolis, Midwest Music Summit, and Indy Jazz Fest. The Heartland Film Festival, Indianapolis International Film Festival, Indianapolis Jewish Film Festival, Indianapolis Theatre Fringe Festival, and the Indianapolis Alternative Media Festival are annual events held in the city.
|
125 |
+
|
126 |
+
Indianapolis was at the center of the Golden Age of Indiana Literature from 1870 to 1920.[203] Several notable poets and writers based in the city achieved national prominence and critical acclaim during this period, including James Whitcomb Riley, Booth Tarkington, and Meredith Nicholson.[20] In A History of Indiana Literature, Arthur W. Shumaker remarked on the era's influence: "It was the age of famous men and their famous books. In it Indiana, and particularly Indianapolis, became a literary center which in many ways rivaled the East."[204] A 1947 study found that Indiana authors ranked second to New York in the number of bestsellers produced in the previous 40 years.[203] Located in Lockerbie Square, the James Whitcomb Riley Museum Home has been a National Historic Landmark since 1962.
|
127 |
+
|
128 |
+
Perhaps the city's most famous 20th-century writer was Kurt Vonnegut, known for his darkly satirical and controversial bestselling novel Slaughterhouse-Five (1969). The Kurt Vonnegut Museum and Library opened in 2010 downtown.[205] Vonnegut became known for including at least one character in his novels from Indianapolis.[206] Upon returning to the city in 1986, Vonnegut acknowledged the influence the city had on his writings:
|
129 |
+
|
130 |
+
All my jokes are Indianapolis. All my attitudes are Indianapolis. My adenoids are Indianapolis. If I ever severed myself from Indianapolis, I would be out of business. What people like about me is Indianapolis.[206][205]
|
131 |
+
|
132 |
+
Indianapolis is home to bestselling young adult fiction writer John Green, known for his critically acclaimed 2012 novel The Fault in Our Stars, set in the city.[207]
|
133 |
+
|
134 |
+
The Children's Museum of Indianapolis is the largest of its kind in the world, offering 433,000 square feet (40,200 m2) of exhibit space.[208] The museum holds a collection of over 120,000 artifacts, including the Broad Ripple Park Carousel, a National Historic Landmark.[209] Because of its leadership and innovations, the museum is considered a world leader in its field.[210] Child and Parents magazines have both ranked it the best children's museum in the U.S.[211] The museum is one of the city's most popular attractions, with 1.2 million visitors in 2014.[212]
|
135 |
+
|
136 |
+
The Indianapolis Zoo is home to nearly 1,400 animals of 214 species and 31,000 plants, including many threatened and endangered species.[213][214] The zoo is a leader in animal conservation and research, recognized for its biennial Indianapolis Prize designation. It is the only American zoo accredited as a zoo, aquarium, and zoological garden by the Association of Zoos and Aquariums.[215] It is the largest privately funded zoo in the U.S. and one of the city's most visited attractions, with 1.2 million guests in 2014.[28][212]
|
137 |
+
|
138 |
+
The Indianapolis Motor Speedway Museum exhibits an extensive collection of auto racing memorabilia showcasing various motorsports and automotive history.[216][217] The museum is the permanent home of the Borg-Warner Trophy, presented to Indianapolis 500 winners.[25] Daily grounds and track tours are also based at the museum.[217] The NCAA Hall of Champions opened in 2000 at White River State Park housing collegiate athletic artifacts and interactive exhibits covering all 23 NCAA-sanctioned sports.[218][219]
|
139 |
+
|
140 |
+
Indianapolis is home to several centers commemorating Indiana history. These include the Indiana Historical Society, Indiana State Library and Historical Bureau, Indiana State Museum, and Indiana Medical History Museum. Indiana Landmarks, the largest private statewide historic preservation organization in the U.S., is also in the city.[220] The Benjamin Harrison Presidential Site, in the Old Northside Historic District, is open for daily tours and includes archives and memorabilia from the 23rd President of the United States. President Harrison is buried about 3 miles (4.8 km) north of the site at Crown Hill Cemetery, listed on the National Register of Historic Places. Other notable burials include those of three U.S. Vice Presidents and the notorious American gangster John Dillinger.
|
141 |
+
|
142 |
+
Two museums and several memorials in the city commemorate armed forces or conflict, including the Colonel Eli Lilly Civil War Museum at the Soldiers' and Sailors' Monument and Indiana World War Memorial Military Museum at the Indiana World War Memorial Plaza. Outside of Washington, D.C., Indianapolis contains the largest collection of monuments dedicated to veterans and war casualties in the nation.[30][31] Other notable sites are the Confederate Soldiers and Sailors Monument, Crown Hill National Cemetery, the Medal of Honor Memorial, Project 9/11 Indianapolis, and the USS Indianapolis National Memorial.
|
143 |
+
|
144 |
+
Nearly 1.5 miles (2.4 km) of the former Indiana Central Canal—now known as the Canal Walk—link several downtown museums, memorials, and public art pieces. Flanked by walking and bicycling paths, the Canal Walk also offers gondola rides, pedal boat, kayak, and surrey rentals. The Indiana Central Canal has been recognized by the American Water Works Association as an American Water Landmark since 1971.[221]
|
145 |
+
|
146 |
+
Indianapolis has an emerging food scene as well as established eateries.[222] Founded in 1821 as the city's public market, the Indianapolis City Market has served the community from its current building since 1886. Prior to World War II, the City Market and neighboring Tomlinson Hall (since demolished) were home to meat and vegetable vendors. As consumer habits evolved and residents moved from the central city, the City Market transitioned from a traditional marketplace to a food court, a function it retains today.[223]
|
147 |
+
|
148 |
+
Opened in 1902, St. Elmo Steak House is well known for its signature shrimp cocktail, named by the Travel Channel as the "world's spiciest food". In 2012, it was recognized by the James Beard Foundation as one of "America's Classics".[224] The Slippery Noodle Inn, a blues bar and restaurant, is the oldest continuously operating tavern in Indiana, having opened in 1850.[225] The Jazz Kitchen, opened in 1994, was recognized in 2011 by OpenTable as one of the "top 50 late night dining hotspots" in the U.S.[226]
|
149 |
+
|
150 |
+
Distinctive local dishes include pork tenderloin sandwiches[227] and sugar cream pie, the latter being the unofficial state pie of Indiana.[228] The beef Manhattan, invented in Indianapolis, can also be found on restaurant menus throughout the city and region.[229]
|
151 |
+
|
152 |
+
In 2016, Condé Nast Traveler named Indianapolis the "most underrated food city in the U.S.," while ranking Milktooth as one of the best restaurants in the world.[230][231] Food & Wine called Indianapolis the "rising star of the Midwest," recognizing Milktooth, Rook, Amelia's, and Bluebeard, all in Fletcher Place.[232][233] Several Indianapolis chefs and restaurateurs have been semifinalists in the James Beard Foundation Awards in recent years.[234][235] Microbreweries are quickly becoming a staple in the city, increasing fivefold since 2009.[236] There are now about 50 craft brewers in Indianapolis, with Sun King Brewing being the largest.[237]
|
153 |
+
|
154 |
+
For some time, Indianapolis was known as the "100 Percent American City" for its racial and ethnic homogeneity.[238] Historically, these factors, as well as low taxes and wages, provided chain restaurants a relatively stable market to test dining preferences before expanding nationwide. As a result, the Indianapolis metropolitan area had the highest concentration of chain restaurants per capita of any market in the U.S. in 2008, with one chain restaurant for every 1,459 people—44% higher than the national average.[239] In recent years, immigrants have opened some 800 ethnic restaurants.[238]
|
155 |
+
|
156 |
+
Urban agriculture has become increasingly prevalent throughout the city in an effort to alleviate food deserts. In 2018, the Indy Food Council reported a 272% increase in the number of community and urban gardens between 2011 and 2016.[240]
|
157 |
+
|
158 |
+
Two major league sports teams are based in Indianapolis: the Indianapolis Colts of the National Football League (NFL) and the Indiana Pacers of the National Basketball Association (NBA).
|
159 |
+
|
160 |
+
Originally the Baltimore Colts, the franchise has been based in Indianapolis since relocating in 1984. The Colts' tenure in Indianapolis has produced 11 division championships, two conference championships, and two Super Bowl appearances. Quarterback Peyton Manning led the team to win Super Bowl XLI in the 2006 NFL season. Lucas Oil Stadium replaced the team's first home, the RCA Dome, in 2008.
|
161 |
+
|
162 |
+
Founded in 1967, the Indiana Pacers began in the American Basketball Association (ABA), joining the NBA when the leagues merged in 1976. Prior to joining the NBA, the Pacers won three division titles and three championships (1970, 1972, 1973). Since the merger, the Pacers have won one conference title and six division titles, most recently in 2014.
|
163 |
+
|
164 |
+
Founded in 2000, the Indiana Fever of the Women's National Basketball Association (WNBA) have won three conference titles and one championship, in 2012. The Fever and Pacers share Bankers Life Fieldhouse, which replaced Market Square Arena in 1999. Established in 1902, the Indianapolis Indians of the International League (AAA) are the second-oldest minor league franchise in American professional baseball.[241] The Indians have won 25 division titles, 14 league titles, and seven championships, most recently in 2000. Since 1996, the team has played at Victory Field, which replaced Bush Stadium. Of the 160 teams that make up Minor League Baseball, the Indians had the highest attendance during the 2016 season.[242] Established in 2013, Indy Eleven of the United Soccer League (USL) plays at Lucas Oil Stadium. Indy Fuel of the ECHL was founded in 2014 and plays at Indiana Farmers Coliseum.
|
165 |
+
|
166 |
+
Butler University and Indiana University – Purdue University Indianapolis (IUPUI) are NCAA Division I schools based in the city. The Butler Bulldogs compete in the Big East Conference, except for Butler Bulldogs football, which plays in the Pioneer Football League at the FCS level. The Butler Bulldogs men's basketball team was the runner-up in the 2010 and 2011 NCAA Men's Division I Basketball Championship Games. The IUPUI Jaguars compete in the Horizon League.
|
167 |
+
|
168 |
+
Traditionally, Indianapolis's Hinkle Fieldhouse was the hub for Hoosier Hysteria, a general excitement for the game of basketball throughout the state, specifically the Indiana High School Boys Basketball Tournament.[243] Hinkle, a National Historic Landmark, was opened in 1928 as the world's largest basketball arena, with seating for 15,000.[244] It is regarded as "Indiana's Basketball Cathedral". Perhaps the most notable game was the 1954 state championship, which inspired the critically acclaimed 1986 film, Hoosiers.[245]
|
169 |
+
|
170 |
+
Indianapolis has been called the "Amateur Sports Capital of the World".[46][246] The National Collegiate Athletic Association (NCAA), the main governing body for U.S. collegiate sports, and the National Federation of State High School Associations are based in Indianapolis. The city is home to three NCAA athletic conferences: the Horizon League (Division I); the Great Lakes Valley Conference (Division II); and the Heartland Collegiate Athletic Conference (Division III). Indianapolis is also home to three national sport governing bodies, as recognized by the United States Olympic Committee: USA Gymnastics; USA Diving; and USA Track & Field.[247]
|
171 |
+
|
172 |
+
Indianapolis hosts numerous sporting events annually, including the Circle City Classic (1983–present), NFL Scouting Combine (1987–present), and Big Ten Football Championship Game (2011–present). Indianapolis is tied with New York City for having hosted the second most NCAA Men's Division I Basketball Championships (1980, 1991, 1997, 2000, 2006, 2010, and 2015).[248] The city will host the men's Final Four next in 2021.[249] The city has also hosted three NCAA Women's Division I Basketball Championships (2005, 2011, and 2016). Notable past events include the NBA All-Star Game (1985), Pan American Games X (1987), US Open Series Indianapolis Tennis Championships (1988–2009), World Artistic Gymnastics Championships (1991), WrestleMania VIII (1992), World Rowing Championships (1994), World Police and Fire Games (2001), FIBA Basketball World Cup (2002), and Super Bowl XLVI (2012).
|
173 |
+
|
174 |
+
Indianapolis is home to the OneAmerica 500 Festival Mini-Marathon, the largest half marathon and seventh largest running event in the U.S.[250] The mini-marathon is held the first weekend of May as part of the 500 Festival, leading up to the Indianapolis 500. As of 2013[update], it had sold out for 12 consecutive years, with 35,000 participants.[251] Held in autumn, the Monumental Marathon is also among the largest in the U.S., with nearly 14,000 entrants in 2015.[252]
|
175 |
+
|
176 |
+
Indianapolis is a major center for motorsports. Two auto racing sanctioning bodies are headquartered in the city (INDYCAR and United States Auto Club) along with more than 500 motorsports companies and racing teams, employing some 10,000 people in the region.[253] Indianapolis is so well connected with auto racing that it has inspired the name "Indy car," used for both the competition and type of car used in it.[254]
|
177 |
+
|
178 |
+
Since 1911, Indianapolis Motor Speedway (IMS) (in the enclave of Speedway, Indiana) has been the site of the Indianapolis 500, an open-wheel automobile race held annually on Memorial Day weekend. Considered part of the Triple Crown of Motorsport, the Indianapolis 500 is the world's largest single-day sporting event, with more than 257,000 permanent seats.[25] Since 1994, IMS has hosted one of NASCAR's best-attended events, the Monster Energy Cup Series Brickyard 400.[255] IMS has also hosted the NASCAR Xfinity Series Lilly Diabetes 250 since 2012 and the IndyCar Series Grand Prix of Indianapolis since 2014.
|
179 |
+
|
180 |
+
Lucas Oil Raceway, in nearby Brownsburg, is home to the National Hot Rod Association (NHRA) U.S. Nationals, the most prestigious drag racing event in the world, held annually each Labor Day weekend.[256]
|
181 |
+
|
182 |
+
Indy Parks and Recreation maintains 211 parks covering 11,254 acres (4,554 ha) and some 99 miles (159 km) of trails and greenways.[257] Eagle Creek Park is the largest and most visited park in the city and ranks among the largest municipal parks in the U.S., covering 4,766 acres (1,929 ha).[258] Fishing, sailing, kayaking, canoeing, and swimming are popular activities at Eagle Creek Reservoir. Notable trails and greenways include Pleasant Run Greenway and the Monon Trail.[259] The Monon is a popular rail trail and part of the United States Bicycle Route System, drawing some 1.3 million people annually.[260][261] There are 13 public golf courses in the city.[262]
|
183 |
+
|
184 |
+
Military Park was established as the city's first public park in 1852.[263] By the 20th century, the city enlisted landscape architect George Kessler to conceive a framework for Indianapolis's modern parks system.[264] Kessler's 1909 Indianapolis Park and Boulevard Plan linked notable parks, such as Brookside, Ellenberger, and Garfield, with a system of parkways following the city's waterways.[265] In 2003, the system's 3,474 acres (1,406 ha) were added to the National Register of Historic Places.[266]
|
185 |
+
|
186 |
+
Marion County is home to two of Indiana's 25 state parks: Fort Harrison in Lawrence and White River downtown. Fort Harrison is managed by the Indiana Department of Natural Resources. White River is owned and operated by the White River State Park Development Commission, a quasi-governmental agency.[267] Encompassing 250 acres (100 ha), White River is the city's major urban park, home to the Indianapolis Zoo and White River Gardens.[213] Indianapolis lies about 50 miles (80 km) north of two state forests, Morgan–Monroe and Yellowwood, and one national forest, Hoosier. Crown Hill Cemetery, the third largest private cemetery in the U.S., covers 555 acres (225 ha) on the city's north side and is home to more than 250 species of trees and shrubs comprising one of the largest old-growth forests in the Midwest.[268][269]
|
187 |
+
|
188 |
+
According to the Trust for Public Land's 2017 ParkScore Index, Indianapolis tied for last in public park accessibility among the 100 largest U.S. cities evaluated, with some 68% of residents underserved. The city's large land area and low public funding contributed to the ranking.[270]
|
189 |
+
|
190 |
+
Indianapolis has a consolidated city-county government, a status it has held since 1970 under Indiana Code's Unigov provision. Many functions of the city and county governments are consolidated, though some remain separate.[3] The city has a strong mayor–council form of government.
|
191 |
+
|
192 |
+
The executive branch is headed by an elected mayor, who serves as the chief executive of both the city and Marion County. Joe Hogsett, a Democrat, is the 49th mayor of Indianapolis. The mayor appoints deputy mayors, department heads, and members of various boards and commissions. The City-County Council is the legislative body and consists of 25 members, all of whom represent geographic districts. The council has the exclusive power to adopt budgets, levy taxes, and make appropriations. It can also enact, repeal, or amend ordinances, and make appointments to certain boards and commissions. According to Moody's, the city maintains an Aaa bond credit rating, with an annual budget of $1.1 billion.[271][272] The judicial branch consists of a circuit court, a superior court with four divisions and 32 judges, and a small claims court.[3] The three branches, along with most local government departments, are based in the City-County Building.
|
193 |
+
|
194 |
+
As the state capital, Indianapolis is the seat of Indiana's state government. The city has served in that role since the capital was moved from Corydon in 1825. The Indiana Statehouse, located downtown, houses the executive, legislative, and judicial branches of state government, including the offices of the Governor of Indiana and Lieutenant Governor of Indiana, the Indiana General Assembly, and the Indiana Supreme Court. Most state departments and agencies are in Indiana Government Centers North and South. The Indiana Governor's Residence is on Meridian Street in the Butler–Tarkington neighborhood, about 5 miles (8.0 km) north of downtown.
|
195 |
+
|
196 |
+
Most of Indianapolis is within Indiana's 7th congressional district, represented by André Carson (D–Indianapolis), while the northern fifth is part of Indiana's 5th congressional district, represented by Susan Brooks (R–Carmel).[273] Federal field offices are in the Birch Bayh Federal Building and United States Courthouse (which houses the United States District Court for the Southern District of Indiana) and the Minton-Capehart Federal Building, both downtown. The Defense Finance and Accounting Service, an agency of the U.S. Department of Defense, is headquartered in nearby Lawrence.
|
197 |
+
|
198 |
+
Indianapolis Emergency Medical Services is the largest provider of pre-hospital medical care in the city, responding to 95,000 emergency dispatch calls annually.[274] The agency's coverage area includes six townships within the city (Center, Franklin, Lawrence, Perry, Warren, and Washington) and the town of Speedway. As of 2019[update], Daniel O'Donnell, MD, is the EMS chief.[275]
|
199 |
+
|
200 |
+
The Indianapolis Fire Department (IFD) provides fire protection services as the primary emergency response agency for 278 square miles (720 km2) of Marion County. IFD provides automatic and mutual aid to the excluded municipalities of Beech Grove, Lawrence, and Speedway, as well as Decatur, Pike, and Wayne townships, which have retained their own fire departments. The fire district comprises seven geographic battalions with 44 fire stations, dual-staffing a forty-fifth station with the City of Lawrence Fire Department.[3] As of 2014[update], 1,205 sworn firefighters responded to nearly 100,000 incidents annually.[276] As of 2018[update], Ernest Malone was the fire chief.[277]
|
201 |
+
|
202 |
+
Indianapolis Metropolitan Police Department (IMPD) is the primary law enforcement agency for the city of Indianapolis. IMPD's jurisdiction covers Marion County, with the exceptions of Beech Grove, Lawrence, Southport, Speedway, and the Indianapolis International Airport, which is served by the Indianapolis Airport Authority Police Department.[278] IMPD was established in 2007 through a merger between the Indianapolis Police Department and the Marion County Sheriff's Office Law Enforcement Division.[279] The Marion County Sheriff's Office maintains and operates Marion County Jails I and II. In 2016, IMPD operated six precincts with 1,640 sworn police personnel and 200 civilian employees.[3] As of 2020[update], Randal Taylor was the chief of police.[280]
|
203 |
+
|
204 |
+
According to the FBI's 2017 Uniform Crime Report, Indianapolis recorded 1,333.96 violent crimes per 100,000 people. Violent crimes include murder and non-negligent manslaughter, rape, robbery, and aggravated assault. In that same report, Indianapolis recorded 4,411.87 property crimes per 100,000 people. Property crimes include burglary, larceny-theft, and motor vehicle theft.
|
205 |
+
|
206 |
+
Until 2019, annual criminal homicide numbers had grown each year since 2011, reaching record highs from 2015 to 2018.[281] With 144 criminal homicides, 2015 surpassed 1998 as the year with the most murder investigations in the city. With 159 criminal homicides, 2018 stands as the most violent year on record in the city.[281] FBI data showed a 7 percent increase in violent crimes committed in Indianapolis, outpacing the rest of the state and country.[282] Law enforcement has blamed increased violence on a combination of root causes, including poverty, substance abuse, mental illness, and availability of firearms.[283]
|
207 |
+
|
208 |
+
Until fairly recently, Indianapolis was considered one of the most conservative major cities in the U.S.[72] Republicans held the mayor's office for 32 years (1967–1999) and controlled the City-County Council from its inception in 1970 until 2003.[72] Since the early 2000s, the city's politics have gradually shifted toward the Democrats. As of 2014[update], the city is regarded as politically moderate.[285]
|
209 |
+
|
210 |
+
Incumbent mayor Democrat Joe Hogsett faced Republican State Senator Jim Merritt and Libertarian Doug McNaughton in the 2019 Indianapolis mayoral election. Hogsett was elected to a second term, with 72% of the vote.[286] The 2019 City-County Council elections expanded Democratic control of the council, flipping six seats to hold a 20–5 supermajority over Republicans.[287]
|
211 |
+
|
212 |
+
Recent political issues of local concern have included cutting the city's structural deficit, planning and construction of a new criminal justice center, homelessness, streetlights, and improved mass transit and transportation infrastructure.[288][272]
|
213 |
+
|
214 |
+
Indiana University – Purdue University Indianapolis (IUPUI) was founded in 1969 after merging the branch campuses of Indiana University and Purdue University.[289] IUPUI's enrollment is 29,800, the third-largest in the state.[289] IUPUI has two colleges and 18 schools, including the Herron School of Art and Design, Robert H. McKinney School of Law, School of Dentistry, and the Indiana University School of Medicine, the largest medical school in the U.S.[290][291] The city is home to the largest campus for Ivy Tech Community College of Indiana, a state-funded community college serving 77,600 students statewide.[292]
|
215 |
+
|
216 |
+
Five private universities are based in Indianapolis. Established in 1855, Butler University is the oldest higher education institution in the city, with a total enrollment of about 5,000.[293] Affiliated with the Roman Catholic Church, Marian University was founded in 1936 when St. Francis Normal and Immaculate Conception Junior College merged, moving to Indianapolis in 1937. Marian has an enrollment of about 3,100 students.[294] Founded in 1902, the University of Indianapolis is affiliated with the United Methodist Church. The school's enrollment is 5,700 students.[295] Martin University was founded in 1977 and is the state's only predominantly black university.[296] Crossroads Bible College and Indiana Bible College are small Christian colleges in the city. The American College of Education is an accredited online university based in Indianapolis.
|
217 |
+
|
218 |
+
Satellite campuses in the city include Ball State University's R. Wayne Estopinal College of Architecture and Planning, Grace College, Indiana Institute of Technology, Indiana Wesleyan University, and Vincennes University.
|
219 |
+
|
220 |
+
Nine public school districts serve residents of Indianapolis: Franklin Township Community School Corporation, MSD Decatur Township, MSD Lawrence Township, MSD Perry Township, MSD Pike Township, MSD Warren Township, MSD Washington Township, MSD Wayne Township, and Indianapolis Public Schools (IPS). As of 2016[update], IPS was the second largest public school district in Indiana, serving nearly 30,000 students.[3][297]
|
221 |
+
|
222 |
+
Several private primary and secondary schools are operated through the Archdiocese of Indianapolis, charters, or other independent organizations. Founded in 1873, the Indianapolis Public Library includes the Central Library and 23 branches throughout Marion County. The Indianapolis Public Library served 4.2 million patrons in 2014, with a circulation of 15.9 million materials.[298] The Central Library houses a number of special collections, including the Center for Black Literature & Culture, the Chris Gonzalez Library and Archives, and the Nina Mason Pulliam Indianapolis Special Collections Room.[299]
|
223 |
+
|
224 |
+
Indianapolis is served by various print media. Founded in 1903, The Indianapolis Star is the city's daily morning newspaper. The Star is owned by Gannett Company, with a daily circulation of 127,064.[300] The Indianapolis News, the city's daily evening newspaper and oldest print publication, was published from 1869 to 1999. Notable weeklies include NUVO, an alternative weekly newspaper; the Indianapolis Recorder, a weekly serving the local African American community; the Indianapolis Business Journal, reporting on local business and real estate; and the Southside Times. Indianapolis Monthly is the city's monthly lifestyle publication.
|
225 |
+
|
226 |
+
Broadcast television network affiliates include WTTV 4 (CBS), WRTV 6 (ABC), WISH-TV 8 (The CW), WTHR-TV 13 (NBC), WDNI-CD 19 (Telemundo), WFYI-TV 20 (PBS), WNDY-TV 23 (MyNetworkTV), WUDZ-LD 28 (Buzzr), WSDI-LD 30 (FNX), WHMB-TV 40 (Family), WCLJ-TV 42 (Ion Life), WBXI-CD 47 (Start TV), WXIN-TV 59 (Fox), WIPX-TV 63 (Ion) and WDTI 69 (Daystar). The majority of commercial radio stations in the city are owned by Cumulus Media, Emmis Communications, iHeartMedia, and Urban One. Popular nationally syndicated radio program The Bob & Tom Show has been based at Indianapolis radio station WFBQ since 1983.[301] As of 2019[update], the Indianapolis metropolitan area was the 25th largest television market and 39th largest radio market in the U.S.[302][303]
|
227 |
+
|
228 |
+
Indianapolis natives Jane Pauley and David Letterman launched their broadcasting careers in local media, Pauley with WISH-TV and Letterman with WTHR-TV.[304][305] Motion pictures at least partially filmed in the city include Speedway,[306] To Please a Lady,[307] Winning,[308] Hoosiers,[309] Going All the Way,[310] Eight Men Out,[311] and Athlete A. Television series set in Indianapolis have included One Day at a Time; Good Morning, Miss Bliss; Men Behaving Badly; Close to Home;[312] the second season of anthology drama American Crime;[313] and the web television limited series Self Made.[314] Television series shot on location in the city include Cops[315] and HGTV's Good Bones.[316] NBC's Parks and Recreation occasionally filmed in the city, including the eponymous episode "Indianapolis."[317][318]
|
229 |
+
|
230 |
+
Indianapolis's transportation infrastructure comprises a complex network that includes a local public bus system, several private intercity bus providers, Amtrak passenger rail service via the Cardinal, 282 miles (454 km) of freight rail lines, an Interstate Highway System, two airports, a heliport, bikeshare system, 104 miles (167 km) of bike lanes, 34 miles (55 km) of multi-use paths, and 99 miles (159 km) of trails and greenways.[259] The city has also become known for its prevalence of electric scooters.[319]
|
231 |
+
|
232 |
+
According to the 2016 American Community Survey, 83.7% of working residents in the city commuted by driving alone, 8.4% carpooled, 1.5% used public transportation, and 1.8% walked. About 1.5% used all other forms of transportation, including taxicab, motorcycle, and bicycle. About 3.1% of working city residents worked at home.[320] In 2015, 10.5 percent of Indianapolis households lacked a car, which decreased to 8.7 percent in 2016, the same as the national average in that year. Indianapolis averaged 1.63 cars per household in 2016, compared to a national average of 1.8.[321]
|
233 |
+
|
234 |
+
Indianapolis International Airport (IND) sits on 7,700 acres (3,116 ha) approximately 7 miles (11 km) southwest of downtown Indianapolis. IND is the busiest airport in the state, serving more than 9.4 million passengers annually.[322] Completed in 2008, the Col. H. Weir Cook Terminal contains two concourses and 40 gates, connecting to 51 nonstop domestic and international destinations and averaging 145 daily departures.[323] As home to the second largest FedEx Express hub in the world, behind only Memphis, IND ranked as the seventh busiest U.S. airport in terms of air cargo throughput in 2015.[161][324]
|
235 |
+
|
236 |
+
The Indianapolis Airport Authority is a municipal corporation that oversees operations at five additional airports in the region, two of which are in Indianapolis: Eagle Creek Airpark (EYE), a relief airport for IND, and the Indianapolis Downtown Heliport (8A4).[325]
|
237 |
+
|
238 |
+
Four Interstates intersect the city: Interstate 65, Interstate 69, Interstate 70, and Interstate 74. Two auxiliary Interstate Highways are in the metropolitan area: a beltway (Interstate 465) and connector (Interstate 865). A $3 billion expansion project to extend Interstate 69 from Evansville to Indianapolis is in progress.[326] The Indiana Department of Transportation manages all Interstates, U.S. Highways, and Indiana State Roads within the city.
|
239 |
+
|
240 |
+
The city's Department of Public Works manages about 8,175 miles (13,156 km) of streets, in addition to 540 bridges and the city's alleys, sidewalks, and curbs.[259][327]
|
241 |
+
|
242 |
+
The Indianapolis Public Transportation Corporation, branded as IndyGo, operates the city's public bus system. In 2016, the Julia M. Carson Transit Center opened as the downtown hub for 27 of IndyGo's 31 bus routes; the system provided 9.2 million passenger trips that year.[328][259] In 2017, the City-County Council approved a voter referendum increasing Marion County's income tax to help fund IndyGo's first major system expansion since its founding in 1975. The Marion County Transit Plan outlines proposed system improvements, including three bus rapid transit (BRT) lines, new buses, sidewalks, and bus shelters, extended hours and weekend schedules, and a 70% increase in service hours on all existing local routes.[329][330][331] Phase I of IndyGo's Red Line, the first of the three planned BRT lines, began service on September 1, 2019.[332] The $96.3 million project includes a $75 million grant from the Federal Transit Administration.[333]
|
243 |
+
|
244 |
+
The Central Indiana Regional Transportation Authority (CIRTA) is a quasi-governmental agency that organizes regional car and vanpools and operates three public workforce connectors from Indianapolis to employment centers in Plainfield and Whitestown.
|
245 |
+
|
246 |
+
Reliance on the automobile has affected the city's development patterns, with Walk Score ranking Indianapolis as one of the least walkable large cities in the U.S.[334] The city has enhanced bicycle and pedestrian infrastructure in recent years, with some 104 miles (167 km) of on-street bike lanes, 34 miles (55 km) of multi-use paths, and 99 miles (159 km) of trails and greenways.[335][259] Indianapolis is designated a "Bronze Level" Bicycle Friendly Community by the League of American Bicyclists.[336] The Indianapolis Cultural Trail and BCycle launched Indiana Pacers Bikeshare in April 2014 as the city's bicycle-sharing system, consisting of 525 bicycles at 50 stations.[337] Transportation network companies Lyft and Uber are available by mobile app in the city, as well as traditional taxicabs.[338] After negotiations with city officials, Bird and Lime electric scooter-sharing launched in September 2018.[339]
|
247 |
+
|
248 |
+
Amtrak provides intercity rail service to Indianapolis via Union Station, serving about 30,000 passengers in 2015.[165] The Cardinal makes three weekly trips between New York City and Chicago. Several private intercity bus service providers stop in the city. Greyhound Lines operates a bus terminal at Union Station and a stop at Indianapolis International Airport's Ground Transportation Center.[340] Barons Bus Lines, Burlington Trailways, and Miller Transportation's Hoosier Ride also stop at Greyhound's Union Station bus terminal.[341] Megabus stops at the corner of North Alabama Street and East Market Street near the Indianapolis City Market.[342] GO Express Travel manages two shuttle services: GO Green Express between downtown Indianapolis and the Indianapolis International Airport, and Campus Commute between IUPUI and Indiana University Bloomington.[343][344] OurBus began daily service between Indianapolis and Chicago, with stops in Zionsville and Lafayette, filling a gap left after Amtrak's Hoosier State was discontinued in July 2019.[345]
|
249 |
+
|
250 |
+
Indiana University Health's Academic Health Center encompasses Marion County, with the medical centers of University Hospital, Methodist Hospital, and Riley Hospital for Children. The Academic Health Center is anchored by the Indiana University School of Medicine's principal research and education campus, the largest allopathic medical school in the U.S.[290][291] Riley Hospital for Children is among the nation's foremost pediatric health centers, recognized in all ten specialties by U.S. News and World Report, including top 25 honors in orthopedics (23), nephrology (22), gastroenterology and GI surgery (16), pulmonology (13), and urology (4).[347] The 430-bed facility also contains Indiana's only Pediatric Level I Trauma Center.[348]
|
251 |
+
|
252 |
+
Health & Hospital Corporation of Marion County, a municipal corporation, was formed in 1951 to manage the city's public health facilities and programs, including the Marion County Public Health Department and Eskenazi Health.[349] Eskenazi Health's flagship medical center, the Sidney & Lois Eskenazi Hospital, opened in 2013 after a $754 million project to replace Wishard Memorial Hospital.[350] The hospital includes an Adult Level I Trauma Center, 315 beds, and 275 exam rooms, annually serving about 1 million outpatients.[351] Opened in 1932, the Richard L. Roudebush VA Medical Center is Indiana's tertiary referral hospital for former armed services personnel, treating more than 60,000 veterans annually.[352]
|
253 |
+
|
254 |
+
Located on the city's far north side, St. Vincent Indianapolis Hospital is the flagship medical center of St. Vincent Health's 22-hospital system. St. Vincent Indianapolis includes Peyton Manning Children's Hospital, St. Vincent Heart Center of Indiana, St. Vincent Seton Specialty Hospital, and St. Vincent Women's Hospital. Franciscan Health Indianapolis's flagship medical center is on the city's far south side.
|
255 |
+
|
256 |
+
Community Health Network contains dozens of specialty hospitals and three emergency medical centers in Marion County, including Community Hospital South, Community Hospital North, and Community Hospital East. Community Hospital East replaced its 60-year-old facility with a $175 million, 150-bed hospital in 2019.[353] The campus also includes a $120 million, 159-bed state-funded psychiatric and chronic addiction treatment facility. The Indiana NeuroDiagnostic Institute and Advanced Treatment Center will replace the antiquated Larue D. Carter Memorial Hospital in 2019.[354]
|
257 |
+
|
258 |
+
According to Indianapolis-based American College of Sports Medicine's 2016 American Fitness Index Data Report, the city scored last of the 50 largest U.S. metropolitan areas for health and community fitness.[355] Higher instances of obesity, coronary heart disease, diabetes, smoking, and asthma contributed to the ranking.[356] After the annual listing expanded to the 100 largest U.S. cities in 2019, Indianapolis ranked 96th.[357]
|
259 |
+
|
260 |
+
Electricity is provided by Indianapolis Power & Light (IPL), a subsidiary of AES Corporation.[358] Despite a portfolio composed entirely of nonrenewable energy sources in 2007, IPL ended coal-firing operations at its Harding Street Station in 2016.[359] Today, IPL generates 3,343 MW of electricity at four power stations, two wind farms,[359] and 34 solar farms,[360] covering a service area of 528 square miles (1,370 km2).[361] In 2017, Indianapolis had the fourth-highest number of photovoltaics per capita in the U.S.[360]
|
261 |
+
|
262 |
+
Citizens Energy Group, the only public charitable trust formed to operate utilities in the U.S., provides residents with natural gas, water, wastewater, and thermal services.[362][363][364] Covanta Energy operates a waste-to-energy plant in the city, processing solid waste for steam production.[363][365] Steam is sold to Citizens' Perry K. Generating Station for the downtown Indianapolis district heating system, the second largest in the U.S.[366] Indianapolis's water is supplied through four surface water treatment plants, drawing from the White River, Fall Creek, and Eagle Creek; and four pumping stations, providing water supply from groundwater aquifers. Additional water supply is ensured by three reservoirs in the region.[221] A fourth reservoir near the northern suburb of Fishers will be completed in 2020.[367]
|
263 |
+
|
264 |
+
Eleven solid waste districts are managed by one of three garbage collection providers: the city's Department of Public Works, Republic Services, and Waste Management.[368][369] Republic Services and Ray's Trash Service collect curbside recycling.[370] The Department of Public Works' Operations Division is responsible for snow and ice removal, with a fleet of more than 70 snow removal trucks plowing approximately 7,300 miles (11,700 km) of public streets after winter weather events.[371][372]
|
265 |
+
|
266 |
+
Indianapolis has seven sister cities and two friendship cities as designated by Sister Cities International.[373] The sister-city relationship with Scarborough, Ontario, Canada lasted from 1996 to 1998, ending when Scarborough was amalgamated into Toronto.[374]
|
267 |
+
|
268 |
+
Charter sister cities
|
269 |
+
|
270 |
+
Friendship cities
|
271 |
+
|
272 |
+
As of 2018[update], Indianapolis contains ten foreign consulates, serving Denmark, France, Germany, Italy, Japan, Mexico, Portugal, Romania, Slovakia, and Switzerland.[375]
|
273 |
+
|
274 |
+
|
en/2724.html.txt
ADDED
@@ -0,0 +1 @@
1 |
+
Native Americans may refer to:
|
en/2725.html.txt
ADDED
@@ -0,0 +1 @@
1 |
+
Native Americans may refer to:
|
en/2726.html.txt
ADDED
@@ -0,0 +1,183 @@
1 |
+
|
2 |
+
|
3 |
+
Coordinates: 5°S 120°E
|
4 |
+
|
5 |
+
Indonesia (/ˌɪndəˈniːʒə/ IN-də-NEE-zhə), officially the Republic of Indonesia (Indonesian: Republik Indonesia [reˈpublik ɪndoˈnesia]),[a] is a country in Southeast Asia and Oceania, between the Indian and Pacific oceans. It consists of more than seventeen thousand islands, including Sumatra, Java, Borneo (Kalimantan), Sulawesi, and New Guinea (Papua). Indonesia is the world's largest island country and the 14th largest country by land area, at 1,904,569 square kilometres (735,358 square miles). With over 267 million people, it is the world's 4th most populous country as well as the most populous Muslim-majority country. Java, the world's most populous island, is home to more than half of the country's population.
|
6 |
+
|
7 |
+
The sovereign state is a presidential, constitutional republic with an elected legislature. It has 34 provinces, of which five have special status. The country's capital, Jakarta, is the second-most populous urban area in the world. The country shares land borders with Papua New Guinea, East Timor, and the eastern part of Malaysia. Other neighbouring countries include Singapore, Vietnam, the Philippines, Australia, Palau, and India's Andaman and Nicobar Islands. Despite its large population and densely populated regions, Indonesia has vast areas of wilderness that support one of the world's highest levels of biodiversity.
|
8 |
+
|
9 |
+
The Indonesian archipelago has been a valuable region for trade since at least the 7th century when Srivijaya and later Majapahit traded with entities from mainland China and the Indian subcontinent. Local rulers gradually absorbed foreign influences from the early centuries and Hindu and Buddhist kingdoms flourished. Sunni traders and Sufi scholars brought Islam, while Europeans introduced Christianity through colonisation. Although sometimes interrupted by the Portuguese, French and British, the Dutch were the foremost colonial power for much of their 350-year presence in the archipelago. The concept of "Indonesia" as a nation-state emerged in the early 20th century[13] and the country proclaimed its independence in 1945. However, it was not until 1949 that the Dutch recognised Indonesia's sovereignty following an armed and diplomatic conflict between the two.
Indonesia consists of hundreds of distinct native ethnic and linguistic groups, the largest being the Javanese. A shared identity has developed with the motto "Bhinneka Tunggal Ika" ("Unity in Diversity", literally "many, yet one"), defined by a national language, ethnic diversity, religious pluralism within a Muslim-majority population, and a history of colonialism and rebellion against it. The economy of Indonesia is the world's 16th largest by nominal GDP and 7th by GDP at PPP. The country is a member of several multilateral organisations, including the United Nations, the World Trade Organization, the International Monetary Fund and the G20, and a founding member of the Non-Aligned Movement, the Association of Southeast Asian Nations, Asia-Pacific Economic Cooperation, the East Asia Summit, the Asian Infrastructure Investment Bank, and the Organisation of Islamic Cooperation.
The name Indonesia derives from Greek Indos (Ἰνδός) and the word nesos (νῆσος), meaning "Indian islands".[14] The name dates to the 18th century, far predating the formation of independent Indonesia.[15] In 1850, George Windsor Earl, an English ethnologist, proposed the terms Indunesians—and, his preference, Malayunesians—for the inhabitants of the "Indian Archipelago or Malayan Archipelago".[16] In the same publication, one of his students, James Richardson Logan, used Indonesia as a synonym for Indian Archipelago.[17][18] However, Dutch academics writing in East Indies publications were reluctant to use Indonesia; they preferred Malay Archipelago (Dutch: Maleische Archipel); the Netherlands East Indies (Nederlandsch Oost Indië), popularly Indië; the East (de Oost); and Insulinde.[19]
After 1900, Indonesia became more common in academic circles outside the Netherlands, and native nationalist groups adopted it for political expression.[19] Adolf Bastian, of the University of Berlin, popularised the name through his book Indonesien oder die Inseln des Malayischen Archipels, 1884–1894. The first native scholar to use the name was Ki Hajar Dewantara when in 1913 he established a press bureau in the Netherlands, Indonesisch Pers-bureau.[15]
Fossilised remains of Homo erectus, popularly known as the "Java Man", suggest the Indonesian archipelago was inhabited two million to 500,000 years ago.[21][22][23] Homo sapiens reached the region around 43,000 BCE.[24] Austronesian peoples, who form the majority of the modern population, migrated to Southeast Asia from what is now Taiwan. They arrived in the archipelago around 2,000 BCE and confined the native Melanesian peoples to the far eastern regions as they spread east.[25] Ideal agricultural conditions and the mastering of wet-field rice cultivation as early as the eighth century BCE[26] allowed villages, towns, and small kingdoms to flourish by the first century CE. The archipelago's strategic sea-lane position fostered inter-island and international trade, including with Indian kingdoms and Chinese dynasties, from several centuries BCE.[27] Trade has since fundamentally shaped Indonesian history.[28][29]
From the seventh century CE, the Srivijaya naval kingdom flourished as a result of trade and the influences of Hinduism and Buddhism.[30] Between the eighth and tenth centuries CE, the agricultural Buddhist Sailendra and Hindu Mataram dynasties thrived and declined in inland Java, leaving grand religious monuments such as Sailendra's Borobudur and Mataram's Prambanan. The Hindu Majapahit kingdom was founded in eastern Java in the late 13th century, and under Gajah Mada, its influence stretched over much of present-day Indonesia. This period is often referred to as a "Golden Age" in Indonesian history.[31]
The earliest evidence of Islamized populations in the archipelago dates to the 13th century in northern Sumatra.[32] Other parts of the archipelago gradually adopted Islam, and it was the dominant religion in Java and Sumatra by the end of the 16th century. For the most part, Islam overlaid and mixed with existing cultural and religious influences, which shaped the predominant form of Islam in Indonesia, particularly in Java.[33]
The first Europeans arrived in the archipelago in 1512, when Portuguese traders, led by Francisco Serrão, sought to monopolise the sources of nutmeg, cloves, and cubeb pepper in the Maluku Islands.[34] Dutch and British traders followed. In 1602, the Dutch established the Dutch East India Company (VOC) and became the dominant European power for almost 200 years. The VOC was dissolved in 1800 following bankruptcy, and the Netherlands established the Dutch East Indies as a nationalised colony.[35]
For most of the colonial period, Dutch control over the archipelago was tenuous. Dutch forces were engaged continuously in quelling rebellions both on and off Java. The influence of local leaders such as Prince Diponegoro in central Java, Imam Bonjol in central Sumatra, and Pattimura in Maluku, together with a bloody 30-year war in Aceh, weakened the Dutch and tied up the colonial military forces.[36][37][38] Only in the early 20th century did their dominance extend to what was to become Indonesia's current boundaries.[39][40][41][42]
The Japanese invasion and subsequent occupation during World War II ended Dutch rule[43][44] and encouraged the previously suppressed independence movement. Two days after the surrender of Japan in August 1945, Sukarno and Mohammad Hatta, influential nationalist leaders, proclaimed Indonesian independence and were appointed president and vice-president respectively.[45] The Netherlands attempted to re-establish their rule, and a bitter armed and diplomatic struggle ended in December 1949 when the Dutch formally recognised Indonesian independence in the face of international pressure.[46][47] Despite extraordinary political, social and sectarian divisions, Indonesians, on the whole, found unity in their fight for independence.[48][49]
As president, Sukarno moved Indonesia from democracy towards authoritarianism and maintained power by balancing the opposing forces of the military, political Islam, and the increasingly powerful Communist Party of Indonesia (PKI).[50] Tensions between the military and the PKI culminated in an attempted coup in 1965. The army, led by Major General Suharto, countered by instigating a violent anti-communist purge that killed between 500,000 and one million people.[51] The PKI was blamed for the coup and effectively destroyed.[52][53][54] Suharto capitalised on Sukarno's weakened position, and following a drawn-out power play with Sukarno, Suharto was appointed president in March 1968. His "New Order" administration,[55] supported by the United States,[56][57][58] encouraged foreign direct investment,[59][60] which was a crucial factor in the subsequent three decades of substantial economic growth.
Indonesia was the country hardest hit by the 1997 Asian financial crisis.[61] It brought out popular discontent with the New Order's corruption and suppression of political opposition and ultimately ended Suharto's presidency.[62][63][64][65] In 1999, East Timor seceded from Indonesia, following its 1975 invasion by Indonesia[66] and a 25-year occupation that was marked by international condemnation of human rights abuses.[67]
In the post-Suharto era, democratic processes have been strengthened by enhancing regional autonomy and instituting the country's first direct presidential election in 2004.[68] Political, economic and social instability, corruption, and terrorism remained problems in the 2000s; however, in recent years, the economy has performed strongly. Although relations among the diverse population are mostly harmonious, acute sectarian discontent and violence remain a problem in some areas.[69] A political settlement to an armed separatist conflict in Aceh was achieved in 2005 following the 2004 Indian Ocean earthquake and tsunami that killed 130,000 Indonesians.[70] In 2014, Joko Widodo became the first directly elected president from outside the military and political elite.[71]
Indonesia lies between latitudes 11°S and 6°N, and longitudes 95°E and 141°E. It is the largest archipelagic country in the world, extending 5,120 kilometres (3,181 mi) from east to west and 1,760 kilometres (1,094 mi) from north to south.[72] According to the country's Coordinating Ministry for Maritime and Investments Affairs, Indonesia has 17,504 islands (16,056 of which are registered at the UN),[73] scattered over both sides of the equator, around 6,000 of which are inhabited.[74] The largest are Java, Sumatra, Borneo (shared with Brunei and Malaysia), Sulawesi, and New Guinea (shared with Papua New Guinea). Indonesia shares land borders with Malaysia on Borneo, Papua New Guinea on the island of New Guinea, and East Timor on the island of Timor, and maritime borders with Singapore, Malaysia, Vietnam, the Philippines, Palau, and Australia.
At 4,884 metres (16,024 ft), Puncak Jaya is Indonesia's highest peak, and Lake Toba in Sumatra is the largest lake, with an area of 1,145 km2 (442 sq mi). Indonesia's largest rivers are in Kalimantan and New Guinea and include the Kapuas, Barito, Mamberamo, Sepik and Mahakam. They serve as communication and transport links between the islands' river settlements.[75]
Indonesia lies along the equator, and its climate tends to be relatively even year-round.[76] Indonesia has two seasons—a wet season and a dry season—with no extremes of summer or winter.[77] For most of Indonesia, the dry season falls between May and October, with the wet season between November and April.[77] Indonesia's climate is almost entirely tropical, dominated by the tropical rainforest climate found on every large island of Indonesia. Cooler climate types exist in mountainous regions 1,300 to 1,500 metres (4,300 to 4,900 feet) above sea level. The oceanic climate (Köppen Cfb) prevails in highland areas adjacent to rainforest climates, with reasonably uniform precipitation year-round. In highland areas near the tropical monsoon and tropical savanna climates, the subtropical highland climate (Köppen Cwb) is prevalent, with a more pronounced dry season.
Some regions, such as Kalimantan and Sumatra, experience only slight differences in rainfall and temperature between the seasons, whereas others, such as Nusa Tenggara, experience far more pronounced differences with droughts in the dry season, and floods in the wet. Rainfall varies across regions, with more in western Sumatra, Java, and the interiors of Kalimantan and Papua, and less in areas closer to Australia, such as Nusa Tenggara, which tend to be dry. The almost uniformly warm waters that constitute 81% of Indonesia's area ensure that temperatures on land remain relatively constant. Humidity is quite high, at between 70 and 90%. Winds are moderate and generally predictable, with monsoons usually blowing in from the south and east in June through October, and from the northwest in November through March. Typhoons and large-scale storms pose little hazard to mariners; significant dangers come from swift currents in channels, such as the Lombok and Sape straits.
Tectonically, Indonesia is highly unstable, making it a site of numerous volcanoes and frequent earthquakes.[79] It lies on the Pacific Ring of Fire where the Indo-Australian Plate and the Pacific Plate are pushed under the Eurasian plate where they melt at about 100 kilometres (62 miles) deep. A string of volcanoes runs through Sumatra, Java, Bali and Nusa Tenggara, and then to the Banda Islands of Maluku to northeastern Sulawesi.[80] Of the 400 volcanoes, around 130 are active.[79] Between 1972 and 1991, there were 29 volcanic eruptions, mostly on Java.[81] Volcanic ash has made agricultural conditions unpredictable in some areas.[82] However, it has also resulted in fertile soils, a factor in historically sustaining high population densities of Java and Bali.[83]
A massive supervolcano erupted at present-day Lake Toba around 70,000 BCE. It is believed to have caused a global volcanic winter and cooling of the climate, and subsequently led to a genetic bottleneck in human evolution, though this is still debated.[84] The 1815 eruption of Mount Tambora and the 1883 eruption of Krakatoa were among the largest in recorded history. The former caused 92,000 deaths and created an umbrella of volcanic ash which spread over and blanketed parts of the archipelago, leaving much of the Northern Hemisphere without a summer in 1816.[85] The latter produced the loudest sound in recorded history and caused 36,000 deaths due to the eruption itself and the resulting tsunamis, with significant additional effects around the world years after the event.[86] Recent catastrophic disasters due to seismic activity include the 2004 Indian Ocean earthquake and the 2006 Yogyakarta earthquake.
Indonesia's size, tropical climate, and archipelagic geography support one of the world's highest levels of biodiversity.[87] Its flora and fauna is a mixture of Asian and Australasian species.[88] The islands of the Sunda Shelf (Sumatra, Java, Borneo, and Bali) were once linked to mainland Asia, and have a wealth of Asian fauna. Large species such as the Sumatran tiger, rhinoceros, orangutan, Asian elephant, and leopard were once abundant as far east as Bali, but numbers and distribution have dwindled drastically. Having been long separated from the continental landmasses, Sulawesi, Nusa Tenggara, and Maluku have developed their unique flora and fauna.[89] Papua was part of the Australian landmass and is home to a unique fauna and flora closely related to that of Australia, including over 600 bird species.[90] Forests cover approximately 70% of the country.[91] However, the forests of the smaller, and more densely populated Java, have largely been removed for human habitation and agriculture.
Indonesia is second only to Australia in terms of total endemic species, with 36% of its 1,531 species of bird and 39% of its 515 species of mammal being endemic.[92] Tropical seas surround Indonesia's 80,000 kilometres (50,000 miles) of coastline. The country has a range of sea and coastal ecosystems, including beaches, dunes, estuaries, mangroves, coral reefs, seagrass beds, coastal mudflats, tidal flats, algal beds, and small island ecosystems.[14] Indonesia is one of the Coral Triangle countries and has the world's greatest diversity of coral reef fish, with more than 1,650 species in eastern Indonesia alone.[93]
British naturalist Alfred Russel Wallace described a dividing line (Wallace Line) between the distribution of Indonesia's Asian and Australasian species.[94] It runs roughly north–south along the edge of the Sunda Shelf, between Kalimantan and Sulawesi, and along the deep Lombok Strait, between Lombok and Bali. Flora and fauna on the west of the line are generally Asian, while east from Lombok they are increasingly Australian until the tipping point at the Weber Line. In his 1869 book, The Malay Archipelago, Wallace described numerous species unique to the area.[95] The region of islands between his line and New Guinea is now termed Wallacea.[94]
Indonesia's large and growing population and rapid industrialisation present serious environmental issues. They are often given a lower priority due to high poverty levels and weak, under-resourced governance.[96] Problems include the destruction of peatlands, large-scale illegal deforestation—and the resulting Southeast Asian haze—over-exploitation of marine resources, air pollution, garbage management, and reliable water and wastewater services.[96] These issues contribute to Indonesia's poor ranking (number 116 out of 180 countries) in the 2020 Environmental Performance Index. The report also indicates that Indonesia's performance is generally below average in both regional and global context.[97]
Expansion of the palm oil industry, which requires significant changes to natural ecosystems, is the primary factor behind much of Indonesia's deforestation.[98] While it can generate wealth for local communities, it may degrade ecosystems and cause social problems.[99] This situation makes Indonesia the world's largest forest-based emitter of greenhouse gases.[100] It also threatens the survival of indigenous and endemic species. The International Union for Conservation of Nature (IUCN) identified 140 species of mammals as threatened, and 15 as critically endangered, including the Bali starling,[101] Sumatran orangutan,[102] and Javan rhinoceros.[103]
Several studies consider Indonesia to be at severe risk from the projected effects of climate change.[104] They predict that unreduced emissions would see an average temperature rise of around 1 °C (2 °F) by mid-century,[105][106] amounting to almost double the frequency of scorching days (above 35 °C or 95 °F) per year by 2030. That figure is predicted to rise further by the end of the century.[105] It would raise the frequency of drought and food shortages, having an impact on precipitation and the patterns of wet and dry seasons, the basis of Indonesia's agricultural system.[106] It would also encourage diseases and increases in wildfires, which threaten the country's enormous rainforest.[106] Rising sea levels, at current rates, would result in tens of millions of households being at risk of submersion by mid-century.[107] A majority of Indonesia's population lives in low-lying coastal areas,[106] including the capital Jakarta, the fastest-sinking city in the world.[108] Impoverished communities would likely be affected the most by climate change.[109]
Indonesia is a republic with a presidential system. Following the fall of the New Order in 1998, political and governmental structures have undergone sweeping reforms, with four constitutional amendments revamping the executive, legislative and judicial branches.[110] Chief among them is the delegation of power and authority to various regional entities while remaining a unitary state.[111] The President of Indonesia is the head of state and head of government, commander-in-chief of the Indonesian National Armed Forces (Tentara Nasional Indonesia, TNI), and the director of domestic governance, policy-making, and foreign affairs. The president may serve a maximum of two consecutive five-year terms.[112]
The highest representative body at the national level is the People's Consultative Assembly (Majelis Permusyawaratan Rakyat, MPR). Its main functions are supporting and amending the constitution, inaugurating and impeaching the president,[113][114] and formalising broad outlines of state policy. The MPR comprises two houses; the People's Representative Council (Dewan Perwakilan Rakyat, DPR), with 575 members, and the Regional Representative Council (Dewan Perwakilan Daerah, DPD), with 136.[115] The DPR passes legislation and monitors the executive branch. Reforms since 1998 have markedly increased its role in national governance,[110] while the DPD is a new chamber for matters of regional management.[116][114]
Most civil disputes appear before the State Court (Pengadilan Negeri); appeals are heard before the High Court (Pengadilan Tinggi). The Supreme Court of Indonesia (Mahkamah Agung) is the highest level of the judicial branch, and hears final cassation appeals and conducts case reviews. Other courts include the Constitutional Court (Mahkamah Konstitusi), which hears constitutional and political matters, and the Religious Court (Pengadilan Agama), which deals with codified Islamic Law (sharia) cases.[117] Additionally, the Judicial Commission (Komisi Yudisial) monitors the performance of judges.
Since 1999, Indonesia has had a multi-party system. In all legislative elections since the fall of the New Order, no political party has managed to win an overall majority of seats. The Indonesian Democratic Party of Struggle (PDI-P), which secured the most votes in the 2019 elections, is the party of the incumbent President, Joko Widodo.[118] Other notable parties include the Party of the Functional Groups (Golkar), the Great Indonesia Movement Party (Gerindra), the Democratic Party, and the Prosperous Justice Party (PKS). The 2019 elections resulted in nine political parties in the DPR, with a parliamentary threshold of 4% of the national vote.[119] The first general election was held in 1955 to elect members of the DPR and the Constitutional Assembly (Konstituante). At the national level, Indonesians did not elect a president until 2004. Since then, the president is elected for a five-year term, as are the party-aligned members of the DPR and the non-partisan DPD.[115][110] Beginning with 2015 local elections, elections for governors and mayors have occurred on the same date. As of 2019, both legislative and presidential elections coincide.
Indonesia has several levels of subdivisions. The first level is that of the provinces, with five out of a total of 34 having a special status. Each has a legislature (Dewan Perwakilan Rakyat Daerah, DPRD) and an elected governor. This number has evolved, with the most recent change being the split of North Kalimantan from East Kalimantan in 2012.[120] The second level is that of the regencies (kabupaten) and cities (kota), led by regents (bupati) and mayors (walikota) respectively and a legislature (DPRD Kabupaten/Kota). The third level is that of the districts (kecamatan, distrik in Papua, or kapanewon and kemantren in Yogyakarta), and the fourth is of the villages (either desa, kelurahan, kampung, nagari in West Sumatra, or gampong in Aceh).
The village is the lowest level of government administration. It is divided into several community groups (rukun warga, RW), which are further divided into neighbourhood groups (rukun tetangga, RT). In Java, the village (desa) is divided into smaller units called dusun or dukuh (hamlets), which are the same as RW. Following the implementation of regional autonomy measures in 2001, regencies and cities have become chief administrative units, responsible for providing most government services. The village administration level is the most influential on a citizen's daily life and handles matters of a village or neighbourhood through an elected village chief (lurah or kepala desa).
Aceh, Jakarta, Yogyakarta, Papua, and West Papua have greater legislative privileges and a higher degree of autonomy from the central government than the other provinces. A conservative Islamic territory, Aceh has the right to create some aspects of an independent legal system implementing sharia.[121] Yogyakarta is the only pre-colonial monarchy legally recognised in Indonesia, with the positions of governor and vice governor being prioritised for descendants of the Sultan of Yogyakarta and Paku Alam, respectively.[122] Papua and West Papua are the only provinces where the indigenous people have privileges in their local government.[123] Jakarta is the only city granted a provincial government due to its position as the capital of Indonesia.[124]
Indonesia maintains 132 diplomatic missions abroad, including 95 embassies.[125] The country adheres to what it calls a "free and active" foreign policy, seeking a role in regional affairs in proportion to its size and location but avoiding involvement in conflicts among other countries.[126]
Indonesia was a significant battleground during the Cold War. Numerous attempts to gain influence by the United States and the Soviet Union,[127][128] and China to some degree,[129] culminated in the 1965 coup attempt and the subsequent upheaval that led to a reorientation of foreign policy. Quiet alignment with the West while maintaining a non-aligned stance has characterised Indonesia's foreign policy since then.[130] Today, it maintains close relations with its neighbours and is a founding member of the Association of Southeast Asian Nations (ASEAN) and the East Asia Summit. In common with most of the Muslim world, Indonesia does not have diplomatic relations with Israel and has actively supported Palestine. However, observers have pointed out that Indonesia has ties with Israel, albeit discreetly.[131]
Indonesia has been a member of the United Nations since 1950 and was a founding member of the Non-Aligned Movement (NAM) and the Organisation of Islamic Cooperation (OIC).[132] Indonesia is a signatory to the ASEAN Free Trade Area agreement, the Cairns Group, and the World Trade Organization (WTO), and an occasional member of OPEC.[133] During the Indonesia–Malaysia confrontation, Indonesia withdrew from the UN due to Malaysia's election to the United Nations Security Council, although it returned 18 months later. It marked the first time in UN history that a member state had attempted a withdrawal.[134] Indonesia has been a humanitarian and development aid recipient since 1966,[135][136][137] and recently, the country has expressed interest in becoming an aid donor.[138]
Indonesia's Armed Forces (TNI) include the Army (TNI–AD), Navy (TNI–AL, which includes the Marine Corps), and Air Force (TNI–AU).[139] The army has about 400,000 active-duty personnel. Defence spending in the national budget was 0.7% of GDP in 2018,[140] with controversial involvement of military-owned commercial interests and foundations.[141] The Armed Forces were formed during the Indonesian National Revolution, when they undertook guerrilla warfare alongside informal militias. Since then, territorial lines have formed the basis of all TNI branches' structure, aimed at maintaining domestic stability and deterring foreign threats.[142] The military has held strong political influence since its founding, which peaked during the New Order. Political reforms in 1998 included the removal of the TNI's formal representation from the legislature. Nevertheless, its political influence remains, albeit at a reduced level.[143]
Since independence, the country has struggled to maintain unity against local insurgencies and separatist movements.[144] Some, notably in Aceh and Papua, have led to armed conflicts and subsequent allegations of human rights abuses and brutality from all sides.[145][146] The former was resolved peacefully in 2005,[70] while the latter continues, amid a significant, albeit imperfect, implementation of regional autonomy laws, and a reported decline in the levels of violence and human rights abuses since 2004.[147] Other engagements of the army include the campaign against Netherlands New Guinea to incorporate the territory into Indonesia, the Konfrontasi to oppose the creation of Malaysia, the mass killings of PKI, and the invasion of East Timor, which remains Indonesia's largest military operation.[148][149]
Indonesia has a mixed economy in which both the private sector and government play vital roles.[150] As the only G20 member state in Southeast Asia,[151] the country has the largest economy in the region and is classified as a newly industrialised country. As of 2019, it is the world's 16th largest economy by nominal GDP and 7th in terms of GDP at PPP, estimated at US$1.100 trillion and US$3.740 trillion respectively. Per capita GDP in PPP is US$14,020, while nominal per capita GDP is US$4,120. The debt-to-GDP ratio is 29.2%.[152] Services are the economy's largest sector, accounting for 43.4% of GDP (2018), followed by industry (39.7%) and agriculture (12.8%).[153] Since 2009, the services sector has employed more people than the other sectors, accounting for 47.7% of the total labour force, followed by agriculture (30.2%) and industry (21.9%).[154]
Over time, the structure of the economy has changed considerably.[155] Historically, it has been weighted heavily towards agriculture, reflecting both its stage of economic development and government policies in the 1950s and 1960s to promote agricultural self-sufficiency.[155] A gradual process of industrialisation and urbanisation began in the late 1960s and accelerated in the 1980s as falling oil prices saw the government focus on diversifying away from oil exports and towards manufactured exports.[155] This development continued throughout the 1980s and into the next decade despite the 1990 oil price shock, during which the GDP rose at an average rate of 7.1%. As a result, the official poverty rate fell from 60% to 15%.[156] Reduction of trade barriers from the mid-1980s made the economy more globally integrated. The growth, however, ended with the 1997 Asian financial crisis, which affected the economy severely. It caused a real GDP contraction by 13.1% in 1998, and inflation reached 72%. The economy reached its low point in mid-1999 with only 0.8% real GDP growth.
Relatively steady inflation[157] and increases in the GDP deflator and the Consumer Price Index[158] have contributed to strong economic growth in recent years. Since 2007, annual growth has accelerated to between 4% and 6% as a result of improvement in the banking sector and domestic consumption,[159] helping Indonesia weather the 2008–2009 Great Recession.[160] In 2011, the country regained the investment grade rating it had lost in 1997.[161] As of 2019, 9.41% of the population lived below the poverty line, and the official open unemployment rate was 5.28%.[162]
Indonesia has abundant natural resources like oil and natural gas, coal, tin, copper, gold, and nickel, while agriculture produces rice, palm oil, tea, coffee, cacao, medicinal plants, spices, and rubber. These commodities make up a large portion of the country's exports, with palm oil and coal briquettes as the leading export commodities. In addition to refined and crude petroleum as the main imports, telephones, vehicle parts and wheat cover the majority of additional imports.[163] China, the United States, Japan, Singapore, India, Malaysia, South Korea and Thailand are Indonesia's principal export markets and import partners.[164]
Indonesia's transport system has been shaped over time by the economic resource base of an archipelago, and the distribution of its 250 million people highly concentrated on Java.[165] All transport modes play a role in the country's transport system and are generally complementary rather than competitive. In 2016, the transport sector generated about 5.2% of GDP.[166]
The road transport system is predominant, with a total length of 542,310 kilometres (336,980 miles) as of 2018.[167] Jakarta has the longest bus rapid transit system in the world, boasting some 251.2 kilometres (156.1 miles) across 13 corridors and ten cross-corridor routes.[168] Rickshaws such as bajaj and becak and share taxis such as Angkot and Metromini are a regular sight in the country. Most of the railways are in Java and are used for both freight and passenger transport, with local commuter rail services complementing the inter-city rail network in several cities. In the late 2010s, Jakarta and Palembang were the first cities in Indonesia to have rapid transit systems, with more planned for other cities in the future.[169] In 2015, the government announced a plan to build a high-speed rail, which would be a first in Southeast Asia.[170]
Indonesia's largest airport, Soekarno–Hatta International Airport is the busiest in the Southern Hemisphere, serving 66 million passengers in 2018.[171] Ngurah Rai International Airport and Juanda International Airport are the country's second- and third-busiest airport respectively. Garuda Indonesia, the country's flag carrier since 1949, is one of the world's leading airlines and a member of the global airline alliance SkyTeam. Port of Tanjung Priok is the busiest and most advanced Indonesian port,[172] handling more than 50% of Indonesia's trans-shipment cargo traffic.
In 2017, Indonesia was the world's 9th largest energy producer with 4,200 terawatt-hours (14.2 quadrillion British thermal units), and the 15th largest energy consumer, with 2,100 terawatt-hours (7.1 quadrillion British thermal units).[173] The country has substantial energy resources, including 22 billion barrels (3.5 billion cubic metres) of conventional oil and gas reserves (of which about 4 billion barrels are recoverable), 8 billion barrels of oil-equivalent of coal-based methane (CBM) resources, and 28 billion tonnes of recoverable coal.[174] While reliance on domestic coal and imported oil has increased,[175] Indonesia has seen progress in renewable energy, with hydropower being the most abundant source. Furthermore, the country has the potential for geothermal, solar, wind, biomass and ocean energy.[176] Indonesia has set out to achieve 23% use of renewable energy by 2025 and 31% by 2050.[175] As of 2015, Indonesia's total national installed power generation capacity stood at 55,528.51 MW.[177]
The country's largest dam, Jatiluhur, serves several purposes, including hydroelectric power generation, water supply, flood control, irrigation and aquaculture. The earth-fill dam is 105 m (344 ft) high and impounds a reservoir of 3.0 billion m3 (2.4 million acre⋅ft). It helps to supply water to Jakarta and to irrigate 240,000 ha (590,000 acres) of rice fields,[178] and has an installed capacity of 186.5 MW feeding into the Java grid managed by the State Electricity Company (Perusahaan Listrik Negara, PLN).
Indonesia's expenditure on science and technology is relatively low, at less than 0.1% of GDP (2017).[179] Historical examples of scientific and technological developments include the paddy cultivation technique terasering, which is common in Southeast Asia, and the pinisi boats by the Bugis and Makassar people.[180] In the 1980s, Indonesian engineer Tjokorda Raka Sukawati invented a road construction technique named Sosrobahu that allows the construction of long stretches of flyovers above existing main roads with minimum traffic disruption. It later became widely used in several countries.[181] The country is also an active producer of passenger trains and freight wagons with its state-owned company, the Indonesian Railway Industry (INKA), and has exported trains abroad.[182]
Indonesia has a long history of developing military and small commuter aircraft and is the only country in Southeast Asia that builds and produces aircraft. Through its state-owned company, Indonesian Aerospace (PT. Dirgantara Indonesia), Indonesia has provided components for Boeing and Airbus. The company also collaborated with EADS CASA of Spain to develop the CN-235, which has seen use by several countries.[183] Former President B. J. Habibie played a vital role in this achievement.[184] Indonesia has also joined the South Korean programme to manufacture the fifth-generation jet fighter KAI KF-X.[185]
Indonesia has a space programme and space agency, the National Institute of Aeronautics and Space (Lembaga Penerbangan dan Antariksa Nasional, LAPAN). In the 1970s, Indonesia became the first developing country to operate a satellite system, called Palapa,[186] a series of communication satellites owned by Indosat Ooredoo. The first satellite, PALAPA A1, was launched on 8 July 1976 from the Kennedy Space Center in Florida, United States.[187] As of 2019, Indonesia has launched 18 satellites for various purposes,[188] and LAPAN has expressed a desire to put satellites in orbit with native launch vehicles by 2040.[189]
Tourism contributed around US$19.7 billion to GDP in 2019. In 2018, Indonesia received 15.8 million visitors, a growth of 12.5% over the previous year, with an average receipt of US$967 per visitor.[191][192] China, Singapore, Malaysia, Australia, and Japan are the top five sources of visitors to Indonesia. Since 2011, Wonderful Indonesia has been the slogan of the country's international marketing campaign to promote tourism.[193]
Nature and culture are prime attractions of Indonesian tourism. The former can boast a unique combination of a tropical climate, a vast archipelago, and a long stretch of beaches, and the latter complements these with a rich cultural heritage reflecting Indonesia's dynamic history and ethnic diversity. Indonesia has a well-preserved natural ecosystem with rain forests that stretch over about 57% of Indonesia's land (225 million acres). Forests on Sumatra and Kalimantan are examples of popular destinations, such as the Orangutan wildlife reserve. Moreover, Indonesia has one of the world's longest coastlines, measuring 54,716 kilometres (33,999 mi). The ancient Borobudur and Prambanan temples, as well as Toraja and Bali with their traditional festivities, are some of the popular destinations for cultural tourism.[195]
Indonesia has nine UNESCO World Heritage Sites, including the Komodo National Park and the Sawahlunto Coal Mine; and a further 19 in a tentative list that includes Bunaken National Park and Raja Ampat Islands.[196] Other attractions include the specific points in Indonesian history, such as the colonial heritage of the Dutch East Indies in the old towns of Jakarta and Semarang, and the royal palaces of Pagaruyung, Ubud, and Yogyakarta.[195]
The 2010 census recorded Indonesia's population as 237.6 million, the fourth largest in the world, with high population growth at 1.9%.[197] Java is the world's most populous island,[198] where 58% of the country's population lives.[199] The population density is 138 people per km2 (357 per sq mi), ranking 88th in the world,[200] although Java has a population density of 1,067 people per km2 (2,435 per sq mi). In 1961, the first post-colonial census recorded a total of 97 million people.[201] It is expected to grow to around 295 million by 2030 and 321 million by 2050.[202] The country currently possesses a relatively young population, with a median age of 30.2 years (2017 estimate).[74]
The spread of the population is uneven throughout the archipelago with a varying habitat and level of development, ranging from the megacity of Jakarta to uncontacted tribes in Papua.[203] As of 2010, about 49.7% of the population lives in urban areas.[204] Jakarta is the country's primate city and the second-most populous urban area in the world with over 34 million residents.[205] About 8 million Indonesians live overseas; most settled in Malaysia, the Netherlands, Saudi Arabia, the United Arab Emirates, Hong Kong, Singapore, the United States, and Australia.[206]
Indonesia is an ethnically diverse country, with around 300 distinct native ethnic groups.[207] Most Indonesians are descended from Austronesian peoples whose languages had origins in Proto-Austronesian, which possibly originated in what is now Taiwan. Another major grouping is the Melanesians, who inhabit eastern Indonesia (the Maluku Islands and Western New Guinea).[25][208][209]
The Javanese are the largest ethnic group, constituting 40.2% of the population,[4] and are politically dominant.[210] They are predominantly located in the central to eastern parts of Java, with sizable numbers in most provinces. The Sundanese, Malay, Batak, Madurese, Minangkabau and Buginese are the next largest groups in the country.[b] A sense of Indonesian nationhood exists alongside strong regional identities.[211]
The country's official language is Indonesian, a variant of Malay based on its prestige dialect, which for centuries had been the lingua franca of the archipelago. It was promoted by nationalists in the 1920s and achieved official status under the name Bahasa Indonesia in 1945.[212] As a result of centuries-long contact with other languages, it is rich in local and foreign influences, including from Javanese, Sundanese, Minangkabau, Hindi, Sanskrit, Chinese, Arabic, Dutch, Portuguese and English.[213][214][215] Nearly every Indonesian speaks the language due to its widespread use in education, academics, communications, business, politics, and mass media. Most Indonesians also speak at least one of more than 700 local languages,[3] often as their first language. Most belong to the Austronesian language family, while there are over 270 Papuan languages spoken in eastern Indonesia.[3] Of these, Javanese is the most widely spoken.[74]
In 1930, Dutch and other Europeans (Totok), Eurasians, and derivative people like the Indos numbered 240,000, or 0.4% of the total population.[216] Historically, they constituted only a tiny fraction of the native population and continue to do so today. Despite the Dutch presence for almost 350 years, the Dutch language never had a substantial number of speakers or official status.[217] The small minorities that can speak it or Dutch-based creole languages fluently are the aforementioned ethnic groups and descendants of Dutch colonisers. Today, some degree of fluency is retained by educated members of the oldest generation and by legal professionals,[218] as specific law codes are still only available in Dutch.[219]
Religion in Indonesia (2018)[5]
While the constitution stipulates religious freedom,[220][114] the government officially recognises only six religions: Islam, Protestantism, Roman Catholicism, Hinduism, Buddhism, and Confucianism;[221][222] with indigenous religions only partly acknowledged.[222] Indonesia is the world's most populous Muslim-majority country[223] with 227 million adherents in 2017, with the majority being Sunnis (99%).[224] The Shias and Ahmadis respectively constitute 1% (1–3 million) and 0.2% (200,000–400,000) of the Muslim population.[222][225] Almost 10% of Indonesians are Christians, while the rest are Hindus, Buddhists, and others. Most Hindus are Balinese,[226] and most Buddhists are Chinese Indonesians.[227]
The natives of the Indonesian archipelago originally practised indigenous animism and dynamism, beliefs that are common to Austronesian people.[228] They worshipped and revered ancestral spirits, and believed that supernatural spirits (hyang) might inhabit certain places such as large trees, stones, forests, mountains, or sacred sites.[228] Examples of Indonesian native belief systems include the Sundanese Sunda Wiwitan, Dayak's Kaharingan, and the Javanese Kejawèn. They have had a significant impact on how other faiths are practised, evidenced by a large proportion of people—such as the Javanese abangan, Balinese Hindus, and Dayak Christians—practising a less orthodox, syncretic form of their religion.[229]
Hindu influences reached the archipelago as early as the first century CE.[230] The Sundanese kingdom of Salakanagara in western Java, around 130 CE, was the first historically recorded Indianised kingdom in the archipelago.[231] Buddhism arrived around the 6th century,[232] and its history in Indonesia is closely related to that of Hinduism, as some empires based on Buddhism had their roots in the same period. The archipelago has witnessed the rise and fall of powerful and influential Hindu and Buddhist empires such as Majapahit, Sailendra, Srivijaya, and Mataram. Though no longer a majority, Hinduism and Buddhism remain defining influences in Indonesian culture.
Islam was introduced by Sunni traders of the Shafi'i fiqh, as well as Sufi traders from the Indian subcontinent and southern Arabian peninsula as early as the 8th century CE.[233][234] For the most part, Islam overlaid and mixed with existing cultural and religious influences that resulted in a distinct form of Islam.[33][235] Trade, missionary works such as by the Wali Sanga and Chinese explorer Zheng He, and military campaigns by several sultanates helped accelerate the spread of the religion.[236][237] By the end of the 16th century, Islam had supplanted Hinduism and Buddhism as the dominant religion of Java and Sumatra.
Catholicism was brought by Portuguese traders and missionaries such as Jesuit Francis Xavier, who visited and baptised several thousand locals.[238][239] Its spread faced difficulty due to the VOC policy of banning the religion and the Dutch hostility due to the Eighty Years' War against Catholic Spain's rule. Protestantism is mostly a result of Calvinist and Lutheran missionary efforts during the Dutch colonial era.[240][241][242] Although they are the most common branch, there is a multitude of other denominations elsewhere in the country.[243]
There was a sizable Jewish presence in the archipelago until 1945, mostly Dutch and some Baghdadi Jews. Since most left after Indonesia proclaimed independence, Judaism was never accorded official status, and only a tiny number of Jews remain today, mostly in Jakarta and Surabaya.[244] At the national and local level, Indonesia's political leadership and civil society groups have played a crucial role in interfaith relations, both positively and negatively. The invocation of the first principle of Indonesia's philosophical foundation, Pancasila (the belief in the one and only God), often serves as a reminder of religious tolerance,[245] though instances of intolerance have occurred. An overwhelming majority of Indonesians consider religion to be essential,[246] and its role is present in almost all aspects of society, including politics, education, marriage, and public holidays.[247][248]
Education is compulsory for 12 years.[249] Parents can choose between state-run, non-sectarian schools or private or semi-private religious (usually Islamic) schools, supervised by the ministries of Education and Religion, respectively.[250] Private international schools that do not follow the national curriculum are also available. The enrolment rate is 90% for primary education, 76% for secondary education, and 24% for tertiary education (2015). The literacy rate is 95% (2016), and the government spends about 3.6% of GDP (2015) on education.[251] In 2018, there were more than 4,500 higher educational institutions in Indonesia.[252] The top universities are the Java-based University of Indonesia, Bandung Institute of Technology and Gadjah Mada University.[252] Andalas University is pioneering the establishment of a leading university outside of Java.[253]
Government expenditure on healthcare was about 3.3% of GDP in 2016.[254] As part of an attempt to achieve universal health care, the government launched the National Health Insurance (Jaminan Kesehatan Nasional, JKN) scheme in 2014, which provides health care to citizens.[255] It includes coverage for a range of services from public as well as private firms that have opted to join the scheme. In recent decades, there have been remarkable improvements such as rising life expectancy (from 63 in 1990 to 71 in 2012) and declining child mortality (from 84 deaths per 1,000 births in 1990 to 27 deaths in 2015).[256] Nevertheless, Indonesia continues to face challenges that include maternal and child health, low air quality, malnutrition, a high rate of smoking, and infectious diseases.[257]
Nearly 80% of Indonesia's population lives in the western parts of the archipelago,[258] but these areas are growing at a slower pace than the rest of the country. This situation creates a gap in wealth, unemployment rate, and health between densely populated islands and economic centres (such as Sumatra and Java) and sparsely populated, disadvantaged areas (such as Maluku and Papua).[259][260] Racism, especially against Chinese Indonesians since the colonial period, is still prevalent today.[261][262] There has been a marked increase in religious intolerance since 1998, with the most recent high-profile case being that of the Chinese Christian former governor of Jakarta, Basuki Tjahaja Purnama.[263] LGBT issues have recently gained attention in Indonesia.[264] While homosexuality is legal in most parts of the country, it is illegal in Aceh and South Sumatra.[265] LGBT people and activists have regularly faced fierce opposition, intimidation, and discrimination, including from the authorities.[266]
The cultural history of the Indonesian archipelago spans more than two millennia. Influences from the Indian subcontinent, mainland China, the Middle East, Europe,[267][268] and the Austronesian peoples have historically shaped the cultural, linguistic and religious make-up of the archipelago. As a result, modern-day Indonesia has a multicultural, multilingual and multi-ethnic society,[3][207] with a complex cultural mixture that differs significantly from the original indigenous cultures. Indonesia currently holds ten items of UNESCO's Intangible Cultural Heritage, including a wayang puppet theatre, kris, batik,[269] pencak silat, angklung, and the three genres of traditional Balinese dance.[270]
Indonesian arts include both age-old art forms developed through centuries and a recently developed contemporary art. Despite often displaying local ingenuity, Indonesian arts have absorbed foreign influences—most notably from India, the Arab world, China and Europe, as a result of contacts and interactions facilitated, and often motivated, by trade.[271] Painting is an established and developed art in Bali, where its people are famed for their artistry. Their painting tradition started as classical Kamasan or Wayang style visual narrative, derived from visual art discovered on candi bas reliefs in eastern Java.[272]
There have been numerous discoveries of megalithic sculptures in Indonesia.[273] Subsequently, tribal art has flourished within the culture of Nias, Batak, Asmat, Dayak and Toraja.[274][275] Wood and stone are common materials used as the media for sculpting among these tribes. Between the 8th and 15th centuries, Javanese civilisation developed a refined art of stone sculpture and architecture influenced by Hindu-Buddhist Dharmic civilisation. The temples of Borobudur and Prambanan are among the most famous examples of the practice.[276]
As with the arts, Indonesian architecture has absorbed foreign influences that have brought cultural changes and a profound effect on building styles and techniques. The most dominant has traditionally been Indian; however, Chinese, Arab, and European influences have also been significant. Traditional carpentry, masonry, stone and woodwork techniques and decorations have thrived in vernacular architecture, and a number of traditional house (rumah adat) styles have developed. The traditional houses and settlements in the country vary by ethnic group, and each has a specific custom and history.[277] Examples include Toraja's Tongkonan, Minangkabau's Rumah Gadang and Rangkiang, the Javanese-style Pendopo pavilion with Joglo-style roof, Dayak's longhouses, various Malay houses, Balinese houses and temples, and also different forms of rice barns (lumbung).
The music of Indonesia predates historical records. Various indigenous tribes incorporate chants and songs accompanied by musical instruments in their rituals. Angklung, kacapi suling, gong, gamelan, talempong, kulintang, and sasando are examples of traditional Indonesian instruments. The diverse world of Indonesian music genres is the result of the musical creativity of its people, and subsequent cultural encounters with foreign influences. These include gambus and qasida from the Middle East,[278] keroncong from Portugal,[279] and dangdut—one of the most popular music genres in Indonesia—with notable Hindi influence as well as Malay orchestras.[280] Today, the Indonesian music industry enjoys both nationwide and regional popularity in Malaysia, Singapore, and Brunei, due to a common culture and the mutual intelligibility of Indonesian and Malay.
Indonesian dances have a diverse history, with more than 3,000 original dances. Scholars believe that they had their beginning in rituals and religious worship.[281] Examples include war dances, dances of witch doctors, and dances to call for rain or for agricultural rituals such as Hudoq. Indonesian dances derive their influences from the archipelago's prehistoric and tribal, Hindu-Buddhist, and Islamic periods. Recently, modern dances and urban teen dances have gained popularity due to the influence of Western culture, as well as those of Japan and South Korea to some extent. Traditional dances, however, such as the Javanese, Sundanese, Minang, Balinese, and Saman, continue to be a living and dynamic tradition.
Indonesia has various styles of clothing as a result of its long and rich cultural history. The national costume has its origins in the indigenous culture of the country and traditional textile traditions. The Javanese Batik and Kebaya[282] are arguably Indonesia's most recognised national costume, though they have Sundanese and Balinese origins as well.[283] Each province has a representation of traditional attire and dress,[267] such as Ulos of Batak from North Sumatra; Songket of Malay and Minangkabau from Sumatra; and Ikat of Sasak from Lombok. People wear national and regional costumes during traditional weddings, formal ceremonies, music performances, government and official occasions,[283] and they vary from traditional to modern attire.
Wayang, the Javanese, Sundanese, and Balinese shadow puppet theatre, displays several mythological legends such as the Ramayana and the Mahabharata.[284] Other forms of local drama include the Javanese Ludruk and Ketoprak, the Sundanese Sandiwara, Betawi Lenong,[285][286] and various Balinese dance dramas. They incorporate humour and jest and often involve audiences in their performances.[287] Some theatre traditions also include music, dancing and the silat martial art, such as Randai from the Minangkabau people of West Sumatra. It is usually performed for traditional ceremonies and festivals,[288][289] and is based on semi-historical Minangkabau legends and love stories.[289] Modern performing arts have also developed in Indonesia with a distinct style of drama. Notable theatre, dance, and drama troupes such as Teater Koma are famous for often portraying social and political satire of Indonesian society.[290]
The first film produced in the archipelago was Loetoeng Kasaroeng,[291] a silent film by Dutch director L. Heuveldorp. The film industry expanded after independence, with six films made in 1949 rising to 58 in 1955. Usmar Ismail, who made significant imprints in the 1950s and 1960s, is generally considered to be the pioneer of Indonesian films.[292] The latter part of the Sukarno era saw the use of cinema for nationalistic, anti-Western purposes, and foreign films were subsequently banned, while the New Order utilised a censorship code that aimed to maintain social order.[293] Production of films peaked during the 1980s, although it declined significantly in the next decade.[291] Notable films in this period include Pengabdi Setan (1980), Nagabonar (1987), Tjoet Nja' Dhien (1988), Catatan Si Boy (1989), and Warkop's comedy films.
Independent filmmaking brought a rebirth of the film industry after 1998, with films starting to address previously banned topics such as religion, race, and love.[293] Between 2000 and 2005, the number of films released each year steadily increased.[294] Riri Riza and Mira Lesmana were among the new generation of filmmakers who co-directed Kuldesak (1999), Petualangan Sherina (2000), Ada Apa dengan Cinta? (2002), and Laskar Pelangi (2008). In 2016, Warkop DKI Reborn: Jangkrik Boss Part 1 smashed box office records, becoming the most-watched Indonesian film with 6.8 million tickets sold.[295] Indonesia has held film festivals and awards, including the Indonesian Film Festival (Festival Film Indonesia), which has been held intermittently since 1955. It hands out the Citra Award, the film industry's most prestigious award. From 1973 to 1992, the festival was held annually; it was then discontinued until its revival in 2004.
Media freedom increased considerably after the fall of the New Order, during which the Ministry of Information monitored and controlled domestic media and restricted foreign media.[296] The television market includes several national commercial networks and provincial networks that compete with public TVRI, which held a monopoly on TV broadcasting from 1962 to 1989. By the early 21st century, the improved communications system had brought television signals to every village and people can choose from up to 11 channels.[297] Private radio stations carry news bulletins while foreign broadcasters supply programmes. The number of printed publications has increased significantly since 1998.[297]
Like other developing countries, Indonesia began development of the Internet in the early 1990s. Its first commercial Internet service provider, PT. Indo Internet, began operation in Jakarta in 1994.[298] The country had 171 million Internet users in 2018, with a penetration rate that keeps increasing annually.[299] Most are between the ages of 15 and 19 and depend primarily on mobile phones for access, which outnumber both laptops and desktop computers.[300]
The oldest evidence of writing in the Indonesian archipelago is a series of Sanskrit inscriptions dated to the 5th century. Many of Indonesia's peoples have firmly rooted oral traditions, which help to define and preserve their cultural identities.[302] In written poetry and prose, several traditional forms dominate, mainly syair, pantun, gurindam, hikayat and babad. Examples of these forms include Syair Abdul Muluk, Hikayat Hang Tuah, Sulalatus Salatin, and Babad Tanah Jawi.[303]
Early modern Indonesian literature originates in Sumatran tradition.[304][305] Literature and poetry flourished during the decades leading up to and after independence. Balai Pustaka, the government bureau for popular literature, was instituted in 1917 to promote the development of indigenous literature. Many scholars consider the 1950s and 1960s to be the Golden Age of Indonesian Literature.[306] The style and characteristics of modern Indonesian literature vary according to the dynamics of the country's political and social landscape,[306] most notably the war of independence in the second half of 1940s and the anti-communist mass killings in the mid-1960s.[307] Notable literary figures of the modern era include Multatuli, Mohammad Yamin, Merari Siregar, Marah Roesli, Pramoedya Ananta Toer, and Ayu Utami.
Indonesian cuisine is one of the most diverse, vibrant, and colourful in the world, full of intense flavour.[308] Many regional cuisines exist, often based upon indigenous culture and foreign influences such as Chinese, European, Middle Eastern, and Indian precedents.[309] Rice is the leading staple food and is served with side dishes of meat and vegetables. Spices (notably chilli), coconut milk, fish and chicken are fundamental ingredients.[310]
Some popular dishes such as nasi goreng, gado-gado, sate, and soto are prevalent and considered national dishes. The Ministry of Tourism, however, chose tumpeng as the official national dish in 2014, describing it as binding the diversity of various culinary traditions.[311] Other popular dishes include rendang, one of the many Padang cuisines along with dendeng and gulai. In 2017, rendang was chosen as the "World's Most Delicious Food" in a CNN Travel readers' choice poll.[312] Another fermented food is oncom, which is similar in some ways to tempeh but uses a variety of bases (not only soy) and different fungi, and is particularly popular in West Java.
Sports are generally male-oriented, and spectators are often associated with illegal gambling.[313] Badminton and football are the most popular sports. Indonesia is one of only five countries that have won the Thomas and Uber Cup, the world team championship of men's and women's badminton. Along with weightlifting, badminton is the sport that contributes the most to Indonesia's Olympic medal tally. Liga 1 is the country's premier football club league. On the international stage, Indonesia has experienced limited success despite being the first Asian team to participate in the FIFA World Cup, in 1938 as the Dutch East Indies.[314] On the continental level, Indonesia won the bronze medal at the 1958 Asian Games. Indonesia's first appearance in the AFC Asian Cup was in 1996, and it successfully qualified for the next three tournaments, though it failed to progress beyond the group stage on each occasion.
Other popular sports include boxing and basketball, which has a long history in Indonesia and was part of the first National Games (Pekan Olahraga Nasional, PON) in 1948.[315] Famous Indonesian boxers include Ellyas Pical, three-time IBF Super flyweight champion; Nico Thomas; Muhammad Rachman; and Chris John.[316] In motorsport, Rio Haryanto became the first Indonesian to compete in Formula One in 2016.[317] Sepak takraw and karapan sapi (bull racing) in Madura are some examples of traditional sports in Indonesia. In areas with a history of tribal warfare, mock fighting contests are held, such as caci in Flores and pasola in Sumba. Pencak Silat is an Indonesian martial art that in 1987 became one of the sporting events in the Southeast Asian Games, with Indonesia appearing as one of the leading competitors. Indonesia is one of the top sports powerhouses in Southeast Asia, having won the Southeast Asian Games ten times since 1977, most recently in 2011.
Government
General
en/2727.html.txt
ADDED
@@ -0,0 +1,183 @@
Coordinates: 5°S 120°E
Indonesia (/ˌɪndəˈniːʒə/ (listen) IN-də-NEE-zhə), officially the Republic of Indonesia (Indonesian: Republik Indonesia [reˈpublik ɪndoˈnesia] (listen)),[a] is a country in Southeast Asia and Oceania, between the Indian and Pacific oceans. It consists of more than seventeen thousand islands, including Sumatra, Java, Borneo (Kalimantan), Sulawesi, and New Guinea (Papua). Indonesia is the world's largest island country and the 14th largest country by land area, at 1,904,569 square kilometres (735,358 square miles). With over 267 million people, it is the world's 4th most populous country as well as the most populous Muslim-majority country. Java, the world's most populous island, is home to more than half of the country's population.
The sovereign state is a presidential, constitutional republic with an elected legislature. It has 34 provinces, of which five have special status. The country's capital, Jakarta, is the second-most populous urban area in the world. The country shares land borders with Papua New Guinea, East Timor, and the eastern part of Malaysia. Other neighbouring countries include Singapore, Vietnam, the Philippines, Australia, Palau, and India's Andaman and Nicobar Islands. Despite its large population and densely populated regions, Indonesia has vast areas of wilderness that support one of the world's highest levels of biodiversity.
The Indonesian archipelago has been a valuable region for trade since at least the 7th century when Srivijaya and later Majapahit traded with entities from mainland China and the Indian subcontinent. Local rulers gradually absorbed foreign influences from the early centuries and Hindu and Buddhist kingdoms flourished. Sunni traders and Sufi scholars brought Islam, while Europeans introduced Christianity through colonisation. Although sometimes interrupted by the Portuguese, French and British, the Dutch were the foremost colonial power for much of their 350-year presence in the archipelago. The concept of "Indonesia" as a nation-state emerged in the early 20th century[13] and the country proclaimed its independence in 1945. However, it was not until 1949 that the Dutch recognised Indonesia's sovereignty following an armed and diplomatic conflict between the two.
Indonesia consists of hundreds of distinct native ethnic and linguistic groups, with the largest one being the Javanese. A shared identity has developed with the motto "Bhinneka Tunggal Ika" ("Unity in Diversity" literally, "many, yet one"), defined by a national language, ethnic diversity, religious pluralism within a Muslim-majority population, and a history of colonialism and rebellion against it. The economy of Indonesia is the world's 16th largest by nominal GDP and 7th by GDP at PPP. The country is a member of several multilateral organisations, including the United Nations, World Trade Organization, International Monetary Fund, G20, and a founding member of Non-Aligned Movement, Association of Southeast Asian Nations, Asia-Pacific Economic Cooperation, East Asia Summit, Asian Infrastructure Investment Bank, and Organisation of Islamic Cooperation.
The name Indonesia derives from Greek Indos (Ἰνδός) and the word nesos (νῆσος), meaning "Indian islands".[14] The name dates to the 18th century, far predating the formation of independent Indonesia.[15] In 1850, George Windsor Earl, an English ethnologist, proposed the terms Indunesians—and, his preference, Malayunesians—for the inhabitants of the "Indian Archipelago or Malayan Archipelago".[16] In the same publication, one of his students, James Richardson Logan, used Indonesia as a synonym for Indian Archipelago.[17][18] However, Dutch academics writing in East Indies publications were reluctant to use Indonesia; they preferred Malay Archipelago (Dutch: Maleische Archipel); the Netherlands East Indies (Nederlandsch Oost Indië), popularly Indië; the East (de Oost); and Insulinde.[19]
After 1900, Indonesia became more common in academic circles outside the Netherlands, and native nationalist groups adopted it for political expression.[19] Adolf Bastian, of the University of Berlin, popularised the name through his book Indonesien oder die Inseln des Malayischen Archipels, 1884–1894. The first native scholar to use the name was Ki Hajar Dewantara when in 1913 he established a press bureau in the Netherlands, Indonesisch Pers-bureau.[15]
Fossilised remains of Homo erectus, popularly known as the "Java Man", suggest the Indonesian archipelago was inhabited two million to 500,000 years ago.[21][22][23] Homo sapiens reached the region around 43,000 BCE.[24] Austronesian peoples, who form the majority of the modern population, migrated to Southeast Asia from what is now Taiwan. They arrived in the archipelago around 2,000 BCE and confined the native Melanesian peoples to the far eastern regions as they spread east.[25] Ideal agricultural conditions and the mastering of wet-field rice cultivation as early as the eighth century BCE[26] allowed villages, towns, and small kingdoms to flourish by the first century CE. The archipelago's strategic sea-lane position fostered inter-island and international trade, including with Indian kingdoms and Chinese dynasties, from several centuries BCE.[27] Trade has since fundamentally shaped Indonesian history.[28][29]
From the seventh century CE, the Srivijaya naval kingdom flourished as a result of trade and the influences of Hinduism and Buddhism.[30] Between the eighth and tenth centuries CE, the agricultural Buddhist Sailendra and Hindu Mataram dynasties thrived and declined in inland Java, leaving grand religious monuments such as Sailendra's Borobudur and Mataram's Prambanan. The Hindu Majapahit kingdom was founded in eastern Java in the late 13th century, and under Gajah Mada, its influence stretched over much of present-day Indonesia. This period is often referred to as a "Golden Age" in Indonesian history.[31]
The earliest evidence of Islamized populations in the archipelago dates to the 13th century in northern Sumatra.[32] Other parts of the archipelago gradually adopted Islam, and it was the dominant religion in Java and Sumatra by the end of the 16th century. For the most part, Islam overlaid and mixed with existing cultural and religious influences, which shaped the predominant form of Islam in Indonesia, particularly in Java.[33]
The first Europeans arrived in the archipelago in 1512, when Portuguese traders, led by Francisco Serrão, sought to monopolise the sources of nutmeg, cloves, and cubeb pepper in the Maluku Islands.[34] Dutch and British traders followed. In 1602, the Dutch established the Dutch East India Company (VOC) and became the dominant European power for almost 200 years. The VOC was dissolved in 1800 following bankruptcy, and the Netherlands established the Dutch East Indies as a nationalised colony.[35]
For most of the colonial period, Dutch control over the archipelago was tenuous. Dutch forces were engaged continuously in quelling rebellions both on and off Java. The influence of local leaders such as Prince Diponegoro in central Java, Imam Bonjol in central Sumatra, and Pattimura in Maluku, as well as a bloody 30-year war in Aceh, weakened the Dutch and tied up the colonial military forces.[36][37][38] Only in the early 20th century did their dominance extend to what was to become Indonesia's current boundaries.[39][40][41][42]
The Japanese invasion and subsequent occupation during World War II ended Dutch rule[43][44] and encouraged the previously suppressed independence movement. Two days after the surrender of Japan in August 1945, Sukarno and Mohammad Hatta, influential nationalist leaders, proclaimed Indonesian independence and were appointed president and vice-president respectively.[45] The Netherlands attempted to re-establish their rule, and a bitter armed and diplomatic struggle ended in December 1949 when the Dutch formally recognised Indonesian independence in the face of international pressure.[46][47] Despite extraordinary political, social and sectarian divisions, Indonesians, on the whole, found unity in their fight for independence.[48][49]
As president, Sukarno moved Indonesia from democracy towards authoritarianism and maintained power by balancing the opposing forces of the military, political Islam, and the increasingly powerful Communist Party of Indonesia (PKI).[50] Tensions between the military and the PKI culminated in an attempted coup in 1965. The army, led by Major General Suharto, countered by instigating a violent anti-communist purge that killed between 500,000 and one million people.[51] The PKI was blamed for the coup and effectively destroyed.[52][53][54] Suharto capitalised on Sukarno's weakened position, and following a drawn-out power play with Sukarno, Suharto was appointed president in March 1968. His "New Order" administration,[55] supported by the United States,[56][57][58] encouraged foreign direct investment,[59][60] which was a crucial factor in the subsequent three decades of substantial economic growth.
Indonesia was the country hardest hit by the 1997 Asian financial crisis.[61] It brought out popular discontent with the New Order's corruption and suppression of political opposition and ultimately ended Suharto's presidency.[62][63][64][65] In 1999, East Timor seceded from Indonesia, following its 1975 invasion by Indonesia[66] and a 25-year occupation that was marked by international condemnation of human rights abuses.[67]
In the post-Suharto era, democratic processes have been strengthened by enhancing regional autonomy and instituting the country's first direct presidential election in 2004.[68] Political, economic and social instability, corruption, and terrorism remained problems in the 2000s; however, in recent years, the economy has performed strongly. Although relations among the diverse population are mostly harmonious, acute sectarian discontent and violence remain a problem in some areas.[69] A political settlement to an armed separatist conflict in Aceh was achieved in 2005 following the 2004 Indian Ocean earthquake and tsunami that killed 130,000 Indonesians.[70] In 2014, Joko Widodo became the first directly elected president from outside the military and political elite.[71]
Indonesia lies between latitudes 11°S and 6°N, and longitudes 95°E and 141°E. It is the largest archipelagic country in the world, extending 5,120 kilometres (3,181 mi) from east to west and 1,760 kilometres (1,094 mi) from north to south.[72] According to the country's Coordinating Ministry for Maritime and Investments Affairs, Indonesia has 17,504 islands (16,056 of which are registered at the UN),[73] scattered over both sides of the equator, around 6,000 of which are inhabited.[74] The largest are Java, Sumatra, Borneo (shared with Brunei and Malaysia), Sulawesi, and New Guinea (shared with Papua New Guinea). Indonesia shares land borders with Malaysia on Borneo, Papua New Guinea on the island of New Guinea, and East Timor on the island of Timor, and maritime borders with Singapore, Malaysia, Vietnam, the Philippines, Palau, and Australia.
At 4,884 metres (16,024 ft), Puncak Jaya is Indonesia's highest peak, and Lake Toba in Sumatra is the largest lake, with an area of 1,145 km2 (442 sq mi). Indonesia's largest rivers are in Kalimantan and New Guinea and include Kapuas, Barito, Mamberamo, Sepik and Mahakam. They serve as communication and transport links between the island's river settlements.[75]
Indonesia lies along the equator, and its climate tends to be relatively even year-round.[76] Indonesia has two seasons—a wet season and a dry season—with no extremes of summer or winter.[77] For most of Indonesia, the dry season falls between May and October, with the wet season between November and April.[77] Indonesia's climate is almost entirely tropical, dominated by the tropical rainforest climate found on every large island of Indonesia. Cooler climate types do exist in mountainous regions that are 1,300 to 1,500 metres (4,300 to 4,900 feet) above sea level. The oceanic climate (Köppen Cfb) prevails in highland areas adjacent to rainforest climates, with reasonably uniform precipitation year-round. In highland areas near the tropical monsoon and tropical savanna climates, the subtropical highland climate (Köppen Cwb) is prevalent, with a more pronounced dry season.
Some regions, such as Kalimantan and Sumatra, experience only slight differences in rainfall and temperature between the seasons, whereas others, such as Nusa Tenggara, experience far more pronounced differences with droughts in the dry season, and floods in the wet. Rainfall varies across regions, with more in western Sumatra, Java, and the interiors of Kalimantan and Papua, and less in areas closer to Australia, such as Nusa Tenggara, which tend to be dry. The almost uniformly warm waters that constitute 81% of Indonesia's area ensure that temperatures on land remain relatively constant. Humidity is quite high, at between 70 and 90%. Winds are moderate and generally predictable, with monsoons usually blowing in from the south and east in June through October, and from the northwest in November through March. Typhoons and large-scale storms pose little hazard to mariners; significant dangers come from swift currents in channels, such as the Lombok and Sape straits.
Tectonically, Indonesia is highly unstable, making it a site of numerous volcanoes and frequent earthquakes.[79] It lies on the Pacific Ring of Fire, where the Indo-Australian Plate and the Pacific Plate are pushed under the Eurasian Plate and melt at a depth of about 100 kilometres (62 miles). A string of volcanoes runs through Sumatra, Java, Bali and Nusa Tenggara, and then through the Banda Islands of Maluku to northeastern Sulawesi.[80] Of the 400 volcanoes, around 130 are active.[79] Between 1972 and 1991, there were 29 volcanic eruptions, mostly on Java.[81] Volcanic ash has made agricultural conditions unpredictable in some areas.[82] However, it has also resulted in fertile soils, a factor in historically sustaining the high population densities of Java and Bali.[83]
A massive supervolcano erupted at present-day Lake Toba around 70,000 BCE. It is believed to have caused a global volcanic winter and cooling of the climate, and subsequently to have led to a genetic bottleneck in human evolution, though this is still debated.[84] The 1815 eruption of Mount Tambora and the 1883 eruption of Krakatoa were among the largest in recorded history. The former caused 92,000 deaths and created an umbrella of volcanic ash that spread and blanketed parts of the archipelago, leaving much of the Northern Hemisphere without a summer in 1816.[85] The latter produced the loudest sound in recorded history and caused 36,000 deaths from the eruption itself and the resulting tsunamis, with significant additional effects around the world years after the event.[86] Recent catastrophic disasters due to seismic activity include the 2004 Indian Ocean earthquake and the 2006 Yogyakarta earthquake.
Indonesia's size, tropical climate, and archipelagic geography support one of the world's highest levels of biodiversity.[87] Its flora and fauna is a mixture of Asian and Australasian species.[88] The islands of the Sunda Shelf (Sumatra, Java, Borneo, and Bali) were once linked to mainland Asia, and have a wealth of Asian fauna. Large species such as the Sumatran tiger, rhinoceros, orangutan, Asian elephant, and leopard were once abundant as far east as Bali, but numbers and distribution have dwindled drastically. Having been long separated from the continental landmasses, Sulawesi, Nusa Tenggara, and Maluku have developed their unique flora and fauna.[89] Papua was part of the Australian landmass and is home to a unique fauna and flora closely related to that of Australia, including over 600 bird species.[90] Forests cover approximately 70% of the country.[91] However, the forests of the smaller, and more densely populated Java, have largely been removed for human habitation and agriculture.
Indonesia is second only to Australia in terms of total endemic species, with 36% of its 1,531 species of bird and 39% of its 515 species of mammal being endemic.[92] Tropical seas surround Indonesia's 80,000 kilometres (50,000 miles) of coastline. The country has a range of sea and coastal ecosystems, including beaches, dunes, estuaries, mangroves, coral reefs, seagrass beds, coastal mudflats, tidal flats, algal beds, and small island ecosystems.[14] Indonesia is one of the Coral Triangle countries and has the world's greatest diversity of coral reef fish, with more than 1,650 species in eastern Indonesia alone.[93]
British naturalist Alfred Russel Wallace described a dividing line (Wallace Line) between the distribution of Indonesia's Asian and Australasian species.[94] It runs roughly north–south along the edge of the Sunda Shelf, between Kalimantan and Sulawesi, and along the deep Lombok Strait, between Lombok and Bali. Flora and fauna on the west of the line are generally Asian, while east from Lombok they are increasingly Australian until the tipping point at the Weber Line. In his 1869 book, The Malay Archipelago, Wallace described numerous species unique to the area.[95] The region of islands between his line and New Guinea is now termed Wallacea.[94]
Indonesia's large and growing population and rapid industrialisation present serious environmental issues. They are often given a lower priority due to high poverty levels and weak, under-resourced governance.[96] Problems include the destruction of peatlands, large-scale illegal deforestation—and the resulting Southeast Asian haze—over-exploitation of marine resources, air pollution, garbage management, and reliable water and wastewater services.[96] These issues contribute to Indonesia's poor ranking (number 116 out of 180 countries) in the 2020 Environmental Performance Index. The report also indicates that Indonesia's performance is generally below average in both regional and global context.[97]
Expansion of the palm oil industry, which requires significant changes to natural ecosystems, is the primary factor behind much of Indonesia's deforestation.[98] While it can generate wealth for local communities, it may degrade ecosystems and cause social problems.[99] This situation makes Indonesia the world's largest forest-based emitter of greenhouse gases.[100] It also threatens the survival of indigenous and endemic species. The International Union for Conservation of Nature (IUCN) identified 140 species of mammals as threatened, and 15 as critically endangered, including the Bali starling,[101] Sumatran orangutan,[102] and Javan rhinoceros.[103]
Several studies consider Indonesia to be at severe risk from the projected effects of climate change.[104] They predict that unreduced emissions would see an average temperature rise of around 1 °C (2 °F) by mid-century,[105][106] amounting to almost double the frequency of scorching days (above 35 °C or 95 °F) per year by 2030. That figure is predicted to rise further by the end of the century.[105] It would raise the frequency of drought and food shortages, having an impact on precipitation and the patterns of wet and dry seasons, the basis of Indonesia's agricultural system.[106] It would also encourage diseases and increases in wildfires, which threaten the country's enormous rainforest.[106] Rising sea levels, at current rates, would result in tens of millions of households being at risk of submersion by mid-century.[107] A majority of Indonesia's population lives in low-lying coastal areas,[106] including the capital Jakarta, the fastest-sinking city in the world.[108] Impoverished communities would likely be affected the most by climate change.[109]
Indonesia is a republic with a presidential system. Following the fall of the New Order in 1998, political and governmental structures have undergone sweeping reforms, with four constitutional amendments revamping the executive, legislative and judicial branches.[110] Chief among them is the delegation of power and authority to various regional entities while remaining a unitary state.[111] The President of Indonesia is the head of state and head of government, commander-in-chief of the Indonesian National Armed Forces (Tentara Nasional Indonesia, TNI), and the director of domestic governance, policy-making, and foreign affairs. The president may serve a maximum of two consecutive five-year terms.[112]
The highest representative body at the national level is the People's Consultative Assembly (Majelis Permusyawaratan Rakyat, MPR). Its main functions are supporting and amending the constitution, inaugurating and impeaching the president,[113][114] and formalising broad outlines of state policy. The MPR comprises two houses; the People's Representative Council (Dewan Perwakilan Rakyat, DPR), with 575 members, and the Regional Representative Council (Dewan Perwakilan Daerah, DPD), with 136.[115] The DPR passes legislation and monitors the executive branch. Reforms since 1998 have markedly increased its role in national governance,[110] while the DPD is a new chamber for matters of regional management.[116][114]
Most civil disputes appear before the State Court (Pengadilan Negeri); appeals are heard before the High Court (Pengadilan Tinggi). The Supreme Court of Indonesia (Mahkamah Agung) is the highest level of the judicial branch; it hears final cassation appeals and conducts case reviews. Other courts include the Constitutional Court (Mahkamah Konstitusi), which hears constitutional and political matters, and the Religious Court (Pengadilan Agama), which deals with codified Islamic Law (sharia) cases.[117] Additionally, the Judicial Commission (Komisi Yudisial) monitors the performance of judges.
Since 1999, Indonesia has had a multi-party system. In all legislative elections since the fall of the New Order, no political party has managed to win an overall majority of seats. The Indonesian Democratic Party of Struggle (PDI-P), which secured the most votes in the 2019 elections, is the party of the incumbent President, Joko Widodo.[118] Other notable parties include the Party of the Functional Groups (Golkar), the Great Indonesia Movement Party (Gerindra), the Democratic Party, and the Prosperous Justice Party (PKS). The 2019 elections resulted in nine political parties in the DPR, with a parliamentary threshold of 4% of the national vote.[119] The first general election was held in 1955 to elect members of the DPR and the Constitutional Assembly (Konstituante). At the national level, Indonesians did not elect a president until 2004. Since then, the president is elected for a five-year term, as are the party-aligned members of the DPR and the non-partisan DPD.[115][110] Beginning with 2015 local elections, elections for governors and mayors have occurred on the same date. As of 2019, both legislative and presidential elections coincide.
Indonesia has several levels of subdivisions. The first level is that of the provinces, with five out of a total of 34 having a special status. Each has a legislature (Dewan Perwakilan Rakyat Daerah, DPRD) and an elected governor. This number has evolved, with the most recent change being the split of North Kalimantan from East Kalimantan in 2012.[120] The second level is that of the regencies (kabupaten) and cities (kota), led by regents (bupati) and mayors (walikota) respectively and a legislature (DPRD Kabupaten/Kota). The third level is that of the districts (kecamatan, distrik in Papua, or kapanewon and kemantren in Yogyakarta), and the fourth is of the villages (either desa, kelurahan, kampung, nagari in West Sumatra, or gampong in Aceh).
The village is the lowest level of government administration. It is divided into several community groups (rukun warga, RW), which are further divided into neighbourhood groups (rukun tetangga, RT). In Java, the village (desa) is divided into smaller units called dusun or dukuh (hamlets), which are the same as RW. Following the implementation of regional autonomy measures in 2001, regencies and cities have become chief administrative units, responsible for providing most government services. The village administration level is the most influential on a citizen's daily life and handles matters of a village or neighbourhood through an elected village chief (lurah or kepala desa).
Aceh, Jakarta, Yogyakarta, Papua, and West Papua have greater legislative privileges and a higher degree of autonomy from the central government than the other provinces. A conservative Islamic territory, Aceh has the right to create some aspects of an independent legal system implementing sharia.[121] Yogyakarta is the only pre-colonial monarchy legally recognised in Indonesia, with the positions of governor and vice governor being prioritised for descendants of the Sultan of Yogyakarta and Paku Alam, respectively.[122] Papua and West Papua are the only provinces where the indigenous people have privileges in their local government.[123] Jakarta is the only city granted a provincial government due to its position as the capital of Indonesia.[124]
Indonesia maintains 132 diplomatic missions abroad, including 95 embassies.[125] The country adheres to what it calls a "free and active" foreign policy, seeking a role in regional affairs in proportion to its size and location but avoiding involvement in conflicts among other countries.[126]
Indonesia was a significant battleground during the Cold War. Numerous attempts by the United States and the Soviet Union,[127][128] and to some degree China,[129] to gain influence culminated in the 1965 coup attempt and subsequent upheaval that led to a reorientation of foreign policy. Quiet alignment with the West while maintaining a non-aligned stance has characterised Indonesia's foreign policy since then.[130] Today, it maintains close relations with its neighbours and is a founding member of the Association of Southeast Asian Nations (ASEAN) and the East Asia Summit. In common with most of the Muslim world, Indonesia does not have diplomatic relations with Israel and has actively supported Palestine. However, observers have pointed out that Indonesia has ties with Israel, albeit discreetly.[131]
Indonesia has been a member of the United Nations since 1950 and was a founding member of the Non-Aligned Movement (NAM) and the Organisation of Islamic Cooperation (OIC).[132] Indonesia is a signatory to the ASEAN Free Trade Area agreement, the Cairns Group, and the World Trade Organization (WTO), and an occasional member of OPEC.[133] During the Indonesia–Malaysia confrontation, Indonesia withdrew from the UN due to the latter's election to the United Nations Security Council, although it returned 18 months later. It marked the first time in UN history that a member state had attempted a withdrawal.[134] Indonesia has been a humanitarian and development aid recipient since 1966,[135][136][137] and recently, the country has expressed interest in becoming an aid donor.[138]
Indonesia's Armed Forces (TNI) include the Army (TNI–AD), Navy (TNI–AL, which includes Marine Corps), and Air Force (TNI–AU).[139] The army has about 400,000 active-duty personnel. Defence spending in the national budget was 0.7% of GDP in 2018,[140] with controversial involvement of military-owned commercial interests and foundations.[141] The Armed Forces were formed during the Indonesian National Revolution when it undertook guerrilla warfare along with informal militia. Since then, territorial lines have formed the basis of all TNI branches' structure, aimed at maintaining domestic stability and deterring foreign threats.[142] The military has possessed a strong political influence since its founding, which peaked during the New Order. Political reforms in 1998 included the removal of the TNI's formal representation from the legislature. Nevertheless, its political influence remains, albeit at a reduced level.[143]
Since independence, the country has struggled to maintain unity against local insurgencies and separatist movements.[144] Some, notably in Aceh and Papua, have led to armed conflict and subsequent allegations of human rights abuses and brutality from all sides.[145][146] The former was resolved peacefully in 2005,[70] while the latter continues, amid a significant, albeit imperfect, implementation of regional autonomy laws and a reported decline in the levels of violence and human rights abuses since 2004.[147] Other engagements of the army include the campaign in Netherlands New Guinea to incorporate the territory into Indonesia, the Konfrontasi to oppose the creation of Malaysia, the mass killings of PKI members, and the invasion of East Timor, which remains Indonesia's largest military operation.[148][149]
Indonesia has a mixed economy in which both the private sector and government play vital roles.[150] As the only G20 member state in Southeast Asia,[151] the country has the largest economy in the region and is classified as a newly industrialised country. As of 2019[update], it is the world's 16th largest economy by nominal GDP and 7th in terms of GDP at PPP, estimated to be US$1.100 trillion and US$3.740 trillion respectively. Per capita GDP in PPP is US$14,020, while nominal per capita GDP is US$4,120. The debt-to-GDP ratio is 29.2%.[152] Services form the economy's largest sector, accounting for 43.4% of GDP (2018), followed by industry (39.7%) and agriculture (12.8%).[153] Since 2009, the service sector has employed more people than the other sectors, accounting for 47.7% of the total labour force, followed by agriculture (30.2%) and industry (21.9%).[154]
Over time, the structure of the economy has changed considerably.[155] Historically, it has been weighted heavily towards agriculture, reflecting both its stage of economic development and government policies in the 1950s and 1960s to promote agricultural self-sufficiency.[155] A gradual process of industrialisation and urbanisation began in the late 1960s and accelerated in the 1980s as falling oil prices saw the government focus on diversifying away from oil exports and towards manufactured exports.[155] This development continued throughout the 1980s and into the next decade despite the 1990 oil price shock, during which the GDP rose at an average rate of 7.1%. As a result, the official poverty rate fell from 60% to 15%.[156] Reduction of trade barriers from the mid-1980s made the economy more globally integrated. The growth, however, ended with the 1997 Asian financial crisis, which affected the economy severely. It caused a real GDP contraction by 13.1% in 1998, and inflation reached 72%. The economy reached its low point in mid-1999 with only 0.8% real GDP growth.
Relatively steady inflation[157] and an increase in GDP deflator and the Consumer Price Index[158] have contributed to strong economic growth in recent years. Since 2007, annual growth has accelerated to between 4% and 6% as a result of improvement in the banking sector and domestic consumption,[159] helping Indonesia weather the 2008–2009 Great Recession.[160] In 2011, the country regained the investment grade rating it had lost in 1997.[161] As of 2019[update], 9.41% of the population lived below the poverty line, and the official open unemployment rate was 5.28%.[162]
Indonesia has abundant natural resources like oil and natural gas, coal, tin, copper, gold, and nickel, while agriculture produces rice, palm oil, tea, coffee, cacao, medicinal plants, spices, and rubber. These commodities make up a large portion of the country's exports, with palm oil and coal briquettes as the leading export commodities. In addition to refined and crude petroleum as the main imports, telephones, vehicle parts and wheat cover the majority of additional imports.[163] China, the United States, Japan, Singapore, India, Malaysia, South Korea and Thailand are Indonesia's principal export markets and import partners.[164]
Indonesia's transport system has been shaped over time by the economic resource base of an archipelago, and the distribution of its 250 million people highly concentrated on Java.[165] All transport modes play a role in the country's transport system and are generally complementary rather than competitive. In 2016, the transport sector generated about 5.2% of GDP.[166]
The road transport system is predominant, with a total length of 542,310 kilometres (336,980 miles) as of 2018[update].[167] Jakarta has the longest bus rapid transit system in the world, boasting some 251.2 kilometres (156.1 miles) across 13 corridors and ten cross-corridor routes.[168] Rickshaws such as the bajaj and becak and share taxis such as the Angkot and Metromini are a regular sight in the country. Most railways are in Java, used for both freight and passenger transport, such as local commuter rail services complementing the inter-city rail network in several cities. In the late 2010s, Jakarta and Palembang became the first cities in Indonesia to have rapid transit systems, with more planned for other cities in the future.[169] In 2015, the government announced a plan to build a high-speed rail line, which would be a first in Southeast Asia.[170]
Indonesia's largest airport, Soekarno–Hatta International Airport is the busiest in the Southern Hemisphere, serving 66 million passengers in 2018.[171] Ngurah Rai International Airport and Juanda International Airport are the country's second- and third-busiest airport respectively. Garuda Indonesia, the country's flag carrier since 1949, is one of the world's leading airlines and a member of the global airline alliance SkyTeam. Port of Tanjung Priok is the busiest and most advanced Indonesian port,[172] handling more than 50% of Indonesia's trans-shipment cargo traffic.
In 2017, Indonesia was the world's 9th largest energy producer with 4,200 terawatt-hours (14.2 quadrillion British thermal units), and the 15th largest energy consumer, with 2,100 terawatt-hours (7.1 quadrillion British thermal units).[173] The country has substantial energy resources, including 22 billion barrels (3.5 billion cubic metres) of conventional oil and gas reserves (of which about 4 billion barrels are recoverable), 8 billion barrels of oil-equivalent of coal-based methane (CBM) resources, and 28 billion tonnes of recoverable coal.[174] While reliance on domestic coal and imported oil has increased,[175] Indonesia has seen progress in renewable energy with hydropower being the most abundant source. Furthermore, the country has the potential for geothermal, solar, wind, biomass and ocean energy.[176] Indonesia has set out to achieve 23% use of renewable energy by 2025 and 31% by 2050.[175] As of 2015[update], Indonesia's total national installed power generation capacity stands at 55,528.51 MW.[177]
The country's largest dam, Jatiluhur, has several purposes, including hydroelectric power generation, water supply, flood control, irrigation and aquaculture. The earth-fill dam is 105 m (344 ft) high and impounds a reservoir of 3.0 billion m3 (2.4 million acre⋅ft). It helps to supply water to Jakarta and to irrigate 240,000 ha (590,000 acres) of rice fields,[178] and it has an installed capacity of 186.5 MW that feeds into the Java grid managed by the State Electricity Company (Perusahaan Listrik Negara, PLN).
Indonesia's expenditure on science and technology is relatively low, at less than 0.1% of GDP (2017).[179] Historical examples of scientific and technological developments include the paddy cultivation technique terasering, which is common in Southeast Asia, and the pinisi boats by the Bugis and Makassar people.[180] In the 1980s, Indonesian engineer Tjokorda Raka Sukawati invented a road construction technique named Sosrobahu that allows the construction of long stretches of flyovers above existing main roads with minimum traffic disruption. It later became widely used in several countries.[181] The country is also an active producer of passenger trains and freight wagons with its state-owned company, the Indonesian Railway Industry (INKA), and has exported trains abroad.[182]
Indonesia has a long history of developing military and small commuter aircraft, and it is the only country in Southeast Asia that builds and produces aircraft. Through its state-owned company, Indonesian Aerospace (PT. Dirgantara Indonesia), Indonesia has provided components for Boeing and Airbus. The company also collaborated with EADS CASA of Spain to develop the CN-235, which has seen use by several countries.[183] Former President B. J. Habibie played a vital role in this achievement.[184] Indonesia has also joined the South Korean programme to manufacture the fifth-generation jet fighter KAI KF-X.[185]
Indonesia has a space programme and space agency, the National Institute of Aeronautics and Space (Lembaga Penerbangan dan Antariksa Nasional, LAPAN). In the 1970s, Indonesia became the first developing country to operate a satellite system called Palapa,[186] a series of communication satellites owned by Indosat Ooredoo. The first satellite, PALAPA A1 was launched on 8 July 1976 from the Kennedy Space Center in Florida, United States.[187] As of 2019[update], Indonesia has launched 18 satellites for various purposes,[188] and LAPAN has expressed a desire to put satellites in orbit with native launch vehicles by 2040.[189]
Tourism contributed around US$19.7 billion to GDP in 2019. In 2018, Indonesia received 15.8 million visitors, a growth of 12.5% from the previous year, with an average receipt of US$967 per visitor.[191][192] China, Singapore, Malaysia, Australia, and Japan are the top five sources of visitors to Indonesia. Since 2011, Wonderful Indonesia has been the slogan of the country's international marketing campaign to promote tourism.[193]
Nature and culture are prime attractions of Indonesian tourism. The former can boast a unique combination of a tropical climate, a vast archipelago, and a long stretch of beaches, and the latter complements those with a rich cultural heritage reflecting Indonesia's dynamic history and ethnic diversity. Indonesia has a well-preserved natural ecosystem with rain forests that stretch over about 57% of Indonesia's land (225 million acres). Forests on Sumatra and Kalimantan are examples of popular destinations, such as the Orangutan wildlife reserve. Moreover, Indonesia has one of the world's longest coastlines, measuring 54,716 kilometres (33,999 mi). The ancient Borobudur and Prambanan temples, as well as Toraja and Bali with its traditional festivities, are some of the popular destinations for cultural tourism.[195]
Indonesia has nine UNESCO World Heritage Sites, including the Komodo National Park and the Sawahlunto Coal Mine; and a further 19 in a tentative list that includes Bunaken National Park and Raja Ampat Islands.[196] Other attractions include the specific points in Indonesian history, such as the colonial heritage of the Dutch East Indies in the old towns of Jakarta and Semarang, and the royal palaces of Pagaruyung, Ubud, and Yogyakarta.[195]
The 2010 census recorded Indonesia's population as 237.6 million, the fourth largest in the world, with high population growth at 1.9%.[197] Java is the world's most populous island,[198] where 58% of the country's population lives.[199] The population density is 138 people per km2 (357 per sq mi), ranking 88th in the world,[200] although Java has a population density of 1,067 people per km2 (2,435 per sq mi). In 1961, the first post-colonial census recorded a total of 97 million people.[201] It is expected to grow to around 295 million by 2030 and 321 million by 2050.[202] The country currently possesses a relatively young population, with a median age of 30.2 years (2017 estimate).[74]
The spread of the population is uneven throughout the archipelago with a varying habitat and level of development, ranging from the megacity of Jakarta to uncontacted tribes in Papua.[203] As of 2010, about 49.7% of the population lives in urban areas.[204] Jakarta is the country's primate city and the second-most populous urban area in the world with over 34 million residents.[205] About 8 million Indonesians live overseas; most settled in Malaysia, the Netherlands, Saudi Arabia, the United Arab Emirates, Hong Kong, Singapore, the United States, and Australia.[206]
Indonesia is an ethnically diverse country, with around 300 distinct native ethnic groups.[207] Most Indonesians are descended from Austronesian peoples whose languages had origins in Proto-Austronesian, which possibly originated in what is now Taiwan. Another major grouping is the Melanesians, who inhabit eastern Indonesia (the Maluku Islands and Western New Guinea).[25][208][209]
The Javanese are the largest ethnic group, constituting 40.2% of the population,[4] and are politically dominant.[210] They are predominantly located in the central to eastern parts of Java, with sizable numbers in most other provinces. The Sundanese, Malay, Batak, Madurese, Minangkabau and Buginese are the next largest groups in the country.[b] A sense of Indonesian nationhood exists alongside strong regional identities.[211]
The country's official language is Indonesian, a variant of Malay based on its prestige dialect, which for centuries had been the lingua franca of the archipelago. It was promoted by nationalists in the 1920s and achieved official status under the name Bahasa Indonesia in 1945.[212] As a result of centuries-long contact with other languages, it is rich in local and foreign influences, including from Javanese, Sundanese, Minangkabau, Hindi, Sanskrit, Chinese, Arabic, Dutch, Portuguese and English.[213][214][215] Nearly every Indonesian speaks the language due to its widespread use in education, academics, communications, business, politics, and mass media. Most Indonesians also speak at least one of more than 700 local languages,[3] often as their first language. Most belong to the Austronesian language family, while there are over 270 Papuan languages spoken in eastern Indonesia.[3] Of these, Javanese is the most widely spoken.[74]
In 1930, Dutch and other Europeans (Totok), Eurasians, and derivative people like the Indos, numbered 240,000 or 0.4% of the total population.[216] Historically, they constituted only a tiny fraction of the native population and continue to do so today. Despite the Dutch presence for almost 350 years, the Dutch language never had a substantial number of speakers or official status.[217] The small minorities that can speak it or Dutch-based creole languages fluently are the aforementioned ethnic groups and descendants of Dutch colonisers. Today, there is some degree of fluency by either educated members of the oldest generation or legal professionals,[218] as specific law codes are still only available in Dutch.[219]
Religion in Indonesia (2018)[5]
While the constitution stipulates religious freedom,[220][114] the government officially recognises only six religions: Islam, Protestantism, Roman Catholicism, Hinduism, Buddhism, and Confucianism;[221][222] with indigenous religions only partly acknowledged.[222] Indonesia is the world's most populous Muslim-majority country[223] with 227 million adherents in 2017, with the majority being Sunnis (99%).[224] The Shias and Ahmadis respectively constitute 1% (1–3 million) and 0.2% (200,000–400,000) of the Muslim population.[222][225] Almost 10% of Indonesians are Christians, while the rest are Hindus, Buddhists, and others. Most Hindus are Balinese,[226] and most Buddhists are Chinese Indonesians.[227]
The natives of the Indonesian archipelago originally practised indigenous animism and dynamism, beliefs that are common to Austronesian people.[228] They worshipped and revered ancestral spirit, and believed that supernatural spirits (hyang) might inhabit certain places such as large trees, stones, forests, mountains, or sacred sites.[228] Examples of Indonesian native belief systems include the Sundanese Sunda Wiwitan, Dayak's Kaharingan, and the Javanese Kejawèn. They have had a significant impact on how other faiths are practised, evidenced by a large proportion of people—such as the Javanese abangan, Balinese Hindus, and Dayak Christians—practising a less orthodox, syncretic form of their religion.[229]
Hindu influences reached the archipelago as early as the first century CE.[230] The Sundanese kingdom of Salakanagara in western Java around 130 was the first historically recorded Indianised kingdom in the archipelago.[231] Buddhism arrived around the 6th century,[232] and its history in Indonesia is closely related to that of Hinduism, as some empires based on Buddhism had their roots around the same period. The archipelago has witnessed the rise and fall of powerful and influential Hindu and Buddhist empires such as Majapahit, Sailendra, Srivijaya, and Mataram. Though no longer a majority, Hinduism and Buddhism remain defining influences in Indonesian culture.
Islam was introduced by Sunni traders of the Shafi'i fiqh, as well as Sufi traders from the Indian subcontinent and southern Arabian peninsula as early as the 8th century CE.[233][234] For the most part, Islam overlaid and mixed with existing cultural and religious influences that resulted in a distinct form of Islam.[33][235] Trade, missionary works such as by the Wali Sanga and Chinese explorer Zheng He, and military campaigns by several sultanates helped accelerate the spread of the religion.[236][237] By the end of the 16th century, Islam had supplanted Hinduism and Buddhism as the dominant religion of Java and Sumatra.
Catholicism was brought by Portuguese traders and missionaries such as the Jesuit Francis Xavier, who visited and baptised several thousand locals.[238][239] Its spread faced difficulty due to the VOC policy of banning the religion and Dutch hostility stemming from the Eighty Years' War against Catholic Spain's rule. Protestantism is mostly a result of Calvinist and Lutheran missionary efforts during the Dutch colonial era.[240][241][242] Although these are the most common branches, a multitude of other denominations can be found elsewhere in the country.[243]
There was a sizable Jewish presence in the archipelago until 1945, mostly Dutch and some Baghdadi Jews. Since most have left after Indonesia proclaimed independence, Judaism was never accorded official status, and only a tiny number of Jews remain today, mostly in Jakarta and Surabaya.[244] At the national and local level, Indonesia's political leadership and civil society groups have played a crucial role in interfaith relations, both positively and negatively. The invocation of the first principle of Indonesia's philosophical foundation, Pancasila (the belief in the one and only God) often serves as a reminder of religious tolerance,[245] though instances of intolerance have occurred. An overwhelming majority of Indonesians consider religion to be essential,[246] and its role is present in almost all aspects of society, including politics, education, marriage, and public holidays.[247][248]
Education is compulsory for 12 years.[249] Parents can choose between state-run, non-sectarian schools or private or semi-private religious (usually Islamic) schools, supervised by the ministries of Education and Religion, respectively.[250] Private international schools that do not follow the national curriculum are also available. The enrolment rate is 90% for primary education, 76% for secondary education, and 24% for tertiary education (2015). The literacy rate is 95% (2016), and the government spends about 3.6% of GDP (2015) on education.[251] In 2018, there were more than 4,500 higher educational institutions in Indonesia.[252] The top universities are the Java-based University of Indonesia, Bandung Institute of Technology and Gadjah Mada University.[252] Andalas University is pioneering the establishment of a leading university outside of Java.[253]
Government expenditure on healthcare was about 3.3% of GDP in 2016.[254] As part of an attempt to achieve universal health care, the government launched the National Health Insurance (Jaminan Kesehatan Nasional, JKN) in 2014, which provides health care to citizens.[255] It includes coverage for a range of services from public providers as well as private firms that have opted to join the scheme. In recent decades, there have been remarkable improvements such as rising life expectancy (from 63 in 1990 to 71 in 2012) and declining child mortality (from 84 deaths per 1,000 births in 1990 to 27 deaths in 2015).[256] Nevertheless, Indonesia continues to face challenges that include maternal and child health, low air quality, malnutrition, a high rate of smoking, and infectious diseases.[257]
Nearly 80% of Indonesia's population lives in the western parts of the archipelago,[258] but they are growing at a slower pace than the rest of the country. This situation creates a gap in wealth, unemployment rate, and health between densely populated islands and economic centres (such as Sumatra and Java) and sparsely populated, disadvantaged areas (such as Maluku and Papua).[259][260] Racism, especially against Chinese Indonesians since the colonial period, is still prevalent today.[261][262] There has been a marked increase of religious intolerance since 1998, with the most recent high-profile case being that of Chinese Christian former governor of Jakarta, Basuki Tjahaja Purnama.[263] LGBT issues have recently gained attention in Indonesia.[264] While homosexuality is legal in most parts of the country, it is illegal in Aceh and South Sumatra.[265] LGBT people and activists have regularly faced fierce opposition, intimidation, and discrimination launched even by authorities.[266]
The cultural history of the Indonesian archipelago spans more than two millennia. Influences from the Indian subcontinent, mainland China, the Middle East, Europe,[267][268] and the Austronesian peoples have historically shaped the cultural, linguistic and religious make-up of the archipelago. As a result, modern-day Indonesia has a multicultural, multilingual and multi-ethnic society,[3][207] with a complex cultural mixture that differs significantly from the original indigenous cultures. Indonesia currently holds ten items of UNESCO's Intangible Cultural Heritage, including a wayang puppet theatre, kris, batik,[269] pencak silat, angklung, and the three genres of traditional Balinese dance.[270]
|
146 |
+
|
147 |
+
Indonesian arts include both age-old art forms developed through centuries and recently developed contemporary art. Despite often displaying local ingenuity, Indonesian arts have absorbed foreign influences, most notably from India, the Arab world, China and Europe, as a result of contacts and interactions facilitated, and often motivated, by trade.[271] Painting is an established and developed art in Bali, where its people are famed for their artistry. Their painting tradition started as classical Kamasan or Wayang-style visual narrative, derived from visual art discovered on candi bas-reliefs in eastern Java.[272]
|
148 |
+
|
149 |
+
There have been numerous discoveries of megalithic sculptures in Indonesia.[273] Subsequently, tribal art flourished within the cultures of the Nias, Batak, Asmat, Dayak and Toraja.[274][275] Wood and stone are common media for sculpting among these tribes. Between the 8th and 15th centuries, Javanese civilisation developed a refined art of stone sculpture and architecture influenced by Hindu-Buddhist Dharmic civilisation. The temples of Borobudur and Prambanan are among the most famous examples of the practice.[276]
|
150 |
+
|
151 |
+
As with the arts, Indonesian architecture has absorbed foreign influences that have brought cultural changes and had a profound effect on building styles and techniques. The most dominant influence has traditionally been Indian; however, Chinese, Arab, and European influences have also been significant. Traditional carpentry, masonry, stone and woodwork techniques and decorations have thrived in vernacular architecture, and a number of traditional house (rumah adat) styles have developed. The traditional houses and settlements in the country vary by ethnic group, and each has a specific custom and history.[277] Examples include Toraja's Tongkonan, Minangkabau's Rumah Gadang and Rangkiang, the Javanese-style Pendopo pavilion with its Joglo-style roof, Dayak longhouses, various Malay houses, Balinese houses and temples, and different forms of rice barns (lumbung).
|
152 |
+
|
153 |
+
The music of Indonesia predates historical records. Various indigenous tribes incorporate chants and songs accompanied by musical instruments in their rituals. Angklung, kacapi suling, gong, gamelan, talempong, kulintang, and sasando are examples of traditional Indonesian instruments. The diverse world of Indonesian music genres is the result of the musical creativity of its people and subsequent cultural encounters with foreign influences. These include gambus and qasida from the Middle East,[278] keroncong from Portugal,[279] and dangdut—one of the most popular music genres in Indonesia—with notable Hindi influence as well as Malay orchestras.[280] Today, the Indonesian music industry enjoys both nationwide popularity and regional popularity in Malaysia, Singapore, and Brunei, owing to the common culture and mutual intelligibility of Indonesian and Malay.
|
154 |
+
|
155 |
+
Indonesian dances have a diverse history, with more than 3,000 original dances. Scholars believe that they had their beginnings in rituals and religious worship.[281] Examples include war dances, dances of witch doctors, and dances to call for rain or accompany agricultural rituals such as Hudoq. Indonesian dances derive their influences from the archipelago's prehistoric and tribal, Hindu-Buddhist, and Islamic periods. Recently, modern dances and urban teen dances have gained popularity due to the influence of Western culture, as well as those of Japan and South Korea to some extent. Traditional dances, however, such as the Javanese, Sundanese, Minang, Balinese, and Saman dances, continue to be a living and dynamic tradition.
|
156 |
+
|
157 |
+
Indonesia has various styles of clothing as a result of its long and rich cultural history. The national costume has its origins in the indigenous culture of the country and traditional textile traditions. The Javanese Batik and Kebaya[282] are arguably Indonesia's most recognised national costumes, though they have Sundanese and Balinese origins as well.[283] Each province has a representative traditional attire and dress,[267] such as the Ulos of the Batak from North Sumatra; the Songket of the Malay and Minangkabau from Sumatra; and the Ikat of the Sasak from Lombok. People wear national and regional costumes during traditional weddings, formal ceremonies, music performances, and government and official occasions,[283] and these vary from traditional to modern attire.
|
158 |
+
|
159 |
+
Wayang, the Javanese, Sundanese, and Balinese shadow puppet theatre, displays mythological legends such as the Ramayana and Mahabharata.[284] Other forms of local drama include the Javanese Ludruk and Ketoprak, the Sundanese Sandiwara, Betawi Lenong,[285][286] and various Balinese dance dramas. They incorporate humour and jest and often involve audiences in their performances.[287] Some theatre traditions also include music, dancing and the silat martial art, such as Randai from the Minangkabau people of West Sumatra. It is usually performed for traditional ceremonies and festivals,[288][289] and is based on semi-historical Minangkabau legends and love stories.[289] Modern performing arts have also developed in Indonesia with their own distinct styles of drama. Notable theatre, dance, and drama troupes such as Teater Koma are famous for portraying social and political satire of Indonesian society.[290]
|
160 |
+
|
161 |
+
The first film produced in the archipelago was Loetoeng Kasaroeng,[291] a silent film by Dutch director L. Heuveldorp. The film industry expanded after independence, with six films made in 1949 rising to 58 in 1955. Usmar Ismail, who made significant imprints in the 1950s and 1960s, is generally considered to be the pioneer of Indonesian films.[292] The latter part of the Sukarno era saw the use of cinema for nationalistic, anti-Western purposes, and foreign films were subsequently banned, while the New Order utilised a censorship code that aimed to maintain social order.[293] Production of films peaked during the 1980s, although it declined significantly in the next decade.[291] Notable films in this period include Pengabdi Setan (1980), Nagabonar (1987), Tjoet Nja' Dhien (1988), Catatan Si Boy (1989), and Warkop's comedy films.
|
162 |
+
|
163 |
+
Independent filmmaking drove a rebirth of the film industry after 1998, with films starting to address previously banned topics such as religion, race, and love.[293] Between 2000 and 2005, the number of films released each year steadily increased.[294] Riri Riza and Mira Lesmana were among the new generation of filmmakers who co-directed Kuldesak (1999), Petualangan Sherina (2000), Ada Apa dengan Cinta? (2002), and Laskar Pelangi (2008). In 2016, Warkop DKI Reborn: Jangkrik Boss Part 1 smashed box office records, becoming the most-watched Indonesian film with 6.8 million tickets sold.[295] Indonesia has held annual film festivals and awards, including the Indonesian Film Festival (Festival Film Indonesia), which has been held intermittently since 1955. It hands out the Citra Award, the film industry's most prestigious award. From 1973 to 1992, the festival was held annually and then discontinued until its revival in 2004.
|
164 |
+
|
165 |
+
Media freedom increased considerably after the fall of the New Order, during which the Ministry of Information monitored and controlled domestic media and restricted foreign media.[296] The television market includes several national commercial networks and provincial networks that compete with public TVRI, which held a monopoly on TV broadcasting from 1962 to 1989. By the early 21st century, the improved communications system had brought television signals to every village and people can choose from up to 11 channels.[297] Private radio stations carry news bulletins while foreign broadcasters supply programmes. The number of printed publications has increased significantly since 1998.[297]
|
166 |
+
|
167 |
+
Like other developing countries, Indonesia began development of the Internet in the early 1990s. Its first commercial Internet service provider, PT. Indo Internet, began operation in Jakarta in 1994.[298] The country had 171 million Internet users in 2018, with a penetration rate that keeps increasing annually.[299] Most users are between the ages of 15 and 19 and rely primarily on mobile phones for access, which outnumber both laptops and desktop computers.[300]
|
168 |
+
|
169 |
+
The oldest evidence of writing in the Indonesian archipelago is a series of Sanskrit inscriptions dated to the 5th century. Many of Indonesia's peoples have firmly rooted oral traditions, which help to define and preserve their cultural identities.[302] In written poetry and prose, several traditional forms dominate, mainly syair, pantun, gurindam, hikayat and babad. Examples of these forms include Syair Abdul Muluk, Hikayat Hang Tuah, Sulalatus Salatin, and Babad Tanah Jawi.[303]
|
170 |
+
|
171 |
+
Early modern Indonesian literature originates in Sumatran tradition.[304][305] Literature and poetry flourished during the decades leading up to and after independence. Balai Pustaka, the government bureau for popular literature, was instituted in 1917 to promote the development of indigenous literature. Many scholars consider the 1950s and 1960s to be the Golden Age of Indonesian Literature.[306] The style and characteristics of modern Indonesian literature vary according to the dynamics of the country's political and social landscape,[306] most notably the war of independence in the second half of the 1940s and the anti-communist mass killings in the mid-1960s.[307] Notable literary figures of the modern era include Multatuli, Mohammad Yamin, Merari Siregar, Marah Roesli, Pramoedya Ananta Toer, and Ayu Utami.
|
172 |
+
|
173 |
+
Indonesian cuisine is one of the most diverse, vibrant, and colourful in the world, full of intense flavour.[308] Many regional cuisines exist, often based upon indigenous culture and foreign influences such as Chinese, European, Middle Eastern, and Indian precedents.[309] Rice is the leading staple food and is served with side dishes of meat and vegetables. Spices (notably chilli), coconut milk, fish and chicken are fundamental ingredients.[310]
|
174 |
+
|
175 |
+
Popular dishes such as nasi goreng, gado-gado, sate, and soto are prevalent throughout the country and are considered national dishes. The Ministry of Tourism, however, chose tumpeng as the official national dish in 2014, describing it as binding the diversity of various culinary traditions.[311] Other popular dishes include rendang, one of the many Padang cuisines along with dendeng and gulai. In 2017, rendang was chosen as the "World's Most Delicious Food" in a CNN Travel readers' choice poll.[312] Another fermented food is oncom, which is similar in some ways to tempeh but uses a variety of bases (not only soy) and is created by different fungi; it is particularly popular in West Java.
|
176 |
+
|
177 |
+
Sports in Indonesia are generally male-oriented, and spectator sports are often associated with illegal gambling.[313] Badminton and football are the most popular sports. Indonesia is one of only five countries that have won the Thomas and Uber Cup, the world team championships of men's and women's badminton. Along with weightlifting, badminton contributes the most to Indonesia's Olympic medal tally. Liga 1 is the country's premier football club league. On the international stage, Indonesia has experienced limited success despite being the first Asian team to participate in the FIFA World Cup, in 1938 as the Dutch East Indies.[314] At the continental level, Indonesia won the bronze medal at the 1958 Asian Games. Indonesia first appeared in the AFC Asian Cup in 1996 and qualified for the next three tournaments, but failed to progress to the next stage on each occasion.
|
178 |
+
|
179 |
+
Other popular sports include boxing and basketball, which has a long history in Indonesia and was part of the first National Games (Pekan Olahraga Nasional, PON) in 1948.[315] Famous Indonesian boxers include Ellyas Pical, three-time IBF super-flyweight champion; Nico Thomas; Muhammad Rachman; and Chris John.[316] In motorsport, Rio Haryanto became the first Indonesian to compete in Formula One in 2016.[317] Sepak takraw and karapan sapi (bull racing) in Madura are examples of traditional sports in Indonesia. In areas with a history of tribal warfare, mock fighting contests are held, such as caci in Flores and pasola in Sumba. Pencak Silat is an Indonesian martial art that became one of the sporting events of the Southeast Asian Games in 1987, with Indonesia as one of the leading competitors. In Southeast Asia, Indonesia is one of the top sporting powerhouses, having won the Southeast Asian Games ten times since 1977, most recently in 2011.
|
180 |
+
|
181 |
+
Government
|
182 |
+
|
183 |
+
General
|
en/2728.html.txt
ADDED
@@ -0,0 +1,330 @@
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
Hinduism is an Indian religion and dharma, or way of life.[note 1][note 2] It is the world's third-largest religion with over 1.25 billion followers, or 15–16% of the global population, known as Hindus.[web 1][web 2] The word Hindu is an exonym,[1][2] and while Hinduism has been called the oldest religion in the world,[note 3] many practitioners refer to their religion as Sanātana Dharma, "the eternal way" which refers to the idea that its origins lie beyond human history, as revealed in the Hindu texts.[3][4][5][6][note 4] Another, though less fitting,[7] self-designation is Vaidika dharma,[8][9][10][11] the 'dharma related to the Vedas.'[web 3]
|
6 |
+
|
7 |
+
Hinduism includes a range of philosophies, and is linked by shared concepts, recognisable rituals, cosmology, pilgrimage to sacred sites and shared textual resources that discuss theology, philosophy, mythology, Vedic yajna, Yoga, agamic rituals, and temple building, among other topics.[12] Hinduism prescribes the eternal duties, such as honesty, refraining from injuring living beings (ahimsa), patience, forbearance, self-restraint, and compassion, among others.[web 4][13] Prominent themes in Hindu beliefs include the four Puruṣārthas, the proper goals or aims of human life; namely, Dharma (ethics/duties), Artha (prosperity/work), Kama (desires/passions) and Moksha (liberation/freedom from the cycle of death and rebirth/salvation),[14][15] as well as karma (action, intent and consequences) and Saṃsāra (cycle of death and rebirth).[16][17]
|
8 |
+
|
9 |
+
Hindu practices include rituals such as puja (worship) and recitations, japa, meditation (dhyana), family-oriented rites of passage, annual festivals, and occasional pilgrimages. Along with the practice of various Yogas, some Hindus leave their social world and material possessions and engage in lifelong Sannyasa (monasticism) in order to achieve Moksha.[18]
|
10 |
+
|
11 |
+
Hindu texts are classified into Śruti ("heard") and Smṛti ("remembered"), the major scriptures of which are the Vedas, the Upanishads, the Puranas, the Mahabharata, the Ramayana, and the Āgamas.[19][16] There are six āstika schools of Hindu philosophy, who recognise the authority of the Vedas, namely Sankhya, Yoga, Nyaya, Vaisheshika, Mimamsa and Vedanta.[20][21][22]
|
12 |
+
|
13 |
+
While the Puranic chronology presents a genealogy of thousands of years, starting with the Vedic rishis, scholars regard Hinduism as a fusion[note 5] or synthesis[23][note 6] of Brahmanical orthopraxy[note 7] with various Indian cultures,[24][note 8] having diverse roots[25][note 9] and no specific founder.[26] This Hindu synthesis emerged after the Vedic period, between ca. 500[27]–200[28] BCE and c. 300 CE,[27] in the period of the Second Urbanisation and the early classical period of Hinduism, when the Epics and the first Puranas were composed.[27][28] It flourished in the medieval period, with the decline of Buddhism in India.[29]
|
14 |
+
|
15 |
+
Currently, the four largest denominations of Hinduism are Vaishnavism, Shaivism, Shaktism and Smartism.[30] Sources of authority and eternal truths in the Hindu texts play an important role, but there is also a strong Hindu tradition of questioning authority in order to deepen the understanding of these truths and to further develop the tradition.[31] Hinduism is the most widely professed faith in India, Nepal and Mauritius. Significant Hindu communities are found in Southeast Asia, including in Bali, Indonesia,[32] as well as in the Caribbean, North America, Europe, Oceania, Africa, and other regions.[33][34]
|
16 |
+
|
17 |
+
The word Hindū is derived from Indo-Aryan[35]/Sanskrit[36] root Sindhu.[36][37] The Proto-Iranian sound change *s > h occurred between 850–600 BCE, according to Asko Parpola.[38]
|
18 |
+
|
19 |
+
The use of the English term "Hinduism" to describe a collection of practices and beliefs is a fairly recent construction: it was first used by Raja Ram Mohun Roy in 1816–17.[39] The term "Hinduism" was coined in around 1830 by those Indians who opposed British colonialism, and who wanted to distinguish themselves from other religious groups.[40][41][39] Before the British began to categorise communities strictly by religion, Indians generally did not define themselves exclusively through their religious beliefs; instead identities were largely segmented on the basis of locality, language, varna, jāti, occupation and sect.[42]
|
20 |
+
|
21 |
+
The word "Hindu" is much older, and it is believed that it was used as the name for the Indus River in the northwestern part of the Indian subcontinent.[39][36][note 10] According to Gavin Flood, "The actual term Hindu first occurs as a Persian geographical term for the people who lived beyond the river Indus (Sanskrit: Sindhu)",[36] more specifically in the 6th-century BCE inscription of Darius I (550–486 BCE).[43] The term Hindu in these ancient records is a geographical term and did not refer to a religion.[36] Among the earliest known records of 'Hindu' with connotations of religion may be in the 7th-century CE Chinese text Record of the Western Regions by Xuanzang,[43] and 14th-century Persian text Futuhu's-salatin by 'Abd al-Malik Isami.[note 11]
|
22 |
+
|
23 |
+
Thapar states that the word Hindu is found as heptahindu in Avesta – equivalent to Rigvedic sapta sindhu, while hndstn (pronounced Hindustan) is found in a Sasanian inscription from the 3rd century CE, both of which refer to parts of northwestern South Asia.[44] The Arabic term al-Hind referred to the people who live across the River Indus.[45] This Arabic term was itself taken from the pre-Islamic Persian term Hindū, which refers to all Indians. By the 13th century, Hindustan emerged as a popular alternative name of India, meaning the "land of Hindus".[46][note 12]
|
24 |
+
|
25 |
+
The term Hindu was later used occasionally in some Sanskrit texts such as the later Rajataranginis of Kashmir (Hinduka, c. 1450) and some 16th- to 18th-century Bengali Gaudiya Vaishnava texts including Chaitanya Charitamrita and Chaitanya Bhagavata. These texts used it to distinguish Hindus from Muslims who are called Yavanas (foreigners) or Mlecchas (barbarians), with the 16th-century Chaitanya Charitamrita text and the 17th-century Bhakta Mala text using the phrase "Hindu dharma".[47] It was only towards the end of the 18th century that European merchants and colonists began to refer to the followers of Indian religions collectively as Hindus.
|
26 |
+
|
27 |
+
The term Hinduism, then spelled Hindooism, was introduced into the English language in the 18th century to denote the religious, philosophical, and cultural traditions native to India.[48]
|
28 |
+
|
29 |
+
Hinduism includes a diversity of ideas on spirituality and traditions, but has no ecclesiastical order, no unquestionable religious authorities, no governing body, no prophet(s) nor any binding holy book; Hindus can choose to be polytheistic, pantheistic, panentheistic, pandeistic, henotheistic, monotheistic, monistic, agnostic, atheistic or humanist.[49][50][51] According to Doniger, "ideas about all the major issues of faith and lifestyle - vegetarianism, nonviolence, belief in rebirth, even caste - are subjects of debate, not dogma."[52]
|
30 |
+
|
31 |
+
Because of the wide range of traditions and ideas covered by the term Hinduism, arriving at a comprehensive definition is difficult.[36] The religion "defies our desire to define and categorize it".[53] Hinduism has been variously defined as a religion, a religious tradition, a set of religious beliefs, and "a way of life".[54][note 1] From a Western lexical standpoint, Hinduism like other faiths is appropriately referred to as a religion. In India the term dharma is preferred, which is broader than the Western term religion.
|
32 |
+
|
33 |
+
The study of India and its cultures and religions, and the definition of "Hinduism", has been shaped by the interests of colonialism and by Western notions of religion.[55][56] Since the 1990s, those influences and its outcomes have been the topic of debate among scholars of Hinduism,[55][note 13] and have also been taken over by critics of the Western view on India.[57][note 14]
|
34 |
+
|
35 |
+
Hinduism as it is commonly known can be subdivided into a number of major currents. Of the historical division into six darsanas (philosophies), two schools, Vedanta and Yoga, are currently the most prominent.[20] Classified by primary deity or deities, the four major modern currents of Hinduism are Vaishnavism (Vishnu), Shaivism (Shiva), Shaktism (Devi) and Smartism (five deities treated as the same).[58][59] Hinduism also accepts numerous divine beings, with many Hindus considering the deities to be aspects or manifestations of a single impersonal absolute or ultimate reality or God, while some Hindus maintain that a specific deity represents the supreme and various deities are lower manifestations of this supreme.[60] Other notable characteristics include a belief in the existence of ātman (soul, self), the reincarnation of one's ātman, and karma, as well as a belief in dharma (duties, rights, laws, conduct, virtues and the right way of living).
|
36 |
+
|
37 |
+
McDaniel (2007) classifies Hinduism into six major kinds and numerous minor kinds, in order to understand the expression of emotions among Hindus.[61] The major kinds, according to McDaniel, are: Folk Hinduism, based on local traditions and cults of local deities, the oldest, non-literate system; Vedic Hinduism, based on the earliest layers of the Vedas, traceable to the 2nd millennium BCE; Vedantic Hinduism, based on the philosophy of the Upanishads, including Advaita Vedanta, emphasizing knowledge and wisdom; Yogic Hinduism, following the text of the Yoga Sutras of Patanjali and emphasizing introspective awareness; Dharmic Hinduism or "daily morality", which McDaniel states is stereotyped in some books as the "only form of Hindu religion with a belief in karma, cows and caste"; and Bhakti or devotional Hinduism, where intense emotions are elaborately incorporated in the pursuit of the spiritual.[61]
|
38 |
+
|
39 |
+
Michaels distinguishes three Hindu religions and four forms of Hindu religiosity.[62] The three Hindu religions are "Brahmanic-Sanskritic Hinduism", "folk religions and tribal religions", and "founded religions".[63] The four forms of Hindu religiosity are the classical "karma-marga",[64] jnana-marga,[65] bhakti-marga,[65] and "heroism", which is rooted in militaristic traditions, such as Ramaism and parts of political Hinduism.[64] This is also called virya-marga.[65] According to Michaels, one out of nine Hindus belongs by birth to one or both of the Brahmanic-Sanskritic Hinduism and Folk religion typologies, whether practicing or non-practicing. He classifies most Hindus as belonging by choice to one of the "founded religions" such as Vaishnavism and Shaivism that are salvation-focussed and often de-emphasize Brahman priestly authority yet incorporate the ritual grammar of Brahmanic-Sanskritic Hinduism.[66] He includes among "founded religions" Buddhism, Jainism and Sikhism, which are now distinct religions, syncretic movements such as Brahmo Samaj and the Theosophical Society, as well as various "Guru-isms" and new religious movements such as Maharishi Mahesh Yogi and ISKCON.[67]
|
40 |
+
|
41 |
+
Inden states that the attempt to classify Hinduism by typology started in imperial times, when proselytizing missionaries and colonial officials sought to understand and portray Hinduism from their interests.[68] Hinduism was construed as emanating not from a reason of spirit but from fantasy and creative imagination, not conceptual but symbolical, not ethical but emotive, not rational or spiritual but of cognitive mysticism. This stereotype followed and fit, states Inden, with the imperial imperatives of the era, providing the moral justification for the colonial project.[68] From tribal Animism to Buddhism, everything was subsumed as part of Hinduism. The early reports set the tradition and scholarly premises for the typology of Hinduism, as well as the major assumptions and flawed presuppositions that have been at the foundation of Indology. Hinduism, according to Inden, has been neither what imperial religionists stereotyped it to be, nor is it appropriate to equate Hinduism with merely the monist pantheism and philosophical idealism of Advaita Vedanta.[68]
|
42 |
+
|
43 |
+
To its adherents, Hinduism is a traditional way of life.[69] Many practitioners refer to the "orthodox" form of Hinduism as Sanātana Dharma, "the eternal law" or the "eternal way".[70][71] Hindus regard Hinduism to be thousands of years old. The Puranic chronology, the timeline of events in ancient Indian history as narrated in the Mahabharata, the Ramayana, and the Puranas, envisions a chronology of events related to Hinduism starting well before 3000 BCE. The Sanskrit word dharma has a much broader meaning than religion and is not its equivalent. All aspects of a Hindu life, namely acquiring wealth (artha), fulfillment of desires (kama), and attaining liberation (moksha), are part of dharma, which encapsulates the "right way of living" and eternal harmonious principles in their fulfillment.[72][73]
|
44 |
+
|
45 |
+
According to the editors of the Encyclopædia Britannica, Sanātana Dharma historically referred to the "eternal" duties religiously ordained in Hinduism, duties such as honesty, refraining from injuring living beings (ahimsa), purity, goodwill, mercy, patience, forbearance, self-restraint, generosity, and asceticism. These duties applied regardless of a Hindu's class, caste, or sect, and they contrasted with svadharma, one's "own duty", in accordance with one's class or caste (varna) and stage in life (puruṣārtha).[web 4] In recent years, the term has been used by Hindu leaders, reformers, and nationalists to refer to Hinduism. Sanatana dharma has become a synonym for the "eternal" truth and teachings of Hinduism, that transcend history and are "unchanging, indivisible and ultimately nonsectarian".[web 4]
|
46 |
+
|
47 |
+
According to other scholars such as Kim Knott and Brian Hatcher, Sanātana Dharma refers to "timeless, eternal set of truths" and this is how Hindus view the origins of their religion. It is viewed as those eternal truths and tradition with origins beyond human history, truths divinely revealed (Shruti) in the Vedas – the most ancient of the world's scriptures.[74][4] To many Hindus, the Western term "religion" to the extent it means "dogma and an institution traceable to a single founder" is inappropriate for their tradition, states Hatcher. Hinduism, to them, is a tradition that can be traced at least to the ancient Vedic era.[4][75][note 15]
|
48 |
+
|
49 |
+
Some have referred to Hinduism as the Vaidika dharma.[8] The word 'Vaidika' in Sanskrit means 'derived from or conformable to the Veda' or 'relating to the Veda'.[web 3] Traditional scholars employed the terms Vaidika and Avaidika, those who accept the Vedas as a source of authoritative knowledge and those who do not, to differentiate various Indian schools from Jainism, Buddhism and Charvaka. According to Klaus Klostermaier, the term Vaidika dharma is the earliest self-designation of Hinduism.[9][10] According to Arvind Sharma, the historical evidence suggests that "the Hindus were referring to their religion by the term vaidika dharma or a variant thereof" by the 4th-century CE.[77] According to Brian K. Smith "[i]t is 'debatable at the very least' as to whether the term Vaidika Dharma cannot, with the proper concessions to historical, cultural and ideological specificity, be comparable to and translated as 'Hinduism' or 'Hindu religion'."[7]
|
50 |
+
|
51 |
+
According to Alexis Sanderson, the early Sanskrit texts differentiate between Vaidika, Vaishnava, Shaiva, Shakta, Saura, Buddhist and Jaina traditions. However, the late 1st-millennium CE Indic consensus had "indeed come to conceptualize a complex entity corresponding to Hinduism as opposed to Buddhism and Jainism excluding only certain forms of antinomian Shakta-Shaiva" from its fold.[78] Some in the Mimamsa school of Hindu philosophy considered the Agamas such as the Pancaratrika to be invalid because they did not conform to the Vedas. Some Kashmiri scholars rejected the esoteric tantric traditions as being a part of Vaidika dharma.[78][79] The Atimarga Shaivism ascetic tradition, datable to about 500 CE, challenged the Vaidika frame and insisted that their Agamas and practices were not only valid but superior to those of the Vaidikas.[80] However, adds Sanderson, this Shaiva ascetic tradition viewed themselves as being genuinely true to the Vedic tradition and "held unanimously that the Śruti and Smṛti of Brahmanism are universally and uniquely valid in their own sphere, [...] and that as such they [Vedas] are man's sole means of valid knowledge [...]".[80]
|
52 |
+
|
53 |
+
The term Vaidika dharma means a code of practice that is "based on the Vedas", but it is unclear what "based on the Vedas" really implies, states Julius Lipner.[75] The Vaidika dharma or "Vedic way of life", states Lipner, does not mean "Hinduism is necessarily religious" or that Hindus have a universally accepted "conventional or institutional meaning" for that term.[75] To many, it is as much a cultural term. Many Hindus do not have a copy of the Vedas nor have they ever seen or personally read parts of a Veda, like a Christian might relate to the Bible or a Muslim might to the Quran. Yet, states Lipner, "this does not mean that their [Hindus] whole life's orientation cannot be traced to the Vedas or that it does not in some way derive from it".[75]
|
54 |
+
|
55 |
+
Though many religious Hindus implicitly acknowledge the authority of the Vedas, this acknowledgment is often "no more than a declaration that someone considers himself [or herself] a Hindu,"[81][note 16] and "most Indians today pay lip service to the Veda and have no regard for the contents of the text."[82] Some Hindus challenge the authority of the Vedas, thereby implicitly acknowledging its importance to the history of Hinduism, states Lipner.[75]
|
56 |
+
|
57 |
+
Beginning in the 19th century, Indian modernists re-asserted Hinduism as a major asset of Indian civilisation,[56] meanwhile "purifying" Hinduism from its Tantric elements[85] and elevating the Vedic elements. Western stereotypes were reversed, emphasizing the universal aspects, and introducing modern approaches of social problems.[56] This approach had a great appeal, not only in India, but also in the west.[56] Major representatives of "Hindu modernism"[86] are Raja Rammohan Roy, Vivekananda, Sarvepalli Radhakrishnan and Mahatma Gandhi.[87]
|
58 |
+
|
59 |
+
Raja Rammohan Roy is known as the father of the Hindu Renaissance.[88] He was a major influence on Swami Vivekananda (1863–1902), who, according to Flood, was "a figure of great importance in the development of a modern Hindu self-understanding and in formulating the West's view of Hinduism".[89] Central to his philosophy is the idea that the divine exists in all beings, that all human beings can achieve union with this "innate divinity",[86] and that seeing this divine as the essence of others will further love and social harmony.[86] According to Vivekananda, there is an essential unity to Hinduism, which underlies the diversity of its many forms.[86] According to Flood, Vivekananda's vision of Hinduism "is one generally accepted by most English-speaking middle-class Hindus today".[90] Sarvepalli Radhakrishnan sought to reconcile western rationalism with Hinduism, "presenting Hinduism as an essentially rationalistic and humanistic religious experience".[91]
|
60 |
+
|
61 |
+
This "Global Hinduism"[92] has a worldwide appeal, transcending national boundaries[92] and, according to Flood, "becoming a world religion alongside Christianity, Islam and Buddhism",[92] both for the Hindu diaspora communities and for westerners who are attracted to non-western cultures and religions.[92] It emphasizes universal spiritual values such as social justice, peace and "the spiritual transformation of humanity".[92] It has developed partly due to "re-enculturation",[93] or the Pizza effect,[93] in which elements of Hindu culture have been exported to the West, gaining popularity there, and as a consequence also gained greater popularity in India.[93] This globalization of Hindu culture brought "to the West teachings which have become an important cultural force in western societies, and which in turn have become an important cultural force in India, their place of origin".[94]
|
62 |
+
|
63 |
+
The definition of Hinduism in Indian Law is: "Acceptance of the Vedas with reverence; recognition of the fact that the means or ways to salvation are diverse; and realization of the truth that the number of gods to be worshipped is large".[95][96]
|
64 |
+
|
65 |
+
The term Hinduism was coined in Western ethnography in the 18th century,[48][note 17] and refers to the fusion[note 5] or synthesis[note 6][23] of various Indian cultures and traditions,[24][note 8] with diverse roots[25][note 9] and no founder.[26] This Hindu synthesis emerged after the Vedic period, between ca. 500[27]–200[28] BCE and c. 300 CE,[27] in the period of the Second Urbanisation and the early classical period of Hinduism, when the Epics and the first Puranas were composed.[27][28] It flourished in the medieval period, with the decline of Buddhism in India.[29] Hinduism's tolerance to variations in belief and its broad range of traditions make it difficult to define as a religion according to traditional Western conceptions.[97]
|
66 |
+
|
67 |
+
Some academics suggest that Hinduism can be seen as a category with "fuzzy edges" rather than as a well-defined and rigid entity. Some forms of religious expression are central to Hinduism and others, while not as central, still remain within the category. Based on this idea Ferro-Luzzi has developed a 'Prototype Theory approach' to the definition of Hinduism.[98]
|
68 |
+
|
69 |
+
Hindu beliefs are vast and diverse, and thus Hinduism is often referred to as a family of religions rather than a single religion.[99] Within each religion in this family of religions, there are different theologies, practices, and sacred texts. This diversity has led to an array of descriptions for Hinduism. It has been described as henotheism,[100] monism,[101][102] polytheism, panentheism,[103] and monotheism.[104] Hinduism does not have a "unified system of belief encoded in a declaration of faith or a creed",[36] but is rather an umbrella term comprising the plurality of religious phenomena of India.[105] Sarvepalli Radhakrishnan mentions that "While fixed intellectual beliefs mark off one religion from another, Hinduism sets itself no such limits", a Hindu is ready to admit different points of view rather than believe in a "self certifying" absolute authority.[106] According to the Supreme Court of India,
|
70 |
+
|
71 |
+
Unlike other religions in the World, the Hindu religion does not claim any one Prophet, it does not worship any one God, it does not believe in any one philosophic concept, it does not follow any one act of religious rites or performances; in fact, it does not satisfy the traditional features of a religion or creed. It is a way of life and nothing more".[107]
|
72 |
+
|
73 |
+
Part of the problem with a single definition of the term Hinduism is the fact that Hinduism does not have a founder.[108] It is a synthesis of various traditions,[109] the "Brahmanical orthopraxy, the renouncer traditions and popular or local traditions".[110]
|
74 |
+
|
75 |
+
Theism is also difficult to use as a unifying doctrine for Hinduism, because while some Hindu philosophies postulate a theistic ontology of creation, other Hindus are or have been atheists.[111]
|
76 |
+
|
77 |
+
Despite the differences, there is also a sense of unity.[112] Most Hindu traditions revere a body of religious or sacred literature, the Vedas,[113] although there are exceptions.[114] These texts are a reminder of the ancient cultural heritage and point of pride for Hindus,[115][116] with Louis Renou stating that "even in the most orthodox domains, the reverence to the Vedas has come to be a simple raising of the hat".[115][117]
|
78 |
+
|
79 |
+
Halbfass states that, although Shaivism and Vaishnavism may be regarded as "self-contained religious constellations",[112] there is a degree of interaction and reference between the "theoreticians and literary representatives"[112] of each tradition that indicates the presence of "a wider sense of identity, a sense of coherence in a shared context and of inclusion in a common framework and horizon".[112]
|
80 |
+
|
81 |
+
Brahmins played an essential role in the development of the post-Vedic Hindu synthesis, disseminating Vedic culture to local communities, and integrating local religiosity into the trans-regional Brahmanic culture.[118] In the post-Gupta period Vedanta developed in southern India, where orthodox Brahmanic culture and the Hindu culture were preserved,[119] building on ancient Vedic traditions while "accommoda[ting] the multiple demands of Hinduism."[120]
|
82 |
+
|
83 |
+
The notion of common denominators for several religions and traditions of India further developed from the 12th century CE.[121] Lorenzen traces the emergence of a "family resemblance", and what he calls the "beginnings of medieval and modern Hinduism" taking shape, at c. 300 – 600 CE, with the development of the early Puranas, and continuities with the earlier Vedic religion.[122] Lorenzen states that the establishment of a Hindu self-identity took place "through a process of mutual self-definition with a contrasting Muslim Other".[123] According to Lorenzen, this "presence of the Other"[123] is necessary to recognise the "loose family resemblance" among the various traditions and schools.[124]
|
84 |
+
|
85 |
+
According to the Indologist Alexis Sanderson, before Islam arrived in India, the "Sanskrit sources differentiated Vaidika, Vaiṣṇava, Śaiva, Śākta, Saura, Buddhist, and Jaina traditions, but they had no name that denotes the first five of these as a collective entity over and against Buddhism and Jainism." This absence of a formal name, states Sanderson, does not mean that the corresponding concept of Hinduism did not exist. By late 1st-millennium CE, the concept of a belief and tradition distinct from Buddhism and Jainism had emerged.[125] This complex tradition accepted in its identity almost all of what is currently Hinduism, except certain antinomian tantric movements.[125] Some conservative thinkers of those times questioned whether certain Shaiva, Vaishnava and Shakta texts or practices were consistent with the Vedas, or were invalid in their entirety. Moderates then, and most orthoprax scholars later, agreed that though there are some variations, the foundation of their beliefs, the ritual grammar, the spiritual premises and the soteriologies were same. "This sense of greater unity", states Sanderson, "came to be called Hinduism".[125]
|
86 |
+
|
87 |
+
According to Nicholson, already between the 12th and the 16th centuries "certain thinkers began to treat as a single whole the diverse philosophical teachings of the Upanishads, epics, Puranas, and the schools known retrospectively as the 'six systems' (saddarsana) of mainstream Hindu philosophy."[126] The tendency of "a blurring of philosophical distinctions" has also been noted by Burley.[127] Hacker called this "inclusivism"[113] and Michaels speaks of "the identificatory habit".[12] Lorenzen locates the origins of a distinct Hindu identity in the interaction between Muslims and Hindus,[128] and a process of "mutual self-definition with a contrasting Muslim other",[129][note 18] which started well before 1800.[130] Michaels notes:
|
88 |
+
|
89 |
+
As a counteraction to Islamic supremacy and as part of the continuing process of regionalization, two religious innovations developed in the Hindu religions: the formation of sects and a historicization which preceded later nationalism [...] [S]aints and sometimes militant sect leaders, such as the Marathi poet Tukaram (1609–1649) and Ramdas (1608–1681), articulated ideas in which they glorified Hinduism and the past. The Brahmins also produced increasingly historical texts, especially eulogies and chronicles of sacred sites (Mahatmyas), or developed a reflexive passion for collecting and compiling extensive collections of quotations on various subjects.[131]
|
90 |
+
|
91 |
+
This inclusivism[132] was further developed in the 19th and 20th centuries by Hindu reform movements and Neo-Vedanta,[133] and has become characteristic of modern Hinduism.[113]
|
92 |
+
|
93 |
+
The notion of and reports on "Hinduism" as a "single world religious tradition"[134] were popularised by 19th-century proselytizing missionaries and European Indologists, roles sometimes served by the same person, who relied on texts preserved by Brahmins (priests) for their information on Indian religions, and on animist observations that the missionary Orientalists presumed to be Hinduism.[134][68][135] These reports influenced perceptions about Hinduism. Some scholars state that the colonial polemical reports led to fabricated stereotypes in which Hinduism was mere mystic paganism devoted to the service of devils,[note 19] while other scholars state that the colonial constructions influenced the belief that the Vedas, Bhagavad Gita, Manusmriti and such texts were the essence of Hindu religiosity, and the modern association of 'Hindu doctrine' with the schools of Vedanta (in particular Advaita Vedanta) as the paradigmatic example of Hinduism's mystical nature.[137][note 20] Pennington, while concurring that the study of Hinduism as a world religion began in the colonial era, disagrees that Hinduism is a colonial European-era invention.[144] He states that the shared theology, common ritual grammar and way of life of those who identify themselves as Hindus are traceable to ancient times.[144][note 21]
|
94 |
+
|
95 |
+
Prominent themes in Hindu beliefs include (but are not restricted to) Dharma (ethics/duties), Samsāra (the continuing cycle of birth, life, death and rebirth), Karma (action, intent and consequences), Moksha (liberation from samsara or liberation in this life), and the various Yogas (paths or practices).[17]
|
96 |
+
|
97 |
+
Classical Hindu thought accepts four proper goals or aims of human life: Dharma, Artha, Kama and Moksha. These are known as the Puruṣārthas:[14][15]
|
98 |
+
|
99 |
+
Dharma is considered the foremost goal of a human being in Hinduism.[151] The concept Dharma includes behaviors that are considered to be in accord with rta, the order that makes life and universe possible,[152] and includes duties, rights, laws, conduct, virtues and "right way of living".[153] Hindu Dharma includes the religious duties, moral rights and duties of each individual, as well as behaviors that enable social order, right conduct, and those that are virtuous.[153] Dharma, according to Van Buitenen,[154] is that which all existing beings must accept and respect to sustain harmony and order in the world. It is, states Van Buitenen, the pursuit and execution of one's nature and true calling, thus playing one's role in cosmic concert.[154] The Brihadaranyaka Upanishad states it as:
|
100 |
+
|
101 |
+
Nothing is higher than Dharma. The weak overcomes the stronger by Dharma, as over a king. Truly that Dharma is the Truth (Satya); Therefore, when a man speaks the Truth, they say, "He speaks the Dharma"; and if he speaks Dharma, they say, "He speaks the Truth!" For both are one.
|
102 |
+
|
103 |
+
In the Mahabharata, Krishna defines dharma as upholding both this-worldly and other-worldly affairs. (Mbh 12.110.11). The word Sanātana means eternal, perennial, or forever; thus, Sanātana Dharma signifies that it is the dharma that has neither beginning nor end.[157]
|
104 |
+
|
105 |
+
Artha is the objective and virtuous pursuit of wealth for livelihood, obligations and economic prosperity. It is inclusive of political life, diplomacy and material well-being. The Artha concept includes all "means of life", activities and resources that enable one to be in the state one wants to be in, wealth, career and financial security.[158] The proper pursuit of artha is considered an important aim of human life in Hinduism.[159][160]
|
106 |
+
|
107 |
+
Kāma (Sanskrit, Pali; Devanagari: काम) means desire, wish, passion, longing, pleasure of the senses, the aesthetic enjoyment of life, affection, or love, with or without sexual connotations.[161][162] In Hinduism, Kama is considered an essential and healthy goal of human life when pursued without sacrificing Dharma, Artha and Moksha.[163]
|
108 |
+
|
109 |
+
Moksha (Sanskrit: मोक्ष mokṣa) or mukti (Sanskrit: मुक्ति) is the ultimate, most important goal in Hinduism. In one sense, Moksha is a concept associated with liberation from sorrow, suffering and saṃsāra (the birth-rebirth cycle). Release from this eschatological cycle in the afterlife, particularly in theistic schools of Hinduism, is called moksha.[164][154] In other schools of Hinduism, such as the monistic, moksha is a goal achievable in the current life, as a state of bliss through self-realization, of comprehending the nature of one's soul, of freedom and of "realizing the whole universe as the Self".[165][166]
|
110 |
+
|
111 |
+
Karma translates literally as action, work, or deed,[167] and also refers to a Vedic theory of "moral law of cause and effect".[168][169] The theory is a combination of (1) causality that may be ethical or non-ethical; (2) ethicization, that is good or bad actions have consequences; and (3) rebirth.[170] Karma theory is interpreted as explaining the present circumstances of an individual with reference to his or her actions in the past. These actions and their consequences may be in a person's current life, or, according to some schools of Hinduism, in past lives.[170][171] This cycle of birth, life, death and rebirth is called samsara. Liberation from samsara through moksha is believed to ensure lasting happiness and peace.[172][173] Hindu scriptures teach that the future is both a function of current human effort derived from free will and past human actions that set the circumstances.[174]
|
112 |
+
|
113 |
+
The ultimate goal of life, referred to as moksha, nirvana or samadhi, is understood in several different ways: as the realization of one's union with God; as the realization of one's eternal relationship with God; realization of the unity of all existence; perfect unselfishness and knowledge of the Self; as the attainment of perfect mental peace; and as detachment from worldly desires. Such realization liberates one from samsara, thereby ending the cycle of rebirth, sorrow and suffering.[175][176] Due to belief in the indestructibility of the soul,[177] death is deemed insignificant with respect to the cosmic self.[178]
|
114 |
+
|
115 |
+
The meaning of moksha differs among the various Hindu schools of thought. For example, Advaita Vedanta holds that after attaining moksha a person knows their "soul, self" and identifies it as one with Brahman and everyone in all respects.[179][180] The followers of Dvaita (dualistic) schools, in moksha state, identify individual "soul, self" as distinct from Brahman but infinitesimally close, and after attaining moksha expect to spend eternity in a loka (heaven). To theistic schools of Hinduism, moksha is liberation from samsara, while for other schools such as the monistic school, moksha is possible in current life and is a psychological concept. According to Deutsche, moksha is transcendental consciousness to the latter, the perfect state of being, of self-realization, of freedom and of "realizing the whole universe as the Self".[165][179] Moksha in these schools of Hinduism, suggests Klaus Klostermaier,[180] implies a setting free of hitherto fettered faculties, a removing of obstacles to an unrestricted life, permitting a person to be more truly a person in the full sense; the concept presumes an unused human potential of creativity, compassion and understanding which had been blocked and shut out. Moksha is more than liberation from life-rebirth cycle of suffering (samsara); Vedantic school separates this into two: jivanmukti (liberation in this life) and videhamukti (liberation after death).[181][182]
|
116 |
+
|
117 |
+
Hinduism is a diverse system of thought with beliefs spanning monotheism, polytheism, panentheism, pantheism, pandeism, monism, and atheism among others;[183][184][web 5] and its concept of God is complex and depends upon each individual and the tradition and philosophy followed. It is sometimes referred to as henotheistic (i.e., involving devotion to a single god while accepting the existence of others), but any such term is an overgeneralization.[185]
|
118 |
+
|
119 |
+
Who really knows? Who will here proclaim it? Whence was it produced? Whence is this creation? The gods came afterwards, with the creation of this universe. Who then knows whence it has arisen?
|
120 |
+
|
121 |
+
The Nasadiya Sukta (Creation Hymn) of the Rig Veda is one of the earliest texts[189] which "demonstrates a sense of metaphysical speculation" about what created the universe, the concept of god(s) and The One, and whether even The One knows how the universe came into being.[190][191] The Rig Veda praises various deities, none superior nor inferior, in a henotheistic manner.[192] The hymns repeatedly refer to One Truth and Reality. The "One Truth" of Vedic literature, in modern era scholarship, has been interpreted as monotheism, monism, as well as a deified Hidden Principles behind the great happenings and processes of nature.[193]
|
122 |
+
|
123 |
+
Hindus believe that all living creatures have a soul. This soul – the spirit or true "self" of every person, is called the ātman. The soul is believed to be eternal.[194] According to the monistic/pantheistic (non-dualist) theologies of Hinduism (such as Advaita Vedanta school), this Atman is indistinct from Brahman, the supreme spirit.[195] The goal of life, according to the Advaita school, is to realise that one's soul is identical to supreme soul, that the supreme soul is present in everything and everyone, all life is interconnected and there is oneness in all life.[196][197][198] Dualistic schools (see Dvaita and Bhakti) understand Brahman as a Supreme Being separate from individual souls.[199] They worship the Supreme Being variously as Vishnu, Brahma, Shiva, or Shakti, depending upon the sect. God is called Ishvara, Bhagavan, Parameshwara, Deva or Devi, and these terms have different meanings in different schools of Hinduism.[200][201][202]
|
124 |
+
|
125 |
+
Hindu texts accept a polytheistic framework, but this is generally conceptualized as the divine essence or luminosity that gives vitality and animation to the inanimate natural substances.[203] There is a divine in everything, human beings, animals, trees and rivers. It is observable in offerings to rivers, trees, tools of one's work, animals and birds, rising sun, friends and guests, teachers and parents.[203][204][205] It is the divine in these that makes each sacred and worthy of reverence. This seeing divinity in everything, state Buttimer and Wallin, makes the Vedic foundations of Hinduism quite distinct from Animism.[203] The animistic premise sees multiplicity, power differences and competition between man and man, man and animal, as well as man and nature. The Vedic view does not see this competition, rather sees a unifying divinity that connects everyone and everything.[203][206][207]
|
126 |
+
|
127 |
+
The Hindu scriptures refer to celestial entities called Devas (or devī in feminine form; devatā used synonymously for Deva in Hindi), which may be translated into English as gods or heavenly beings.[note 22] The devas are an integral part of Hindu culture and are depicted in art, architecture and through icons, and stories about them are related in the scriptures, particularly in Indian epic poetry and the Puranas. They are, however, often distinguished from Ishvara, a personal god, with many Hindus worshipping Ishvara in one of its particular manifestations as their iṣṭa devatā, or chosen ideal.[208][209] The choice is a matter of individual preference,[210] and of regional and family traditions.[210][note 23] The multitude of Devas are considered as manifestations of Brahman.[note 24]
|
128 |
+
|
129 |
+
The word avatar does not appear in the Vedic literature,[212] but appears in verb forms in post-Vedic literature, and as a noun particularly in the Puranic literature after the 6th century CE.[213] Theologically, the reincarnation idea is most often associated with the avatars of Hindu god Vishnu, though the idea has been applied to other deities.[214] Varying lists of avatars of Vishnu appear in Hindu scriptures, including the ten Dashavatara of the Garuda Purana and the twenty-two avatars in the Bhagavata Purana, though the latter adds that the incarnations of Vishnu are innumerable.[215] The avatars of Vishnu are important in Vaishnavism theology. In the goddess-based Shaktism tradition of Hinduism, avatars of the Devi are found and all goddesses are considered to be different aspects of the same metaphysical Brahman[216] and Shakti (energy).[217][218] While avatars of other deities such as Ganesha and Shiva are also mentioned in medieval Hindu texts, this is minor and occasional.[219]
|
130 |
+
|
131 |
+
Both theistic and atheistic ideas, for epistemological and metaphysical reasons, are profuse in different schools of Hinduism. The early Nyaya school of Hinduism, for example, was non-theist/atheist,[220] but later Nyaya school scholars argued that God exists and offered proofs using its theory of logic.[221][222] Other schools disagreed with Nyaya scholars. Samkhya,[223] Mimamsa[224][note 25] and Carvaka schools of Hinduism, were non-theist/atheist, arguing that "God was an unnecessary metaphysical assumption".[225][web 6][226] Its Vaisheshika school started as another non-theistic tradition relying on naturalism and that all matter is eternal, but it later introduced the concept of a non-creator God.[227][228] The Yoga school of Hinduism accepted the concept of a "personal god" and left it to the Hindu to define his or her god.[229] Advaita Vedanta taught a monistic, abstract Self and Oneness in everything, with no room for gods or deity, a perspective that Mohanty calls, "spiritual, not religious".[230] Bhakti sub-schools of Vedanta taught a creator God that is distinct from each human being.[199]
According to Graham Schweig, Hinduism has the strongest presence of the divine feminine in world religion from ancient times to the present.[231] The goddess is viewed as the heart of the most esoteric Saiva traditions.[232]
Authority and eternal truths play an important role in Hinduism.[233] Religious traditions and truths are believed to be contained in its sacred texts, which are accessed and taught by sages, gurus, saints or avatars.[233] But there is also a strong tradition of the questioning of authority, internal debate and challenging of religious texts in Hinduism. The Hindus believe that this deepens the understanding of the eternal truths and further develops the tradition. Authority "was mediated through [...] an intellectual culture that tended to develop ideas collaboratively, and according to the shared logic of natural reason."[233] Narratives in the Upanishads present characters questioning persons of authority.[233] The Kena Upanishad repeatedly asks kena, 'by what' power something is the case.[233] The Katha Upanishad and Bhagavad Gita present narratives where the student criticizes the teacher's inferior answers.[233] In the Shiva Purana, Shiva questions Vishnu and Brahma.[233] Doubt plays a repeated role in the Mahabharata.[233] Jayadeva's Gita Govinda presents criticism via the character of Radha.[233]
Hinduism has no central doctrinal authority and many practising Hindus do not claim to belong to any particular denomination or tradition.[234] Four major denominations are, however, used in scholarly studies: Vaishnavism, Shaivism, Shaktism and Smartism.[235][236] These denominations differ primarily in the central deity worshipped, the traditions and the soteriological outlook.[237] The denominations of Hinduism, states Lipner, are unlike those found in major religions of the world, because Hindu denominations are fuzzy with individuals practicing more than one, and he suggests the term "Hindu polycentrism".[238]
Vaishnavism is the devotional religious tradition that worships Vishnu[239] and his avatars, particularly Krishna and Rama.[240] The adherents of this sect are generally non-ascetic, monastic, oriented towards community events and devotionalism practices inspired by "intimate loving, joyous, playful" Krishna and other Vishnu avatars.[237] These practices sometimes include community dancing, singing of Kirtans and Bhajans, with sound and music believed by some to have meditative and spiritual powers.[241] Temple worship and festivals are typically elaborate in Vaishnavism.[242] The Bhagavad Gita and the Ramayana, along with Vishnu-oriented Puranas provide its theistic foundations.[243] Philosophically, their beliefs are rooted in the dualism sub-schools of Vedantic Hinduism.[244][245]
Shaivism is the tradition that focuses on Shiva. Shaivas are more attracted to ascetic individualism, and it has several sub-schools.[237] Their practices include Bhakti-style devotionalism, yet their beliefs lean towards nondual, monistic schools of Hinduism such as Advaita and Yoga.[235][241] Some Shaivas worship in temples, while others emphasize yoga, striving to be one with Shiva within.[246] Avatars are uncommon, and some Shaivas visualize god as half male, half female, as a fusion of the male and female principles (Ardhanarishvara). Shaivism is related to Shaktism, wherein Shakti is seen as spouse of Shiva.[235] Community celebrations include festivals, and participation, with Vaishnavas, in pilgrimages such as the Kumbh Mela.[247] Shaivism has been more commonly practiced in the Himalayan north from Kashmir to Nepal, and in south India.[248]
Shaktism focuses on goddess worship of Shakti or Devi as the cosmic mother,[237] and it is particularly common in northeastern and eastern states of India such as Assam and Bengal. Devi is depicted in gentler forms like Parvati, the consort of Shiva, or as fierce warrior goddesses like Kali and Durga. Followers of Shaktism recognize Shakti as the power that underlies the male principle. Shaktism is also associated with Tantra practices.[249] Community celebrations include festivals, some of which include processions and idol immersion into the sea or other water bodies.[250]
Smartism centers its worship simultaneously on all the major Hindu deities: Shiva, Vishnu, Shakti, Ganesha, Surya and Skanda.[251] The Smarta tradition developed during the (early) Classical Period of Hinduism around the beginning of the Common Era, when Hinduism emerged from the interaction between Brahmanism and local traditions.[252][253] The Smarta tradition is aligned with Advaita Vedanta, and regards Adi Shankara as its founder or reformer, who considered worship of God-with-attributes (Saguna Brahman) as a journey towards ultimately realizing God-without-attributes (nirguna Brahman, Atman, Self-knowledge).[254][255] The term Smartism is derived from Smriti texts of Hinduism, meaning those who remember the traditions in the texts.[235][256] This Hindu sect practices a philosophical Jnana yoga, scriptural studies, reflection, meditative path seeking an understanding of Self's oneness with God.[235][257]
There are no census data available on demographic history or trends for the traditions within Hinduism.[258] Estimates vary on the relative number of adherents in the different traditions of Hinduism. According to a 2010 estimate by Johnson and Grim, the Vaishnavism tradition is the largest group with about 641 million or 67.6% of Hindus, followed by Shaivism with 252 million or 26.6%, Shaktism with 30 million or 3.2% and other traditions including Neo-Hinduism and Reform Hinduism with 25 million or 2.6%.[259] In contrast, according to Jones and Ryan, Shaivism is the largest tradition of Hinduism.[260]
The ancient scriptures of Hinduism are in Sanskrit. These texts are classified into two classes: Shruti and Smriti. Shruti is apauruṣeyā, "not made by man" but revealed to the rishis (seers), and regarded as having the highest authority, while the smriti are manmade and have secondary authority.[262] They are the two highest sources of dharma, the other two being Śiṣṭa Āchāra/Sadāchara (conduct of noble people) and finally Ātma tuṣṭi ("what is pleasing to oneself").[note 26]
Hindu scriptures were composed, memorized and transmitted verbally, across generations, for many centuries before they were written down.[264][265] Over many centuries, sages refined the teachings and expanded the Shruti and Smriti, as well as developed Shastras with epistemological and metaphysical theories of six classical schools of Hinduism.
Shruti (lit. that which is heard)[266] primarily refers to the Vedas, which form the earliest record of the Hindu scriptures, and are regarded as eternal truths revealed to the ancient sages (rishis).[267] There are four Vedas – Rigveda, Samaveda, Yajurveda and Atharvaveda. Each Veda has been subclassified into four major text types – the Samhitas (mantras and benedictions), the Aranyakas (text on rituals, ceremonies, sacrifices and symbolic-sacrifices), the Brahmanas (commentaries on rituals, ceremonies and sacrifices), and the Upanishads (text discussing meditation, philosophy and spiritual knowledge).[268][269][270] The first two parts of the Vedas were subsequently called the Karmakāṇḍa (ritualistic portion), while the last two form the Jñānakāṇḍa (knowledge portion, discussing spiritual insight and philosophical teachings).[271][272][273][274]
The Upanishads are the foundation of Hindu philosophical thought, and have profoundly influenced diverse traditions.[275][276] Of the Shrutis (Vedic corpus), they alone are widely influential among Hindus, considered scriptures par excellence of Hinduism, and their central ideas have continued to influence its thoughts and traditions.[275][277] Sarvepalli Radhakrishnan states that the Upanishads have played a dominating role ever since their appearance.[278] There are 108 Muktikā Upanishads in Hinduism, of which between 10 and 13 are variously counted by scholars as Principal Upanishads.[279][280]
The most notable of the Smritis ("remembered") are the Hindu epics and the Puranas. The epics consist of the Mahabharata and the Ramayana. The Bhagavad Gita is an integral part of the Mahabharata and one of the most popular sacred texts of Hinduism.[281] It is sometimes called Gitopanishad, then placed in the Shruti ("heard") category, being Upanishadic in content.[282] The Puranas, which started to be composed from c. 300 CE onward,[283] contain extensive mythologies, and are central in the distribution of common themes of Hinduism through vivid narratives. The Yoga Sutras is a classical text for the Hindu Yoga tradition, which gained a renewed popularity in the 20th century.[284]
Since the 19th century, Indian modernists have re-asserted the 'Aryan origins' of Hinduism, "purifying" Hinduism from its Tantric elements[85] and elevating the Vedic elements. Hindu modernists like Vivekananda see the Vedas as the laws of the spiritual world, which would still exist even if they were not revealed to the sages.[285][286] In Tantric tradition, the Agamas refer to authoritative scriptures or the teachings of Shiva to Shakti,[287] while Nigamas refer to the Vedas and the teachings of Shakti to Shiva.[287] In Agamic schools of Hinduism, the Vedic literature and the Agamas are equally authoritative.[288][289]
Most Hindus observe religious rituals at home.[291] The rituals vary greatly among regions, villages, and individuals. They are not mandatory in Hinduism. The nature and place of rituals is an individual's choice. Some devout Hindus perform daily rituals such as worshiping at dawn after bathing (usually at a family shrine, and typically includes lighting a lamp and offering foodstuffs before the images of deities), recitation from religious scripts, singing devotional hymns, yoga, meditation, chanting mantras and others.[292]
Vedic rituals of fire-oblation (yajna) and chanting of Vedic hymns are observed on special occasions, such as a Hindu wedding.[293] Other major life-stage events, such as rituals after death, include the yajña and chanting of Vedic mantras.[web 7]
The words of the mantras are "themselves sacred,"[294] and "do not constitute linguistic utterances."[295] Instead, as Klostermaier notes, in their application in Vedic rituals they become magical sounds, "means to an end."[note 27] In the Brahmanical perspective, the sounds have their own meaning: mantras are considered "primordial rhythms of creation", preceding the forms to which they refer.[295] By reciting them the cosmos is regenerated, "by enlivening and nourishing the forms of creation at their base. As long as the purity of the sounds is preserved, the recitation of the mantras will be efficacious, irrespective of whether their discursive meaning is understood by human beings."[295][note 25]
Major life stage milestones are celebrated as sanskara (saṃskāra, rites of passage) in Hinduism.[296][297] The rites of passage are not mandatory, and vary in details by gender, community and regionally.[298] Gautama Dharmasutras composed in about the middle of 1st millennium BCE lists 48 sanskaras,[299] while Gryhasutra and other texts composed centuries later list between 12 and 16 sanskaras.[296][300] The list of sanskaras in Hinduism include both external rituals such as those marking a baby's birth and a baby's name giving ceremony, as well as inner rites of resolutions and ethics such as compassion towards all living beings and positive attitude.[299]
The major traditional rites of passage in Hinduism include[298] Garbhadhana (pregnancy), Pumsavana (rite before the fetus begins moving and kicking in womb), Simantonnayana (parting of pregnant woman's hair, baby shower), Jatakarman (rite celebrating the new born baby), Namakarana (naming the child), Nishkramana (baby's first outing from home into the world), Annaprashana (baby's first feeding of solid food), Chudakarana (baby's first haircut, tonsure), Karnavedha (ear piercing), Vidyarambha (baby's start with knowledge), Upanayana (entry into a school rite),[301][302] Keshanta and Ritusuddhi (first shave for boys, menarche for girls), Samavartana (graduation ceremony), Vivaha (wedding), Vratas (fasting, spiritual studies) and Antyeshti (cremation for an adult, burial for a child).[303] In contemporary times, there is regional variation among Hindus as to which of these sanskaras are observed; in some cases, additional regional rites of passage such as Śrāddha (ritual of feeding people after cremation) are practiced.[298][web 8]
Bhakti refers to devotion, participation in and the love of a personal god or a representational god by a devotee.[web 9][304] Bhakti marga is considered in Hinduism as one of many possible paths of spirituality and alternative means to moksha.[305] The other paths, left to the choice of a Hindu, are Jnana marga (path of knowledge), Karma marga (path of works), Rāja marga (path of contemplation and meditation).[306][307]
Bhakti is practiced in a number of ways, ranging from reciting mantras, japas (incantations), to individual private prayers within one's home shrine,[308] or in a temple or near a river bank, sometimes in the presence of an idol or image of a deity.[309][310] Hindu temples and domestic altars, states Lynn Foulston, are important elements of worship in contemporary theistic Hinduism.[311] While many visit a temple on a special occasion, most offer a brief prayer on an everyday basis at the domestic altar.[311] This bhakti is expressed in a domestic shrine which typically is a dedicated part of the home and includes the images of deities or the gurus the Hindu chooses.[311] Among Vaishnavism sub-traditions such as Swaminarayan, the home shrines can be elaborate with either a room dedicated to it or a dedicated part of the kitchen. The devotee uses this space for daily prayers or meditation, either before breakfast or after day's work.[312][313]
Bhakti is sometimes private inside household shrines and sometimes practiced as a community. It may include Puja, Aarti,[314] musical Kirtan or singing Bhajan, where devotional verses and hymns are read or poems are sung by a group of devotees.[web 10][315] While the choice of the deity is at the discretion of the Hindu, the most observed traditions of Hindu devotionalism include Vaishnavism (Vishnu), Shaivism (Shiva) and Shaktism (Shakti).[316] A Hindu may worship multiple deities, all as henotheistic manifestations of the same ultimate reality, cosmic spirit and absolute spiritual concept called Brahman in Hinduism.[317][318][note 24]
Bhakti marga, states Pechelis, is more than ritual devotionalism; it includes practices and spiritual activities aimed at refining one's state of mind, knowing god, participating in god, and internalizing god.[319][320] While Bhakti practices are a popular and easily observable aspect of Hinduism, not all Hindus practice Bhakti or believe in god-with-attributes (saguna Brahman).[321][322] Concurrent Hindu practices include a belief in god-without-attributes, and god within oneself.[323][324]
Hindu festivals (Sanskrit: Utsava; literally: "to lift higher") are ceremonies that weave individual and social life to dharma.[325][326] Hinduism has many festivals throughout the year, where the dates are set by the lunisolar Hindu calendar, many coinciding with either the full moon (Holi) or the new moon (Diwali), often with seasonal changes.[327] Some festivals are found only regionally and they celebrate local traditions, while a few such as Holi and Diwali are pan-Hindu.[327][328]
The festivals typically celebrate events from Hinduism, connoting spiritual themes and celebrating aspects of human relationships such as the Sister-Brother bond over the Raksha Bandhan (or Bhai Dooj) festival.[326][329] The same festival sometimes marks different stories depending on the Hindu denomination, and the celebrations incorporate regional themes, traditional agriculture, local arts, family get togethers, Puja rituals and feasts.[325][330]
Some major regional or pan-Hindu festivals include:
Many adherents undertake pilgrimages, which have historically been an important part of Hinduism and remain so today.[331] Pilgrimage sites are called Tirtha, Kshetra, Gopitha or Mahalaya.[332][333] The process or journey associated with Tirtha is called Tirtha-yatra.[334] According to the Hindu text Skanda Purana, Tirtha are of three kinds: Jangam Tirtha is to a movable place, such as a sadhu, a rishi or a guru; Sthawar Tirtha is to an immovable place, like Benaras, Haridwar, Mount Kailash or holy rivers; while Manas Tirtha is to a place of the mind: truth, charity, patience, compassion, soft speech, the soul.[335][336] Tīrtha-yatra is, states Knut A. Jacobsen, anything that has a salvific value to a Hindu, and includes pilgrimage sites such as mountains, forests, seashores, rivers or ponds, as well as virtues, actions, studies or a state of mind.[337][338]
Pilgrimage sites of Hinduism are mentioned in the epic Mahabharata and the Puranas.[339][340] Most Puranas include large sections on Tirtha Mahatmya along with tourist guides,[341] which describe sacred sites and places to visit.[342][343][344] In these texts, Varanasi (Benares, Kashi), Rameshwaram, Kanchipuram, Dwarka, Puri, Haridwar, Sri Rangam, Vrindavan, Ayodhya, Tirupati, Mayapur, Nathdwara, the twelve Jyotirlinga and the Shakti Peetha have been mentioned as particularly holy sites, along with geographies where major rivers meet (sangam) or join the sea.[345][340] Kumbhamela is another major pilgrimage, held on the eve of the solar festival Makar Sankranti. This pilgrimage rotates at a gap of three years among four sites: Prayag Raj at the confluence of the Ganges and Yamuna rivers, Haridwar near the source of the Ganges, Ujjain on the Shipra river and Nasik on the bank of the Godavari river.[346] This is one of the world's largest mass pilgrimages, with an estimated 40 to 100 million people attending the event.[346][347][web 11] At this event, pilgrims say a prayer to the sun and bathe in the river,[346] a tradition attributed to Adi Shankara.[348]
Some pilgrimages are part of a Vrata (vow), which a Hindu may make for a number of reasons.[349][350] It may mark a special occasion, such as the birth of a baby, or as part of a rite of passage such as a baby's first haircut, or after healing from a sickness.[351][352] It may, states Eck, also be the result of prayers answered.[351] An alternative reason for Tirtha, for some Hindus, is to respect wishes or in memory of a beloved person after his or her death.[351] This may include dispersing their cremation ashes in a Tirtha region in a stream, river or sea to honor the wishes of the dead. The journey to a Tirtha, assert some Hindu texts, helps one overcome the sorrow of the loss.[351][note 28]
Other reasons for a Tirtha in Hinduism are to rejuvenate, or to gain spiritual merit, by traveling to famed temples or bathing in rivers such as the Ganges.[355][356][357] Tirtha has been one of the recommended means of addressing remorse and performing penance, for unintentional errors and intentional sins, in the Hindu tradition.[358][359] The proper procedure for a pilgrimage is widely discussed in Hindu texts.[360] The most accepted view is that the greatest austerity comes from traveling on foot, or from making part of the journey on foot, and that the use of a conveyance is only acceptable if the pilgrimage is otherwise impossible.[361]
Hindu society has been categorised into four classes, called varnas. They are the Brahmins: Vedic teachers and priests; the Kshatriyas: warriors and kings; the Vaishyas: farmers and merchants; and the Shudras: servants and labourers.[362]
The Bhagavad Gītā links the varna to an individual's duty (svadharma), inborn nature (svabhāva), and natural tendencies (guṇa).[363] The Manusmṛiti categorises the different castes.[web 12]
Some mobility and flexibility within the varnas challenge allegations of social discrimination in the caste system, as has been pointed out by several sociologists,[364][365] although some other scholars disagree.[366] Scholars debate whether the so-called caste system is part of Hinduism sanctioned by the scriptures or social custom.[367][web 13][note 29] And various contemporary scholars have argued that the caste system was constructed by the British colonial regime.[368]
A renunciant man of knowledge is usually called Varnatita or "beyond all varnas" in Vedantic works. The bhiksu is advised to not bother about the caste of the family from which he begs his food. Scholars like Adi Sankara affirm that not only is Brahman beyond all varnas, the man who is identified with Him also transcends the distinctions and limitations of caste.[369]
In whatever way a Hindu defines the goal of life, there are several methods (yogas) that sages have taught for reaching that goal. Yoga is a Hindu discipline which trains the body, mind and consciousness for health, tranquility and spiritual insight. This is done through a system of postures and exercises to practise control of the body and mind.[370] Texts dedicated to Yoga include the Yoga Sutras, the Hatha Yoga Pradipika, the Bhagavad Gita and, as their philosophical and historical basis, the Upanishads. Yoga is the means, and the four major marga (paths) discussed in Hinduism are: Bhakti Yoga (the path of love and devotion), Karma Yoga (the path of right action), Rāja Yoga (the path of meditation) and Jñāna Yoga (the path of wisdom).[371] An individual may prefer one or some yogas over others, according to his or her inclination and understanding. Practice of one yoga does not exclude others.
Hinduism has a developed system of symbolism and iconography to represent the sacred in art, architecture, literature and worship. These symbols gain their meaning from the scriptures or cultural traditions. The syllable Om (which represents the Brahman and Atman) has grown to represent Hinduism itself, while other markings such as the Swastika sign represent auspiciousness,[373] and the Tilaka (literally, seed) on the forehead – considered to be the location of the spiritual third eye[374] – marks ceremonious welcome, blessing or one's participation in a ritual or rite of passage.[375] Elaborate Tilaka with lines may also identify a devotee of a particular denomination. Flowers, birds, animals, instruments, symmetric mandala drawings, objects and idols are all part of symbolic iconography in Hinduism.[376][377]
Hindus advocate the practice of ahiṃsā (nonviolence) and respect for all life because divinity is believed to permeate all beings, including plants and non-human animals.[378] The term ahiṃsā appears in the Upanishads,[379] the epic Mahabharata[380] and ahiṃsā is the first of the five Yamas (vows of self-restraint) in Patanjali's Yoga Sutras.[381]
In accordance with ahiṃsā, many Hindus embrace vegetarianism to respect higher forms of life. Estimates of strict lacto vegetarians in India (including adherents of all religions) who never eat any meat, fish or eggs vary between 20% and 42%, while others are either less strict vegetarians or non-vegetarians.[382] Those who eat meat prefer the Jhatka (quick death) method of meat production and dislike the Halal (slow bled death) method, believing that the quick death method reduces suffering to the animal.[383][384] Food habits vary with region, with Bengali Hindus and Hindus living in Himalayan regions, or river delta regions, regularly eating meat and fish.[385] Some avoid meat on specific festivals or occasions.[386] Observant Hindus who do eat meat almost always abstain from beef. The cow in Hindu society is traditionally identified as a caretaker and a maternal figure,[387] and Hindu society honours the cow as a symbol of unselfish giving.[388]
There are many Hindu groups that have continued to abide by a strict vegetarian diet in modern times. Some adhere to a diet that is devoid of meat, eggs, and seafood.[389] Food affects body, mind and spirit in Hindu beliefs.[390][391] Hindu texts such as Śāṇḍilya Upanishad[392] and Svātmārāma[393][394] recommend Mitahara (eating in moderation) as one of the Yamas (virtuous self restraints). The Bhagavad Gita links body and mind to food one consumes in verses 17.8 through 17.10.[395]
Some Hindus such as those belonging to the Shaktism tradition,[396] and Hindus in regions such as Bali and Nepal[397][398] practise animal sacrifice.[397] The sacrificed animal is eaten as ritual food.[399] In contrast, the Vaishnava Hindus abhor and vigorously oppose animal sacrifice.[400][401] The principle of non-violence to animals has been so thoroughly adopted in Hinduism that animal sacrifice is uncommon[402] and historically reduced to a vestigial marginal practice.[403]
A Hindu temple is a house of god(s).[404] It is a space and structure designed to bring human beings and gods together, infused with symbolism to express the ideas and beliefs of Hinduism.[405] A temple incorporates all elements of Hindu cosmology, the highest spire or dome representing Mount Meru – reminder of the abode of Brahma and the center of spiritual universe,[406] the carvings and iconography symbolically presenting dharma, kama, artha, moksha and karma.[407][408] The layout, the motifs, the plan and the building process recite ancient rituals, geometric symbolisms, and reflect beliefs and values innate within various schools of Hinduism.[405] Hindu temples are spiritual destinations for many Hindus (not all), as well as landmarks for arts, annual festivals, rite of passage rituals, and community celebrations.[409][410]
Hindu temples come in many styles, diverse locations, deploy different construction methods and are adapted to different deities and regional beliefs.[411] Two major styles of Hindu temples include the Gopuram style found in south India, and Nagara style found in north India.[412][413] Other styles include cave, forest and mountain temples.[414] Yet, despite their differences, almost all Hindu temples share certain common architectural principles, core ideas, symbolism and themes.[405]
Many temples feature one or more idols (murtis). The idol and the Garbhagriya in the Brahma-pada (the center of the temple), under the main spire, serve as a focal point (darsana, a sight) in a Hindu temple.[415] In larger temples, the central space typically is surrounded by an ambulatory for the devotee to walk around and ritually circumambulate the Purusa (Brahman), the universal essence.[405]
Traditionally the life of a Hindu is divided into four Āśramas (phases or life stages; another meaning includes monastery).[416] The four ashramas are: Brahmacharya (student), Grihastha (householder), Vanaprastha (retired) and Sannyasa (renunciation).[417]
Brahmacharya represents the bachelor student stage of life. Grihastha refers to the individual's married life, with the duties of maintaining a household, raising a family, educating one's children, and leading a family-centred and a dharmic social life.[417] The Grihastha stage starts with the Hindu wedding, and has been considered the most important of all stages in a sociological context, as Hindus in this stage not only pursue a virtuous life, but also produce the food and wealth that sustain people in the other stages of life, as well as the offspring that continue mankind.[418] Vanaprastha is the retirement stage, where a person hands over household responsibilities to the next generation, takes an advisory role, and gradually withdraws from the world.[419][420] The Sannyasa stage marks renunciation and a state of disinterest in and detachment from material life, generally without any meaningful property or home (an ascetic state), and is focused on Moksha, peace and a simple spiritual life.[421][422]
The Ashramas system has been one facet of the Dharma concept in Hinduism.[418] Combined with four proper goals of human life (Purusartha), the Ashramas system traditionally aimed at providing a Hindu with fulfilling life and spiritual liberation.[418] While these stages are typically sequential, any person can enter Sannyasa (ascetic) stage and become an Ascetic at any time after the Brahmacharya stage.[423] Sannyasa is not religiously mandatory in Hinduism, and elderly people are free to live with their families.[424]
Some Hindus choose to live a monastic life (Sannyāsa) in pursuit of liberation (moksha) or another form of spiritual perfection.[18] Monastics commit themselves to a simple and celibate life, detached from material pursuits, of meditation and spiritual contemplation.[425] A Hindu monk is called a Sanyāsī, Sādhu, or Swāmi. A female renunciate is called a Sanyāsini. Renunciates receive high respect in Hindu society because of their simple ahimsa-driven lifestyle and dedication to spiritual liberation (moksha) – believed to be the ultimate goal of life in Hinduism.[422] Some monastics live in monasteries, while others wander from place to place, depending on donated food and charity for their needs.[426]
James Mill (1773–1836), in his The History of British India (1817),[427] distinguished three phases in the history of India, namely Hindu, Muslim and British civilisations.[427][428] This periodisation has been criticised for the misconceptions it has given rise to.[429] Another periodisation is the division into "ancient, classical, medieval and modern periods".[430] An elaborate periodisation may be as follows:[12]
Hinduism is a fusion[436][note 5] or synthesis[27][note 6] of various Indian cultures and traditions.[27][note 8] Among the roots of Hinduism are the historical Vedic religion of Iron Age India,[437] itself already the product of "a composite of the Indo-Aryan and Harappan cultures and civilizations",[438][note 31] but also the Sramana[439] or renouncer traditions[110] of northeast India,[439] and mesolithic[440] and neolithic[441] cultures of India, such as the religions of the Indus Valley Civilisation,[442] Dravidian traditions,[443] and the local traditions[110] and tribal religions.[444][note 32]
This "Hindu synthesis" emerged after the Vedic period, between 500[27]-200[28] BCE and c. 300 CE,[27] the beginning of the "Epic and Puranic" c.q. "Preclassical" period,[27][28] and incorporated śramaṇic[28][445] and Buddhist influences[28][446] and the emerging bhakti tradition into the Brahmanical fold via the Smriti literature.[447][28] From northern India this "Hindu synthesis", and its societal divisions, spread to southern India and parts of Southeast Asia, as the Brahmanical culture was adopted by courts and rulers.[448] Hinduism co-existed for several centuries with Buddhism,[449] to finally gain the upper hand at all levels in the 8th century.[450][web 15][note 33]
According to Eliot Deutsch, brahmins played an essential role in the development of this synthesis. They were bilingual and bicultural, speaking both their local language and popular Sanskrit, which transcended regional differences in culture and language. They were able to "translate the mainstream of the large culture in terms of the village and the culture of the village in terms of the mainstream," thereby integrating the local culture into a larger whole.[118] While vaidikas and, to a lesser degree, smartas remained faithful to the traditional Vedic lore, a new brahminism arose which composed litanies for the local and regional gods, and became the ministers of these local traditions.[118]
The earliest prehistoric religion in India that may have left its traces in Hinduism dates from the Mesolithic period, as observed in sites such as the rock paintings of the Bhimbetka rock shelters, dating to 30,000 BCE or older,[note 34] as well as from Neolithic times.[note 35] Some of these religious practices can be considered to have originated around 4000 BCE. Several tribal religions still exist, though their practices may not resemble those of prehistoric religions.[web 16]
According to anthropologist Possehl, the Indus Valley Civilization "provides a logical, if somewhat arbitrary, starting point for some aspects of the later Hindu tradition".[452] The religion of this period included worship of a great male god, which is compared to a proto-Shiva, and probably a Mother Goddess, which may prefigure Shakti. However, these links of deities and practices of the Indus religion to later-day Hinduism are subject to both political contention and scholarly dispute.[453]
The Vedic period, during which the Vedas (the liturgical texts of the religion of some of the Indo-Aryans,[454][note 36] as codified at the Kuru Kingdom[456]) were composed, lasted from c. 1500 to 500 BCE.[457][note 37] The Indo-Aryans were semi-nomadic pastoralists[456] who migrated into north-western India after the collapse of the Indus Valley Civilization.[455][459][460]
The Puranic chronology, the timeline of events in ancient Indian history as narrated in the Mahabharata, the Ramayana and the Puranas, envisions an older chronology for the Vedic culture. In this view, the Vedas were received thousands of years ago. The Kurukshetra War, the background-scene of the Bhagavad Gita, which may relate to historical events that took place c. 1000 BCE in the heartland of Aryavarta,[456][461] is dated in this chronology at c. 3100 BCE, while Gulshan (1940) dates the start of the reign of Manu Vaivasvata at 7350 BCE.[462] Some Indian writers and archaeologists have opposed the notion of a migration of Indo-Aryans into India and argued for an indigenous origin of the Indo-Aryans,[463][455][464] but though popular in India, these ideas have no support in mainstream academic scholarship.[note 38] The linguistic and religious data show clear links with Indo-European languages and religion,[465] while recent genetic research shows that people related to the Corded Ware culture arrived in India from the steppes via the Inner Asia Mountain Corridor in the second millennium BCE.[466][467][468][web 17][web 18] According to Singh, "The dominant view is that the Indo-Aryans came to the subcontinent as immigrants."[464]
During the early Vedic period (c. 1500 – c. 1100 BCE[456]) Vedic tribes were pastoralists, wandering around in north-west India.[469] After 1100 BCE the Vedic tribes moved into the western Ganges Plain, adopting an agrarian lifestyle.[456][470][471] Rudimentary state-forms appeared, of which the Kuru-Pañcāla union was the most influential.[472][473] It was a tribal union, which developed into the first recorded state-level society in South Asia around 1000 BCE.[456] This, according to Witzel, decisively changed their religious heritage of the early Vedic period, collecting their ritual hymns into the Vedic collections, and shifting ritual exchange within a tribe to social exchange within the larger Kuru realm through complicated Srauta rituals.[474] In this period, states Samuel, emerged the Brahmana and Aranyaka layers of Vedic texts, which merged into the earliest Upanishads.[475] These texts began to ask about the meaning of ritual, adding increasing levels of philosophical and metaphysical speculation,[475] or "Hindu synthesis".[27]
The Indo-Aryans brought with them their language[476] and religion.[477][478] The Vedic beliefs and practices of the pre-classical era were closely related to the hypothesised Proto-Indo-European religion,[479] and the Indo-Iranian religion.[480][note 39]
The history of the Vedic religion is unclear and "heavily contested", states Samuel.[487] In the later Vedic period, it co-existed with local religions, such as the mother-goddess-worshipping Yaksha cults.[488][web 19] The Vedic religion was itself likely the product of "a composite of the Indo-Aryan and Harappan cultures and civilizations".[438] David Gordon White cites three other mainstream scholars who "have emphatically demonstrated" that Vedic religion is partially derived from the Indus Valley Civilization.[489][note 31] Their religion was further developed when they migrated into the Ganges Plain after c. 1100 BCE and became settled farmers,[456][491][492] further syncretising with the native cultures of northern India.[493]
The composition of the Vedic literature began in the 2nd millennium BCE.[494][495] The oldest of these Vedic texts is the Rigveda, composed between c. 1500 – 1200 BCE,[496][497][498] though a wider approximation of c. 1700 – 1100 BCE has also been given.[499][500]
The evidence suggests that the Vedic religion evolved in "two superficially contradictory directions", state Jamison and Witzel, namely an ever more "elaborate, expensive, and specialized system of rituals",[501] which survives in the present-day srauta-ritual,[502] and "abstraction and internalization of the principles underlying ritual and cosmic speculation" within oneself,[501][503] akin to the Jain and Buddhist tradition.
The first half of the 1st millennium BCE was a period of great intellectual and social-cultural ferment in ancient India.[504][505] New ideas developed both in the Vedic tradition in the form of the Upanishads, and outside of the Vedic tradition through the Śramaṇa movements.[506][507][508][note 40] For example, prior to the birth of the Buddha and the Mahavira, and related Sramana movements, the Brahmanical tradition had questioned the meaning and efficacy of Vedic rituals,[510] then internalized and variously reinterpreted the Vedic fire rituals as ethical concepts such as Truth, Rite, Tranquility or Restraint.[511] The 9th and 8th centuries BCE witnessed the composition of the earliest Upanishads with such ideas.[511][512]:183 Other ancient Principal Upanishads were composed in the centuries that followed, forming the foundation of classical Hinduism and the Vedanta (conclusion of the Veda) literature.[513]
Brahmanism, also called Brahminism, developed out of the Vedic religion, incorporating non-Vedic religious ideas, and expanding to a region stretching from the northwest Indian subcontinent to the Ganges valley.[514] Brahmanism included the Vedic corpus, but also post-Vedic texts such as the Dharmasutras and Dharmasastras, which gave prominence to the priestly (Brahmin) class of the society.[514] The emphasis on ritual and the dominant position of Brahmans developed as an ideology in the Kuru-Pancala realm, and expanded into a wider realm after the demise of the Kuru-Pancala realm.[456] It co-existed with local religions, such as the Yaksha cults.[493][515][516]
Increasing urbanisation of India between 800 and 400 BCE, and possibly the spread of urban diseases, contributed to the rise of ascetic movements and of new ideas which challenged the orthodox Brahmanism.[517] These ideas led to Sramana movements, of which Mahavira (c. 549 – 477 BCE), proponent of Jainism, and Buddha (c. 563 – 483 BCE), founder of Buddhism, were the most prominent icons.[512]:184 According to Bronkhorst, the sramana culture arose in greater Magadha, which was Indo-European but not Vedic. In this culture, kshatriyas were placed higher than Brahmins, and it rejected Vedic authority and rituals.[518][519] Geoffrey Samuel, following Tom Hopkins, also argues that the Gangetic plain, which gave rise to Jainism and Buddhism, incorporated a culture which was different from the Brahmanical orthodoxy practiced in the Kuru-Pancala region.[520]
The ascetic tradition of Vedic period in part created the foundational theories of samsara and of moksha (liberation from samsara), which became characteristic for Hinduism, along with Buddhism and Jainism.[note 41][521]
These ascetic concepts were adopted by schools of Hinduism as well as other major Indian religions, but key differences between their premises defined their further development. Hinduism, for example, developed its ideas with the premise that every human being has a soul (atman, self), while Buddhism developed with the premise that there is no soul or self.[522][523][524]
The chronology of these religious concepts is unclear, and scholars contest which religion affected the other as well as the chronological sequence of the ancient texts.[525][526] Pratt notes that Oldenberg (1854–1920), Neumann (1865–1915) and Radhakrishnan (1888–1975) believed that the Buddhist canon had been influenced by the Upanishads, while La Vallée Poussin thinks the influence was nil, and "Eliot and several others insist that on some points such as the existence of soul or self the Buddha was directly antithetical to the Upanishads".[527][note 42]
The post-Vedic period of the Second Urbanisation saw a decline of Brahmanism.[529][530] At the end of the Vedic period, the meaning of the words of the Vedas had become obscure, and was perceived as "a fixed sequence of sounds"[531][note 43] with a magical power, "means to an end."[note 44] With the growth of cities, which threatened the income and patronage of the rural Brahmins; the rise of Buddhism; and the Indian campaign of Alexander the Great (327–325 BCE), the rise of the Mauryan Empire (322–185 BCE), and the Saka invasions and rule of northwestern India (2nd century BCE – 4th century CE), Brahmanism faced a grave threat to its existence.[532][533] In some later texts, northwestern India (which earlier texts consider as part of Aryavarta) is even seen as "impure", probably due to invasions. The Karnaparva 43.5–8 states that those who live on the Sindhu and the five rivers of the Punjab are impure and dharmabahya.
From about 500 BCE through about 300 CE, the Vedic-Brahmanic synthesis or "Hindu synthesis" continued.[27] Classical Hindu and Sramanic (particularly Buddhist) ideas spread within the Indian subcontinent, as well as outside India, such as in Central Asia[534] and parts of Southeast Asia (the coasts of Indonesia and peninsular Thailand).[note 45][535]
The decline of Brahmanism was overcome by providing new services[536] and incorporating the non-Vedic Indo-Aryan religious heritage of the eastern Ganges plain and local religious traditions, giving rise to contemporary Hinduism.[532] The "Hindu synthesis" or "Brahmanical synthesis"[27][28] incorporated Sramanic and Buddhist influences[28][446] into the "Brahmanical fold" via the Smriti ("remembered") literature.[447][28] According to Embree, several other religious traditions had existed side by side with the Vedic religion. These indigenous religions "eventually found a place under the broad mantle of the Vedic religion".[537] The Smriti texts of the period between 200 BCE and 100 CE affirmed the authority of the Vedas. The acceptance of the ideas in the Vedas and Upanishads became a central criterion for defining Hinduism, while the heterodox movements rejected those ideas.[538]
The major Sanskrit epics, Ramayana and Mahabharata, which belong to the Smriti, were compiled over a protracted period during the late centuries BCE and the early centuries CE.[447][web 20] These are legendary dialogues interspersed with philosophical treatises. The Bhagavad Gita was composed in this period and consolidated diverse philosophies and soteriological ideas.[539]
During this period, the foundational texts of several schools of Hindu philosophy were formally written down, including Samkhya, Yoga, Nyaya, Vaisheshika, Purva-Mimamsa and Vedanta.[540] The Smriti literature of Hinduism, particularly the Sutras, as well as other Hindu texts such as the Arthashastra and Sushruta Samhita were also written or expanded during this period.[447][541]
Many influential Yoga Upanishads, states Gavin Flood, were composed before the 3rd century CE.[542][543] Seven Sannyasa Upanishads of Hinduism were composed between the last centuries of the 1st millennium BCE and before the 3rd century CE.[544][545] All these texts describe Hindu renunciation and monastic values, and express strongly Advaita Vedanta tradition ideas. This, state Patrick Olivelle and other scholars, is likely because the monasteries of Advaita tradition of Hinduism had become well established in ancient times.[546][547][548] The first version of Natyasastra – a Hindu text on performance arts that integrates Vedic ideology – was also completed before the 2nd century CE.[549][550]
During the Gupta period, the first stone and cave Hindu temples dedicated to Hindu deities were built, some of which have survived into the modern era.[551][note 46] Numerous monasteries and universities were also built during the Gupta dynasty era, which supported Vedic and non-Vedic studies, including the famed Nalanda.[553][554]
The first version of early Puranas, likely composed between 250 and 500 CE, show continuities with the Vedic religion, but also an expanded mythology of Vishnu, Shiva and Devi (goddess).[555] The Puranas were living texts that were revised over time,[556] and Lorenzen suggests these texts may reflect the beginnings of "medieval Hinduism".[122]
After the end of the Gupta Empire, power became decentralised in India. The disintegration of central power also led to regionalisation of religiosity, and religious rivalry.[557] Rural and devotional movements arose within Hinduism, along with Shaivism, Vaisnavism, Bhakti and Tantra,[557] that competed with each other, as well as with numerous sects of Buddhism and Jainism.[557][558] Buddhism declined, though many of its ideas, and even the Buddha himself, were absorbed into certain Brahmanical traditions.[559]
Srauta rituals declined in India and were replaced with Buddhist and Hindu initiatory rituals for royal courts.[560] Over time, some Buddhist practices were integrated into Hinduism, monumental Hindu temples were built in South Asia and Southeast Asia,[561] and Vajrayana Buddhist literature developed as a result of royal courts sponsoring both Buddhism and Saivism.[562]
The first editions of many Puranas were composed in this period. Examples include the Bhagavata Purana and the Vishnu Purana with legends of Krishna,[563] while the Padma Purana and Kurma Purana expressed reverence for Vishnu, Shiva and Shakti with equal enthusiasm;[564] all of them included topics such as Yoga practice and pilgrimage tour guides to Hindu holy sites.[565][566] Early colonial-era orientalists proposed that the Puranas were religious texts of medieval Hinduism.[567] However, modern-era scholars, such as Urs App, Ronald Inden and Ludo Rocher, state that this is highly misleading because these texts were continuously revised, exist in numerous very different versions and are too inconsistent to be religious texts.[567][568][569]
Bhakti ideas, centered on loving devotion to Vishnu and Shiva expressed through songs and music, were pioneered in this period by the Alvars and Nayanars of South India.[570][571] Major Hinduism scholars of this period included Adi Shankara, Maṇḍana-Miśra, Padmapada and Sureśvara of the Advaita schools;[572] Śabara, Vatsyayana and Samkarasvamin of the Nyaya-Vaisesika schools; Mathara and the anonymous author of the Yuktidipika of Samkhya-Yoga; Bhartrhari, Vasugupta and Abhinavagupta of Kashmir Shaivism; and Ramanuja of the Vishishtadvaita school of Hinduism (Sri Vaishnavism).[573][web 21][574]
The Islamic rule period witnessed Hindu-Muslim confrontation and violence,[575][576] but "violence did not normally characterize the relations of Muslim and Hindu."[577][578] Enslavement of non-Muslims, especially Hindus in India, was part of the Muslim raids and conquests.[579][580] After the 14th century slavery became less common,[581] and in 1562 "Akbar abolished the practice of enslaving the families of war captives."[582] Akbar recognized Hinduism, protected Hindu temples, and abolished Jizya (head taxes) against Hindus.[580][583] Occasionally, Muslim rulers of the Delhi Sultanate and the Mughal Empire, before and after Akbar, from the 12th century to the 18th century, destroyed Hindu temples (e.g. the Kesavadeva Temple at Mathura was destroyed and an Eidgah was built,[584][585] and the Bindumadhava temple was destroyed and the Alamgir Mosque was built[586][587][589][note 47]) and persecuted non-Muslims.
Though Islam came to the Indian subcontinent in the early 7th century with the advent of Arab traders, it started impacting Indian religions after the 10th century, and particularly after the 12th century with the establishment and then expansion of Islamic rule.[590][591] During this period Buddhism declined rapidly, and a distinct Indo-Islamic culture emerged.[592] Under Akbar, an "intriguing blend of Perso-Islamic and Rajput-Hindu traditions became manifest."[593] Nevertheless, many orthodox ulama ("learned Islamic jurists") opposed the rapprochement of Hinduism and Islam,[593] and the two merely co-existed,[594] although there was more accommodation at the peasantry level of Indian society.[594]
According to Hardy, the Muslim rulers were not concerned with the number of converts, since the stability and continuity of their regime did not depend on the number of Muslims.[595] In general, religious conversion was a gradual process, with some converts attracted to pious Muslim saints, while others converted to Islam to gain relief from the jizya tax levied on Hindus, land grants, marriage partners, social and economic advancement,[596] or freedom from slavery.[597] In border regions such as the Punjab and eastern Bengal, the share of Muslims grew as large as 70% to 90% of the population, whereas in the heartland of Muslim rule, the upper Gangetic Plain, the Muslims constituted only 10 to 15% of the population.[note 48]
Between the 14th and 18th centuries, Hinduism was revived in certain provinces of India under two powerful states, viz. Vijayanagar and Maratha. In the 14th and 15th centuries Southern India saw the rise of the Hindu Vijayanagar Empire, which served as a barrier against invasion by the Muslim sultanates of the north, and which fostered the reconstruction of Hindu life and administration.[web 22] Vidyaranya, also known as Madhava, who was the 12th Jagadguru of the Śringeri Śarada Pītham from 1380 to 1386[598] and a minister in the Vijayanagara Empire,[599] helped establish Shankara as a rallying symbol of values, and helped spread the historical and cultural influence of Shankara's Vedanta philosophies.[600][601] The Hindu Maratha Confederacy rose to power in the 18th century and ended up overthrowing Muslim power in India.[602][603]
Another Hindu polity was that of the Eastern Ganga and Surya dynasties, which ruled much of present-day Odisha (historically known as Kalinga) from the 11th century until the mid-16th century CE. During the 13th and 14th centuries, when large parts of India were under the rule of Muslim powers, an independent Kalinga became a stronghold of Hindu religion, philosophy, art, and architecture. The Eastern Ganga rulers were great patrons of religion and the arts, and the temples they built are considered among the masterpieces of Hindu architecture.[604][605]
Hinduism underwent profound changes, aided in part by teachers such as Ramanuja, Madhva, and Chaitanya.[606] Tantra disappeared in northern India, partly due to Muslim rule,[607] while the Bhakti movement grew, with followers engaging in emotional, passionate and community-oriented devotional worship, participating in saguna or nirguna Brahman ideologies.[608][606][609] According to Nicholson, already between the 12th and the 16th century, "certain thinkers began to treat as a single whole the diverse philosophical teachings of the Upanishads, epics, Puranas, and the schools known retrospectively as the 'six systems' (saddarsana) of mainstream Hindu philosophy."[126][note 49] Michaels notes that a historicization emerged which preceded later nationalism, articulating ideas which glorified Hinduism and the past.[131]
With the onset of the British Raj, the colonization of India by the British, there also started a Hindu Renaissance in the 19th century, which profoundly changed the understanding of Hinduism in both India and the West.[610] Indology, as an academic discipline of studying Indian culture from a European perspective, was established in the 19th century, led by scholars such as Max Müller and John Woodroffe. They brought Vedic, Puranic and Tantric literature and philosophy to Europe and the United States. Western orientalists searched for the "essence" of the Indian religions, discerning this in the Vedas,[611] and meanwhile creating the notion of "Hinduism" as a unified body of religious praxis[612] and the popular picture of 'mystical India'.[612][610] This idea of a Vedic essence was taken over by Hindu reform movements such as the Brahmo Samaj, which was supported for a while by the Unitarian Church,[613] together with the ideas of Universalism and Perennialism, the idea that all religions share a common mystic ground.[614] This "Hindu modernism", with proponents like Vivekananda, Aurobindo and Radhakrishnan, became central in the popular understanding of Hinduism.[615][616][617][618][56]
Influential 20th-century Hindus were Ramana Maharshi, B.K.S. Iyengar, Paramahansa Yogananda, Maharishi Mahesh Yogi, Srila Prabhupada (founder of ISKCON), Sri Chinmoy, Swami Rama and others who translated, reformulated and presented Hinduism's foundational texts for contemporary audiences in new iterations, raising the profiles of Yoga and Vedanta in the West and attracting followers and attention in India and abroad.
Hindu practices such as Yoga, Ayurvedic health, Tantric sexuality through Neotantra and the Kama Sutra have spread beyond Hindu communities and have been accepted by several non-Hindus:
Hinduism is attracting Western adherents through the affiliated practice of yoga. Yoga centers in the West—which generally advocate vegetarianism—attract young, well-educated Westerners who are drawn by yoga's benefits for the physical and emotional health; there they are introduced to the Hindu philosophical system taught by most yoga teachers, known as Vedanta.[619]
It is estimated that around 30 million Americans and 5 million Europeans regularly practice some form of Hatha Yoga.[620] In Australia, the number of practitioners is about 300,000.[web 23] In New Zealand the number is also around 300,000.[web 24]
In the 20th century, Hinduism also gained prominence as a political force and a source for national identity in India. With origins traced back to the establishment of the Hindu Mahasabha in the 1910s, the movement grew with the formulation and development of the Hindutva ideology in the following decades; the establishment of Rashtriya Swayamsevak Sangh (RSS) in 1925; and the entry, and later success, of RSS offshoots Jana Sangha and Bharatiya Janata Party (BJP) in electoral politics in post-independence India.[621] Hindu religiosity plays an important role in the nationalist movement.[622][note 50][note 51]
Hinduism is a major religion in India. Hinduism was followed by around 79.8% of the country's population of 1.21 billion (2011 census) (960 million adherents).[web 25] Other significant populations are found in Nepal (23 million), Bangladesh (15 million) and the Indonesian island of Bali (3.9 million).[627] The majority of the Vietnamese Cham people also follow Hinduism, with the largest proportion in Ninh Thuận Province.[628]
Countries with the greatest proportion of Hindus:
Demographically, Hinduism is the world's third largest religion, after Christianity and Islam.[web 40][629]
In the modern era, religious conversion from and to Hinduism has been a controversial subject. Some state the concept of missionary conversion, either way, is anathema to the precepts of Hinduism.[631]
|
307 |
+
|
308 |
+
Religious conversion to Hinduism has a long history outside India. Merchants and traders of India, particularly from the Indian peninsula, carried their religious ideas, which led to religious conversions to Hinduism in southeast Asia.[632][633][634] Within India, archeological and textual evidence such as the 2nd-century BCE Heliodorus pillar suggest that Greeks and other foreigners converted to Hinduism.[635][636] The debate on proselytization and religious conversion between Christianity, Islam and Hinduism is more recent, and started in the 19th century.[637][638][note 52]
|
309 |
+
|
310 |
+
Religious leaders of some Hindu reform movements such as the Arya Samaj launched the Shuddhi movement to proselytize and reconvert Muslims and Christians back to Hinduism,[642][643] while those such as the Brahmo Samaj suggested Hinduism to be a non-missionary religion.[631] All these sects of Hinduism have welcomed new members to their group, while other leaders of Hinduism's diverse schools have stated that, given the intensive proselytization activities of missionary Islam and Christianity, the view that "there is no such thing as proselytism in Hinduism" must be re-examined.[631][642][644]
|
311 |
+
|
312 |
+
The appropriateness of conversion from major religions to Hinduism, and vice versa, has been and remains an actively debated topic in India,[645][646][647] and in Indonesia.[648]
en/2729.html.txt
ADDED
@@ -0,0 +1,33 @@
1 |
+
|
2 |
+
|
3 |
+
In macroeconomics, an industry is a sector that produces goods or related services within an economy.[1] The major source of revenue of a group or company is an indicator of what industry it should be classified in.[2] When a large corporate group has multiple sources of revenue generation, it is considered to be working in different industries. The manufacturing industry became a key sector of production and labour in European and North American countries during the Industrial Revolution, upsetting previous mercantile and feudal economies. This came through many successive rapid advances in technology, such as the development of steam power and the production of steel and coal.
|
4 |
+
|
5 |
+
Following the Industrial Revolution, possibly a third of the economic output came from manufacturing industries. Many developed countries and many developing/semi-developed countries (China, India etc.) depend significantly on manufacturing industry.
|
6 |
+
|
7 |
+
Slavery, the practice of utilizing forced labor to produce goods[3][failed verification] and services, has occurred since antiquity throughout the world as a means of low-cost production. It typically produces goods for which profit depends on economies of scale, especially those for which labor was simple and easy to supervise.[4] International law has declared slavery illegal.[5]
|
8 |
+
|
9 |
+
Guilds, associations of artisans and merchants, oversee the production and distribution of a particular good. Guilds have their roots in the Roman Empire as collegia (singular: collegium). Membership in these early guilds was voluntary. The Roman collegia did not survive the fall of Rome.[6] In the early Middle Ages, guilds once again began to emerge in Europe, reaching a degree of maturity by the beginning of the 14th century.[7][need quotation to verify] While few guilds remain today[update], some modern labor structures resemble those of traditional guilds.[8] Other guilds, such as SAG-AFTRA, act as trade unions rather than as classical guilds. Professor Sheilagh Ogilvie claims that guilds negatively affected quality, skills, and innovation in areas where they were present.[9]
|
10 |
+
|
11 |
+
The industrial revolution (from the mid-18th century to the mid-19th century) saw the development and popularization of mechanized means of production as a replacement for hand production.[10] The industrial revolution played a role in the abolition of slavery in Europe and in North America.[11]
|
12 |
+
|
13 |
+
In a process dubbed tertiarization, the economic preponderance of primary and secondary industries has declined in recent centuries relative to the rising importance of tertiary industry,[12][13]
|
14 |
+
resulting in the post-industrial economy. Specialization in industry[14]
|
15 |
+
and in the classification of industry has also occurred. Thus (for example) a record producer might claim to speak on behalf of the Japanese rock industry, the recording industry, the music industry or the entertainment industry - and any formulation will sound grandiose and weighty.
|
16 |
+
|
17 |
+
The Industrial Revolution led to the development of factories for large-scale production with consequent changes in society.[15] Originally the factories were steam-powered, but later transitioned to electricity once an electrical grid was developed. The mechanized assembly line was introduced to assemble parts in a repeatable fashion, with individual workers performing specific steps during the process. This led to significant increases in efficiency, lowering the cost of the end process. Later automation was increasingly used to replace human operators. This process has accelerated with the development of the computer and the robot.
|
18 |
+
|
19 |
+
Historically certain manufacturing industries have gone into a decline due to various economic factors, including the development of replacement technology or the loss of competitive advantage. An example of the former is the decline in carriage manufacturing when the automobile was mass-produced.
|
20 |
+
|
21 |
+
A recent trend has been the migration of prosperous, industrialized nations towards a post-industrial society. This is manifested by an increase in the service sector at the expense of manufacturing, and the development of an information-based economy, the so-called informational revolution. In a post-industrial society, manufacturers relocate to more profitable locations through a process of off-shoring.
|
22 |
+
|
23 |
+
Measurements of manufacturing industries outputs and economic effect are not historically stable. Traditionally, success has been measured in the number of jobs created. The reduced number of employees in the manufacturing sector has been assumed to result from a decline in the competitiveness of the sector, or the introduction of the lean manufacturing process.
|
24 |
+
|
25 |
+
Related to this change is the upgrading of the quality of the product being manufactured. While it is possible to produce a low-technology product with low-skill labour, the ability to manufacture high-technology products well is dependent on a highly skilled staff.
|
26 |
+
|
27 |
+
An industrial society is a society driven by the use of technology to enable mass production, supporting a large population with a high capacity for division of labour. Today, industry is an important part of most societies and nations. A government must have some kind of industrial policy, regulating industrial placement, industrial pollution, financing and industrial labour.
|
28 |
+
|
29 |
+
In an industrial society, industry employs a major part of the population. This occurs typically in the manufacturing sector. A labour union is an organization of workers who have banded together to achieve common goals in key areas such as wages, hours, and other working conditions. The trade union, through its leadership, bargains with the employer on behalf of union members (rank and file members) and negotiates labour contracts with employers. This movement first rose among industrial workers.
|
30 |
+
|
31 |
+
The Industrial Revolution changed warfare, with mass-produced weaponry and supplies, machine-powered transportation, mobilization, the total war concept and weapons of mass destruction. Early instances of industrial warfare were the Crimean War and the American Civil War, but its full potential showed during the world wars. See also military-industrial complex, arms industries, military industry and modern warfare.
|
32 |
+
|
33 |
+
The twenty largest countries by industrial output (in nominal terms) at peak level as of 2018, according to the IMF and CIA World Factbook
|
en/273.html.txt
ADDED
@@ -0,0 +1,172 @@
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
|
6 |
+
|
7 |
+
Apiformes (from Latin 'apis')
|
8 |
+
|
9 |
+
Bees are flying insects closely related to wasps and ants, known for their role in pollination and, in the case of the best-known bee species, the western honey bee, for producing honey. Bees are a monophyletic lineage within the superfamily Apoidea. They are presently considered a clade, called Anthophila. There are over 16,000 known species of bees in seven recognized biological families.[1][2] Some species — including honey bees, bumblebees, and stingless bees — live socially in colonies while some species — including mason bees, carpenter bees, leafcutter bees, and sweat bees — are solitary.
|
10 |
+
|
11 |
+
Bees are found on every continent except for Antarctica, in every habitat on the planet that contains insect-pollinated flowering plants. The most common bees in the Northern Hemisphere are the Halictidae, or sweat bees, but they are small and often mistaken for wasps or flies. Bees range in size from tiny stingless bee species, whose workers are less than 2 millimetres (0.08 in) long, to Megachile pluto, the largest species of leafcutter bee, whose females can attain a length of 39 millimetres (1.54 in).
|
12 |
+
|
13 |
+
Bees feed on nectar and pollen, the former primarily as an energy source and the latter primarily for protein and other nutrients. Most pollen is used as food for their larvae. Vertebrate predators of bees include birds such as bee-eaters; insect predators include beewolves and dragonflies.
|
14 |
+
|
15 |
+
Bee pollination is important both ecologically and commercially, and the decline in wild bees has increased the value of pollination by commercially managed hives of honey bees. The analysis of 353 wild bee and hoverfly species across Britain from 1980 to 2013 found the insects have been lost from a quarter of the places they inhabited in 1980.[3]
|
16 |
+
|
17 |
+
Human beekeeping or apiculture has been practised for millennia, since at least the times of Ancient Egypt and Ancient Greece. Bees have appeared in mythology and folklore, through all phases of art and literature from ancient times to the present day, although primarily focused in the Northern Hemisphere where beekeeping is far more common.
|
18 |
+
|
19 |
+
The ancestors of bees were wasps in the family Crabronidae, which were predators of other insects. The switch from insect prey to pollen may have resulted from the consumption of prey insects which were flower visitors and were partially covered with pollen when they were fed to the wasp larvae. This same evolutionary scenario may have occurred within the vespoid wasps, where the pollen wasps evolved from predatory ancestors. Until recently, the oldest non-compression bee fossil had been found in New Jersey amber, Cretotrigona prisca of Cretaceous age, a corbiculate bee.[4] A bee fossil from the early Cretaceous (~100 mya), Melittosphex burmensis, is considered "an extinct lineage of pollen-collecting Apoidea sister to the modern bees".[5] Derived features of its morphology (apomorphies) place it clearly within the bees, but it retains two unmodified ancestral traits (plesiomorphies) of the legs (two mid-tibial spurs, and a slender hind basitarsus), showing its transitional status.[5] By the Eocene (~45 mya) there was already considerable diversity among eusocial bee lineages.[6][a]
|
20 |
+
|
21 |
+
The highly eusocial corbiculate Apidae appeared roughly 87 Mya, and the Allodapini (within the Apidae) around 53 Mya.[9]
|
22 |
+
The Colletidae appear as fossils only from the late Oligocene (~25 Mya) to early Miocene.[10]
|
23 |
+
The Melittidae are known from Palaeomacropis eocenicus in the Early Eocene.[11]
|
24 |
+
The Megachilidae are known from trace fossils (characteristic leaf cuttings) from the Middle Eocene.[12]
|
25 |
+
The Andrenidae are known from the Eocene-Oligocene boundary, around 34 Mya, of the Florissant shale.[13]
|
26 |
+
The Halictidae first appear in the Early Eocene[14] with species[15][16] found in amber. The Stenotritidae are known from fossil brood cells of Pleistocene age.[17]
|
27 |
+
|
28 |
+
The earliest animal-pollinated flowers were shallow, cup-shaped blooms pollinated by insects such as beetles, so the syndrome of insect pollination was well established before the first appearance of bees. The novelty is that bees are specialized as pollination agents, with behavioral and physical modifications that specifically enhance pollination, and are the most efficient pollinating insects. In a process of coevolution, flowers developed floral rewards[18] such as nectar and longer tubes, and bees developed longer tongues to extract the nectar.[19] Bees also developed structures known as scopal hairs and pollen baskets to collect and carry pollen. The location and type differ among and between groups of bees. Most species have scopal hairs on their hind legs or on the underside of their abdomens. Some species in the family Apidae have pollen baskets on their hind legs, while very few lack these and instead collect pollen in their crops.[2] The appearance of these structures drove the adaptive radiation of the angiosperms, and, in turn, bees themselves.[7] Bees coevolved not only with flowers; it is believed that some species also coevolved with mites. Some provide tufts of hairs called acarinaria that appear to provide lodgings for mites; in return, it is believed that the mites eat fungi that attack pollen, so the relationship in this case may be mutualistic.[20][21]
|
29 |
+
|
30 |
+
This phylogenetic tree is based on Debevic et al., 2012, which used molecular phylogeny to demonstrate that the bees (Anthophila) arose from deep within the Crabronidae, which is therefore paraphyletic. The placement of the Heterogynaidae is uncertain.[22] The small subfamily Mellininae was not included in this analysis.
|
31 |
+
|
32 |
+
Ampulicidae (Cockroach wasps)
|
33 |
+
|
34 |
+
Heterogynaidae (possible placement #1)
|
35 |
+
|
36 |
+
Sphecidae (sensu stricto)
|
37 |
+
|
38 |
+
Crabroninae (part of "Crabronidae")
|
39 |
+
|
40 |
+
Bembicini
|
41 |
+
|
42 |
+
Nyssonini, Astatinae
|
43 |
+
|
44 |
+
Heterogynaidae (possible placement #2)
|
45 |
+
|
46 |
+
Pemphredoninae, Philanthinae
|
47 |
+
|
48 |
+
Anthophila (bees)
|
49 |
+
|
50 |
+
This cladogram of the bee families is based on Hedtke et al., 2013, which places the former families Dasypodaidae and Meganomiidae as subfamilies inside the Melittidae.[23] English names, where available, are given in parentheses.
|
51 |
+
|
52 |
+
Melittidae (inc. Dasypodainae, Meganomiinae) at least 50 Mya
|
53 |
+
|
54 |
+
Apidae (inc. honeybees, cuckoo bees, carpenter bees) ≈87 Mya
|
55 |
+
|
56 |
+
Megachilidae (mason, leafcutter bees) ≈50 Mya
|
57 |
+
|
58 |
+
Andrenidae (mining bees) ≈34 Mya
|
59 |
+
|
60 |
+
Halictidae (sweat bees) ≈50 Mya
|
61 |
+
|
62 |
+
Colletidae (plasterer bees) ≈25 Mya
|
63 |
+
|
64 |
+
Stenotritidae (large Australian bees) ≈2 Mya
|
65 |
+
|
66 |
+
Bees differ from closely related groups such as wasps by having branched or plume-like setae (hairs), combs on the forelimbs for cleaning their antennae, small anatomical differences in limb structure, and the venation of the hind wings; and in females, by having the seventh dorsal abdominal plate divided into two half-plates.[24]
|
67 |
+
|
68 |
+
Bees have the following characteristics:
|
69 |
+
|
70 |
+
The largest species of bee is thought to be Wallace's giant bee Megachile pluto, whose females can attain a length of 39 millimetres (1.54 in).[26] The smallest species may be dwarf stingless bees in the tribe Meliponini whose workers are less than 2 millimetres (0.08 in) in length.[27]
|
71 |
+
|
72 |
+
According to inclusive fitness theory, organisms can gain fitness not just through increasing their own reproductive output, but also that of close relatives. In evolutionary terms, individuals should help relatives when Cost < Relatedness * Benefit. The requirements for eusociality are more easily fulfilled by haplodiploid species such as bees because of their unusual relatedness structure.[28]
|
73 |
+
|
74 |
+
In haplodiploid species, females develop from fertilized eggs and males from unfertilized eggs. Because a male is haploid (has only one copy of each gene), his daughters (which are diploid, with two copies of each gene) share 100% of his genes and 50% of their mother's. Therefore, they share 75% of their genes with each other. This mechanism of sex determination gives rise to what W. D. Hamilton termed "supersisters", more closely related to their sisters than they would be to their own offspring.[29] Workers often do not reproduce, but they can pass on more of their genes by helping to raise their sisters (as queens) than they would by having their own offspring (each of which would only have 50% of their genes), assuming they would produce similar numbers. This unusual situation has been proposed as an explanation of the multiple (at least 9) evolutions of eusociality within Hymenoptera.[30][31]
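The relatedness arithmetic above can be made concrete with a short sketch (Python is used here purely for illustration; the relatedness coefficients are those stated in the text, while the brood size and the cost/benefit figures are hypothetical):

# Relatedness under haplodiploidy, following the description above.

def sister_relatedness():
    # A daughter's genome is half paternal and half maternal.
    # Haploid father: sisters share 100% of his contribution.
    # Diploid mother: sisters share 50% of her contribution on average.
    return 0.5 * 1.0 + 0.5 * 0.5      # = 0.75 ("supersisters")

def offspring_relatedness():
    # A female passes half of her genes to each of her own offspring.
    return 0.5

def helping_favoured(cost, benefit, relatedness):
    # Hamilton's rule as given earlier: help when Cost < Relatedness * Benefit.
    return cost < relatedness * benefit

r_sister = sister_relatedness()        # 0.75
r_offspring = offspring_relatedness()  # 0.50

# Raising n sisters propagates more gene copies than raising n of one's own
# offspring, assuming similar numbers are reared (n here is hypothetical).
n = 3
print(n * r_sister, ">", n * r_offspring)                              # 2.25 > 1.5

# Hypothetical check of Hamilton's rule: forgo one unit of own reproduction
# (cost) to add two sisters to the brood (benefit), with relatedness 0.75.
print(helping_favoured(cost=1.0, benefit=2.0, relatedness=r_sister))   # True

Because 0.75 exceeds 0.5, the sketch reproduces the comparison made above: a worker that helps rear sisters passes on more copies of her genes than one that reproduces directly, given similar numbers raised.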
|
75 |
+
|
76 |
+
Haplodiploidy is neither necessary nor sufficient for eusociality. Some eusocial species such as termites are not haplodiploid. Conversely, all bees are haplodiploid but not all are eusocial, and among eusocial species many queens mate with multiple males, creating half-sisters that share only 25% of each other's genes.[32] But monogamy (queens mating singly) is the ancestral state for all eusocial species so far investigated, so it is likely that haplodiploidy contributed to the evolution of eusociality in bees.[30]
|
77 |
+
|
78 |
+
Bees may be solitary or may live in various types of communities. Eusociality appears to have originated from at least three independent origins in halictid bees.[33] The most advanced of these are species with eusocial colonies; these are characterised by cooperative brood care and a division of labour into reproductive and non-reproductive adults, plus overlapping generations.[34] This division of labour creates specialized groups within eusocial societies which are called castes. In some species, groups of cohabiting females may be sisters, and if there is a division of labour within the group, they are considered semisocial. The group is called eusocial if, in addition, the group consists of a mother (the queen) and her daughters (workers). When the castes are purely behavioural alternatives, with no morphological differentiation other than size, the system is considered primitively eusocial, as in many paper wasps; when the castes are morphologically discrete, the system is considered highly eusocial.[19]
|
79 |
+
|
80 |
+
True honey bees (genus Apis, of which seven species are currently recognized) are highly eusocial, and are among the best known insects. Their colonies are established by swarms, consisting of a queen and several hundred workers. There are 29 subspecies of one of these species, Apis mellifera, native to Europe, the Middle East, and Africa. Africanized bees are a hybrid strain of A. mellifera that escaped from experiments involving crossing European and African subspecies; they are extremely defensive.[35]
|
81 |
+
|
82 |
+
Stingless bees are also highly eusocial. They practise mass provisioning, with complex nest architecture and perennial colonies also established via swarming.[36]
|
83 |
+
|
84 |
+
Many bumblebees are eusocial, similar to the eusocial Vespidae such as hornets in that the queen initiates a nest on her own rather than by swarming. Bumblebee colonies typically have from 50 to 200 bees at peak population, which occurs in mid to late summer. Nest architecture is simple, limited by the size of the pre-existing nest cavity, and colonies rarely last more than a year.[37] In 2011, the International Union for Conservation of Nature set up the Bumblebee Specialist Group to review the threat status of all bumblebee species worldwide using the IUCN Red List criteria.[38]
|
85 |
+
|
86 |
+
There are many more species of primitively eusocial than highly eusocial bees, but they have been studied less often. Most are in the family Halictidae, or "sweat bees". Colonies are typically small, with a dozen or fewer workers, on average. Queens and workers differ only in size, if at all. Most species have a single season colony cycle, even in the tropics, and only mated females hibernate. A few species have long active seasons and attain colony sizes in the hundreds, such as Halictus hesperus.[39] Some species are eusocial in parts of their range and solitary in others,[40] or have a mix of eusocial and solitary nests in the same population.[41] The orchid bees (Apidae) include some primitively eusocial species with similar biology. Some allodapine bees (Apidae) form primitively eusocial colonies, with progressive provisioning: a larva's food is supplied gradually as it develops, as is the case in honey bees and some bumblebees.[42]
|
87 |
+
|
88 |
+
Most other bees, including familiar insects such as carpenter bees, leafcutter bees and mason bees, are solitary in the sense that every female is fertile, and typically inhabits a nest she constructs herself. There is no division of labor in these species, so their nests lack queens and worker bees. Solitary bees typically produce neither honey nor beeswax.
|
89 |
+
Bees collect pollen to feed their young, and have the necessary adaptations to do this. However, certain wasp species such as pollen wasps have similar behaviours, and a few species of bee scavenge from carcases to feed their offspring.[24] Solitary bees are important pollinators; they gather pollen to provision their nests with food for their brood. Often it is mixed with nectar to form a paste-like consistency. Some solitary bees have advanced types of pollen-carrying structures on their bodies. Very few species of solitary bee are being cultured for commercial pollination. Most of these species belong to a distinct set of genera which are commonly known by their nesting behavior or preferences, namely: carpenter bees, sweat bees, mason bees, plasterer bees, squash bees, dwarf carpenter bees, leafcutter bees, alkali bees and digger bees.[43]
|
90 |
+
|
91 |
+
Most solitary bees nest in the ground in a variety of soil textures and conditions, while others create nests in hollow reeds or twigs, or in holes in wood. The female typically creates a compartment (a "cell") with an egg and some provisions for the resulting larva, then seals it off. A nest may consist of numerous cells. When the nest is in wood, usually the last cells (those closer to the entrance) contain eggs that will become males. The adult does not provide care for the brood once the egg is laid, and usually dies after making one or more nests. The males typically emerge first and are ready for mating when the females emerge. Solitary bees are either stingless or very unlikely to sting (only in self-defense, if ever).[44][45]
|
92 |
+
|
93 |
+
While solitary, females each make individual nests. Some species, such as the European mason bee Hoplitis anthocopoides,[46] and the Dawson's Burrowing bee, Amegilla dawsoni,[47] are gregarious, preferring to make nests near others of the same species, and giving the appearance of being social. Large groups of solitary bee nests are called aggregations, to distinguish them from colonies. In some species, multiple females share a common nest, but each makes and provisions her own cells independently. This type of group is called "communal" and is not uncommon. The primary advantage appears to be that a nest entrance is easier to defend from predators and parasites when there are multiple females using that same entrance on a regular basis.[46]
|
94 |
+
|
95 |
+
The life cycle of a bee, be it a solitary or social species, involves the laying of an egg, the development through several moults of a legless larva, a pupation stage during which the insect undergoes complete metamorphosis, followed by the emergence of a winged adult. Most solitary bees and bumble bees in temperate climates overwinter as adults or pupae and emerge in spring when increasing numbers of flowering plants come into bloom. The males usually emerge first and search for females with which to mate. The sex of a bee is determined by whether or not the egg is fertilised; after mating, a female stores the sperm, and determines which sex is required at the time each individual egg is laid, fertilised eggs producing female offspring and unfertilised eggs, males. Tropical bees may have several generations in a year and no diapause stage.[48][49][50][51]
|
96 |
+
|
97 |
+
The egg is generally oblong, slightly curved and tapering at one end. Solitary bees lay each egg in a separate cell with a supply of mixed pollen and nectar next to it. This may be rolled into a pellet or placed in a pile and is known as mass provisioning. Social bee species provision progressively, that is, they feed the larva regularly while it grows. The nest varies from a hole in the ground or in wood, in solitary bees, to a substantial structure with wax combs in bumblebees and honey bees.[52]
|
98 |
+
|
99 |
+
In most species, larvae are whitish grubs, roughly oval and bluntly-pointed at both ends. They have 15 segments and spiracles in each segment for breathing. They have no legs but move within the cell, helped by tubercles on their sides. They have short horns on the head, jaws for chewing food and an appendage on either side of the mouth tipped with a bristle. There is a gland under the mouth that secretes a viscous liquid which solidifies into the silk they use to produce a cocoon. The cocoon is semi-transparent and the pupa can be seen through it. Over the course of a few days, the larva undergoes metamorphosis into a winged adult. When ready to emerge, the adult splits its skin dorsally and climbs out of the exuviae and breaks out of the cell.[52]
|
100 |
+
|
101 |
+
Nest of common carder bumblebee, wax canopy removed to show winged workers and pupae in irregularly placed wax cells
|
102 |
+
|
103 |
+
Carpenter bee nests in a cedar wood beam (sawn open)
|
104 |
+
|
105 |
+
Honeybees on brood comb with eggs and larvae in cells
|
106 |
+
|
107 |
+
Antoine Magnan's 1934 book Le vol des insectes, says that he and André Sainte-Laguë had applied the equations of air resistance to insects and found that their flight could not be explained by fixed-wing calculations, but that "One shouldn't be surprised that the results of the calculations don't square with reality".[53] This has led to a common misconception that bees "violate aerodynamic theory". In fact it merely confirms that bees do not engage in fixed-wing flight, and that their flight is explained by other mechanics, such as those used by helicopters.[54] In 1996 it was shown that vortices created by many insects' wings helped to provide lift.[55] High-speed cinematography[56] and robotic mock-up of a bee wing[57] showed that lift was generated by "the unconventional combination of short, choppy wing strokes, a rapid rotation of the wing as it flops over and reverses direction, and a very fast wing-beat frequency". Wing-beat frequency normally increases as size decreases, but as the bee's wing beat covers such a small arc, it flaps approximately 230 times per second, faster than a fruitfly (200 times per second) which is 80 times smaller.[58]
|
108 |
+
|
109 |
+
The ethologist Karl von Frisch studied navigation in the honey bee. He showed that honey bees communicate by the waggle dance, in which a worker indicates the location of a food source to other workers in the hive. He demonstrated that bees can recognize a desired compass direction in three different ways: by the sun, by the polarization pattern of the blue sky, and by the earth's magnetic field. He showed that the sun is the preferred or main compass; the other mechanisms are used under cloudy skies or inside a dark beehive.[59] Bees navigate using spatial memory with a "rich, map-like organization".[60]
|
110 |
+
|
111 |
+
The gut of bees is relatively simple, but multiple metabolic strategies exist in the gut microbiota.[61] Pollinating bees consume nectar and pollen, which require different digestion strategies by somewhat specialized bacteria. While nectar is a liquid of mostly monosaccharide sugars and so easily absorbed, pollen contains complex polysaccharides: branching pectin and hemicellulose.[62] Approximately five groups of bacteria are involved in digestion. Three groups specialize in simple sugars (Snodgrassella and two groups of Lactobacillus), and two other groups in complex sugars (Gilliamella and Bifidobacterium). Digestion of pectin and hemicellulose is dominated by bacterial clades Gilliamella and Bifidobacterium respectively. Bacteria that cannot digest polysaccharides obtain enzymes from their neighbors, and bacteria that lack certain amino acids do the same, creating multiple ecological niches.[63]
|
112 |
+
|
113 |
+
Although most bee species are nectarivorous and palynivorous, some are not. Particularly unusual are vulture bees in the genus Trigona, which consume carrion and wasp brood, turning meat into a honey-like substance.[64]
|
114 |
+
|
115 |
+
Most bees are polylectic (generalist) meaning they collect pollen from a range of flowering plants, but some are oligoleges (specialists), in that they only gather pollen from one or a few species or genera of closely related plants.[65] Specialist pollinators also include bee species which gather floral oils instead of pollen, and male orchid bees, which gather aromatic compounds from orchids (one of the few cases where male bees are effective pollinators). Bees are able to sense the presence of desirable flowers through ultraviolet patterning on flowers, floral odors,[66] and even electromagnetic fields.[67] Once landed, a bee then uses nectar quality[66] and pollen taste[68] to determine whether to continue visiting similar flowers.
|
116 |
+
|
117 |
+
In rare cases, a plant species may only be effectively pollinated by a single bee species, and some plants are endangered at least in part because their pollinator is also threatened. But, there is a pronounced tendency for oligolectic bees to be associated with common, widespread plants visited by multiple pollinator species. For example, the creosote bush in the arid parts of the United States southwest is associated with some 40 oligoleges.[69]
|
118 |
+
|
119 |
+
Many bees are aposematically coloured, typically orange and black, warning of their ability to defend themselves with a powerful sting. As such they are models for Batesian mimicry by non-stinging insects such as bee-flies, robber flies and hoverflies,[70] all of which gain a measure of protection by superficially looking and behaving like bees.[70]
|
120 |
+
|
121 |
+
Bees are themselves Müllerian mimics of other aposematic insects with the same colour scheme, including wasps, lycid and other beetles, and many butterflies and moths (Lepidoptera) which are themselves distasteful, often through acquiring bitter and poisonous chemicals from their plant food. All the Müllerian mimics, including bees, benefit from the reduced risk of predation that results from their easily recognised warning coloration.[71]
|
122 |
+
|
123 |
+
Bees are also mimicked by plants such as the bee orchid which imitates both the appearance and the scent of a female bee; male bees attempt to mate (pseudocopulation) with the furry lip of the flower, thus pollinating it.[72]
|
124 |
+
|
125 |
+
Brood parasites occur in several bee families including the apid subfamily Nomadinae.[73] Females of these species lack pollen collecting structures (the scopa) and do not construct their own nests. They typically enter the nests of pollen collecting species, and lay their eggs in cells provisioned by the host bee. When the "cuckoo" bee larva hatches, it consumes the host larva's pollen ball, and often the host egg also.[74] In particular, the Arctic bee species, Bombus hyperboreus is an aggressive species that attacks and enslaves other bees of the same subgenus. However, unlike many other bee brood parasites, they have pollen baskets and often collect pollen.[75]
|
126 |
+
|
127 |
+
In Southern Africa, hives of African honeybees (A. mellifera scutellata) are being destroyed by parasitic workers of the Cape honeybee, A. m. capensis. These lay diploid eggs ("thelytoky"), escaping normal worker policing, leading to the colony's destruction; the parasites can then move to other hives.[76]
|
128 |
+
|
129 |
+
The cuckoo bees in the Bombus subgenus Psithyrus are closely related to, and resemble, their hosts in looks and size. This common pattern gave rise to the ecological principle "Emery's rule". Others parasitize bees in different families, like Townsendiella, a nomadine apid, two species of which are cleptoparasites of the dasypodaid genus Hesperapis,[77] while the other species in the same genus attacks halictid bees.[78]
|
130 |
+
|
131 |
+
Four bee families (Andrenidae, Colletidae, Halictidae, and Apidae) contain some species that are crepuscular. Most are tropical or subtropical, but some live in arid regions at higher latitudes. These bees have greatly enlarged ocelli, which are extremely sensitive to light and dark, though incapable of forming images. Some have refracting superposition compound eyes: these combine the output of many elements of their compound eyes to provide enough light for each retinal photoreceptor. Their ability to fly by night enables them to avoid many predators, and to exploit flowers that produce nectar only or also at night.[79]
|
132 |
+
|
133 |
+
Vertebrate predators of bees include bee-eaters, shrikes and flycatchers, which make short sallies to catch insects in flight.[80] Swifts and swallows[80] fly almost continually, catching insects as they go. The honey buzzard attacks bees' nests and eats the larvae.[81] The greater honeyguide interacts with humans by guiding them to the nests of wild bees. The humans break open the nests and take the honey and the bird feeds on the larvae and the wax.[82] Among mammals, predators such as the badger dig up bumblebee nests and eat both the larvae and any stored food.[83]
|
134 |
+
|
135 |
+
Specialist ambush predators of visitors to flowers include crab spiders, which wait on flowering plants for pollinating insects; predatory bugs, and praying mantises,[80] some of which (the flower mantises of the tropics) wait motionless, aggressive mimics camouflaged as flowers.[84] Beewolves are large wasps that habitually attack bees;[80] the ethologist Niko Tinbergen estimated that a single colony of the beewolf Philanthus triangulum might kill several thousand honeybees in a day: all the prey he observed were honeybees.[85] Other predatory insects that sometimes catch bees include robber flies and dragonflies.[80] Honey bees are affected by parasites including acarine and Varroa mites.[86] However, some bees are believed to have a mutualistic relationship with mites.[21]
|
136 |
+
|
137 |
+
Homer's Hymn to Hermes describes three bee-maidens with the power of divination and thus speaking truth, and identifies the food of the gods as honey. Sources associated the bee maidens with Apollo and, until the 1980s, scholars followed Gottfried Hermann (1806) in incorrectly identifying the bee-maidens with the Thriae.[87] Honey, according to a Greek myth, was discovered by a nymph called Melissa ("Bee"); and honey was offered to the Greek gods from Mycenean times. Bees were also associated with the Delphic oracle and the prophetess was sometimes called a bee.[88]
|
138 |
+
|
139 |
+
The image of a community of honey bees has been used from ancient to modern times, in Aristotle and Plato; in Virgil and Seneca; in Erasmus and Shakespeare; Tolstoy, and by political and social theorists such as Bernard Mandeville and Karl Marx as a model for human society.[89] In English folklore, bees would be told of important events in the household, in a custom known as "Telling the bees".[90]
|
140 |
+
|
141 |
+
Some of the oldest examples of bees in art are rock paintings in Spain which have been dated to 15,000 BC.[91]
|
142 |
+
|
143 |
+
W. B. Yeats's poem The Lake Isle of Innisfree (1888) contains the couplet "Nine bean rows will I have there, a hive for the honey bee, / And live alone in the bee loud glade." At the time he was living in Bedford Park in the West of London.[92] Beatrix Potter's illustrated book The Tale of Mrs Tittlemouse (1910) features Babbity Bumble and her brood (pictured). Kit Williams' treasure hunt book The Bee on the Comb (1984) uses bees and beekeeping as part of its story and puzzle. Sue Monk Kidd's The Secret Life of Bees (2004), and the 2009 film starring Dakota Fanning, tells the story of a girl who escapes her abusive home and finds her way to live with a family of beekeepers, the Boatwrights.
|
144 |
+
|
145 |
+
The humorous 2007 animated film Bee Movie used Jerry Seinfeld's first script and was his first work for children; he starred as a bee named Barry B. Benson, alongside Renée Zellweger. Critics found its premise awkward and its delivery tame.[93] Dave Goulson's A Sting in the Tale (2014) describes his efforts to save bumblebees in Britain, as well as much about their biology. The playwright Laline Paull's fantasy The Bees (2015) tells the tale of a hive bee named Flora 717 from hatching onwards.[94]
|
146 |
+
|
147 |
+
Humans have kept honey bee colonies, commonly in hives, for millennia. Beekeepers collect honey, beeswax, propolis, pollen, and royal jelly from hives; bees are also kept to pollinate crops and to produce bees for sale to other beekeepers.
|
148 |
+
|
149 |
+
Depictions of humans collecting honey from wild bees date to 15,000 years ago; efforts to domesticate them are shown in Egyptian art around 4,500 years ago.[95] Simple hives and smoke were used;[96][97] jars of honey were found in the tombs of pharaohs such as Tutankhamun. From the 18th century, European understanding of the colonies and biology of bees allowed the construction of the moveable comb hive so that honey could be harvested without destroying the colony.[98][99] Among Classical Era authors, beekeeping with the use of smoke is described in Aristotle's History of Animals Book 9.[100] The account mentions that bees die after stinging; that workers remove corpses from the hive, and guard it; castes including workers and non-working drones, but "kings" rather than queens; predators including toads and bee-eaters; and the waggle dance, with the "irresistible suggestion" of άpοσειονται ("aroseiontai", it waggles) and παρακολουθούσιν ("parakolouthousin", they watch).[101][b]
|
150 |
+
|
151 |
+
Beekeeping is described in detail by Virgil in his Eclogues; it is also mentioned in his Aeneid, and in Pliny's Natural History.[101]
|
152 |
+
|
153 |
+
Bees play an important role in pollinating flowering plants, and are the major type of pollinator in many ecosystems that contain flowering plants. It is estimated that one third of the human food supply depends on pollination by insects, birds and bats, most of which is accomplished by bees, whether wild or domesticated.[102][103] Over the last half century, there has been a general decline in the species richness of wild bees and other pollinators, probably attributable to stress from increased parasites and disease, the use of pesticides, and a general decrease in the number of wild flowers. Climate change probably exacerbates the problem.[104]
|
154 |
+
|
155 |
+
Contract pollination has overtaken the role of honey production for beekeepers in many countries. After the introduction of Varroa mites, feral honey bees declined dramatically in the US, though their numbers have since recovered.[105][106] The number of colonies kept by beekeepers declined slightly, through urbanization, systematic pesticide use, tracheal and Varroa mites, and the closure of beekeeping businesses. In 2006 and 2007 the rate of attrition increased, and was described as colony collapse disorder.[107] In 2010 invertebrate iridescent virus and the fungus Nosema ceranae were shown to be in every killed colony, and deadly in combination.[108][109][110][111] Winter losses increased to about 1/3.[112][113] Varroa mites were thought to be responsible for about half the losses.[114]
|
156 |
+
|
157 |
+
Apart from colony collapse disorder, losses outside the US have been attributed to causes including pesticide seed dressings, using neonicotinoids such as Clothianidin, Imidacloprid and Thiamethoxam.[115][116] From 2013 the European Union restricted some pesticides to stop bee populations from declining further.[117] In 2014 the Intergovernmental Panel on Climate Change report warned that bees faced increased risk of extinction because of global warming.[118] In 2018 the European Union decided to ban field use of all three major neonicotinoids; they remain permitted in veterinary, greenhouse, and vehicle transport usage.[119]
|
158 |
+
|
159 |
+
Farmers have focused on alternative solutions to mitigate these problems. By raising native plants, they provide food for native bee pollinators like Lasioglossum vierecki[120] and L. leucozonium,[121] leading to less reliance on honey bee populations.
|
160 |
+
|
161 |
+
Honey is a natural product produced by bees and stored for their own use, but its sweetness has always appealed to humans. Before domestication of bees was even attempted, humans were raiding their nests for their honey. Smoke was often used to subdue the bees and such activities are depicted in rock paintings in Spain dated to 15,000 BC.[91]
|
162 |
+
|
163 |
+
Honey bees are used commercially to produce honey.[122] They also produce some substances used as dietary supplements with possible health benefits, pollen,[123] propolis,[124] and royal jelly,[125] though all of these can also cause allergic reactions.
|
164 |
+
|
165 |
+
Bees and their brood are also eaten as food. Indigenous people in many countries eat insects, including the larvae and pupae of bees, mostly of stingless species. They also gather larvae, pupae and surrounding cells, known as bee brood, for consumption.[126] In the Indonesian dish botok tawon from Central and East Java, bee larvae are eaten as a companion to rice, after being mixed with shredded coconut, wrapped in banana leaves, and steamed.[127][128]
|
166 |
+
|
167 |
+
Bee brood (pupae and larvae), although low in calcium, has been found to be high in protein and carbohydrate, and a useful source of phosphorus, magnesium, potassium, and the trace minerals iron, zinc, copper, and selenium. In addition, while bee brood was high in fat, it contained no fat-soluble vitamins (such as A, D, and E) but it was a good source of most of the water-soluble B-vitamins, including choline, as well as vitamin C. The fat was composed mostly of saturated and monounsaturated fatty acids, with 2.0% being polyunsaturated fatty acids.[129][130]
|
168 |
+
|
169 |
+
Apitherapy is a branch of alternative medicine that uses honey bee products, including raw honey, royal jelly, pollen, propolis, beeswax and apitoxin (Bee venom).[131] The claim that apitherapy treats cancer, which some proponents of apitherapy make, remains unsupported by evidence-based medicine.[132][133]
|
170 |
+
|
171 |
+
The painful stings of bees are mostly associated with the poison gland and the Dufour's gland, which are abdominal exocrine glands containing various chemicals. In Lasioglossum leucozonium, the Dufour's gland mostly contains octadecanolide as well as some eicosanolide. There is also evidence of n-tricosane, n-heptacosane,[134] and 22-docosanolide.[135] However, the secretions of these glands could also be used for nest construction.[134]
|
172 |
+
|
en/2730.html.txt
ADDED
@@ -0,0 +1,33 @@
1 |
+
|
2 |
+
|
3 |
+
In macroeconomics, an industry is a sector that produces goods or related services within an economy.[1] The major source of revenue of a group or company is an indicator of what industry it should be classified in.[2] When a large corporate group has multiple sources of revenue generation, it is considered to be working in different industries. The manufacturing industry became a key sector of production and labour in European and North American countries during the Industrial Revolution, upsetting previous mercantile and feudal economies. This came through many successive rapid advances in technology, such as the development of steam power and the production of steel and coal.
|
4 |
+
|
5 |
+
Following the Industrial Revolution, possibly a third of the economic output came from manufacturing industries. Many developed countries and many developing/semi-developed countries (China, India etc.) depend significantly on manufacturing industry.
|
6 |
+
|
7 |
+
Slavery, the practice of utilizing forced labor to produce goods[3][failed verification] and services, has occurred since antiquity throughout the world as a means of low-cost production. It typically produces goods for which profit depends on economies of scale, especially those for which labor was simple and easy to supervise.[4] International law has declared slavery illegal.[5]
|
8 |
+
|
9 |
+
Guilds, associations of artisans and merchants, oversee the production and distribution of a particular good. Guilds have their roots in the Roman Empire as collegia (singular: collegium). Membership in these early guilds was voluntary. The Roman collegia did not survive the fall of Rome.[6] In the early Middle Ages, guilds once again began to emerge in Europe, reaching a degree of maturity by the beginning of the 14th century.[7][need quotation to verify] While few guilds remain today[update], some modern labor structures resemble those of traditional guilds.[8] Other guilds, such as SAG-AFTRA, act as trade unions rather than as classical guilds. Professor Sheilagh Ogilvie claims that guilds negatively affected quality, skills, and innovation in areas where they were present.[9]
|
10 |
+
|
11 |
+
The industrial revolution (from the mid-18th century to the mid-19th century) saw the development and popularization of mechanized means of production as a replacement for hand production.[10] The industrial revolution played a role in the abolition of slavery in Europe and in North America.[11]
|
12 |
+
|
13 |
+
In a process dubbed tertiarization, the economic preponderance of primary and secondary industries has declined in recent centuries relative to the rising importance of tertiary industry,[12][13]
|
14 |
+
resulting in the post-industrial economy. Specialization in industry[14]
|
15 |
+
and in the classification of industry has also occurred. Thus (for example) a record producer might claim to speak on behalf of the Japanese rock industry, the recording industry, the music industry or the entertainment industry - and any formulation will sound grandiose and weighty.
|
16 |
+
|
17 |
+
The Industrial Revolution led to the development of factories for large-scale production with consequent changes in society.[15] Originally the factories were steam-powered, but later transitioned to electricity once an electrical grid was developed. The mechanized assembly line was introduced to assemble parts in a repeatable fashion, with individual workers performing specific steps during the process. This led to significant increases in efficiency, lowering the cost of the end process. Later automation was increasingly used to replace human operators. This process has accelerated with the development of the computer and the robot.
|
18 |
+
|
19 |
+
Historically certain manufacturing industries have gone into a decline due to various economic factors, including the development of replacement technology or the loss of competitive advantage. An example of the former is the decline in carriage manufacturing when the automobile was mass-produced.
|
20 |
+
|
21 |
+
A recent trend has been the migration of prosperous, industrialized nations towards a post-industrial society. This is manifested by an increase in the service sector at the expense of manufacturing, and the development of an information-based economy, the so-called informational revolution. In a post-industrial society, manufacturers relocate to more profitable locations through a process of off-shoring.
|
22 |
+
|
23 |
+
Measurements of manufacturing industries outputs and economic effect are not historically stable. Traditionally, success has been measured in the number of jobs created. The reduced number of employees in the manufacturing sector has been assumed to result from a decline in the competitiveness of the sector, or the introduction of the lean manufacturing process.
|
24 |
+
|
25 |
+
Related to this change is the upgrading of the quality of the product being manufactured. While it is possible to produce a low-technology product with low-skill labour, the ability to manufacture high-technology products well is dependent on a highly skilled staff.
|
26 |
+
|
27 |
+
An industrial society is a society driven by the use of technology to enable mass production, supporting a large population with a high capacity for division of labour. Today, industry is an important part of most societies and nations. A government must have some kind of industrial policy, regulating industrial placement, industrial pollution, financing and industrial labour.
|
28 |
+
|
29 |
+
In an industrial society, industry employs a major part of the population. This occurs typically in the manufacturing sector. A labour union is an organization of workers who have banded together to achieve common goals in key areas such as wages, hours, and other working conditions. The trade union, through its leadership, bargains with the employer on behalf of union members (rank and file members) and negotiates labour contracts with employers. This movement first rose among industrial workers.
|
30 |
+
|
31 |
+
The Industrial Revolution changed warfare, with mass-produced weaponry and supplies, machine-powered transportation, mobilization, the total war concept and weapons of mass destruction. Early instances of industrial warfare were the Crimean War and the American Civil War, but its full potential showed during the world wars. See also military-industrial complex, arms industries, military industry and modern warfare.
|
32 |
+
|
33 |
+
The twenty largest countries by industrial output (in nominal terms) at peak level as of 2018, according to the IMF and CIA World Factbook
|
en/2731.html.txt
ADDED
@@ -0,0 +1,33 @@
1 |
+
|
2 |
+
|
3 |
+
In macroeconomics, an industry is a sector that produces goods or related services within an economy.[1] The major source of revenue of a group or company is an indicator of what industry it should be classified in.[2] When a large corporate group has multiple sources of revenue generation, it is considered to be working in different industries. The manufacturing industry became a key sector of production and labour in European and North American countries during the Industrial Revolution, upsetting previous mercantile and feudal economies. This came through many successive rapid advances in technology, such as the development of steam power and the production of steel and coal.
|
4 |
+
|
5 |
+
Following the Industrial Revolution, possibly a third of the economic output came from manufacturing industries. Many developed countries and many developing/semi-developed countries (China, India etc.) depend significantly on manufacturing industry.
|
6 |
+
|
7 |
+
Slavery, the practice of utilizing forced labor to produce goods[3][failed verification] and services, has occurred since antiquity throughout the world as a means of low-cost production. It typically produces goods for which profit depends on economies of scale, especially those for which labor was simple and easy to supervise.[4] International law has declared slavery illegal.[5]
|
8 |
+
|
9 |
+
Guilds, associations of artisans and merchants, oversee the production and distribution of a particular good. Guilds have their roots in the Roman Empire as collegia (singular: collegium). Membership in these early guilds was voluntary. The Roman collegia did not survive the fall of Rome.[6] In the early Middle Ages, guilds once again began to emerge in Europe, reaching a degree of maturity by the beginning of the 14th century.[7][need quotation to verify] While few guilds remain today[update], some modern labor structures resemble those of traditional guilds.[8] Other guilds, such as SAG-AFTRA, act as trade unions rather than as classical guilds. Professor Sheilagh Ogilvie claims that guilds negatively affected quality, skills, and innovation in areas where they were present.[9]
|
10 |
+
|
11 |
+
The industrial revolution (from the mid-18th century to the mid-19th century) saw the development and popularization of mechanized means of production as a replacement for hand production.[10] The industrial revolution played a role in the abolition of slavery in Europe and in North America.[11]
|
12 |
+
|
13 |
+
In a process dubbed tertiarization, the economic preponderance of primary and secondary industries has declined in recent centuries relative to the rising importance of tertiary industry,[12][13]
|
14 |
+
resulting in the post-industrial economy. Specialization in industry[14]
|
15 |
+
and in the classification of industry has also occurred. Thus (for example) a record producer might claim to speak on behalf of the Japanese rock industry, the recording industry, the music industry or the entertainment industry - and any formulation will sound grandiose and weighty.
|
16 |
+
|
17 |
+
The Industrial Revolution led to the development of factories for large-scale production with consequent changes in society.[15] Originally the factories were steam-powered, but later transitioned to electricity once an electrical grid was developed. The mechanized assembly line was introduced to assemble parts in a repeatable fashion, with individual workers performing specific steps during the process. This led to significant increases in efficiency, lowering the cost of the end process. Later automation was increasingly used to replace human operators. This process has accelerated with the development of the computer and the robot.
|
18 |
+
|
19 |
+
Historically certain manufacturing industries have gone into a decline due to various economic factors, including the development of replacement technology or the loss of competitive advantage. An example of the former is the decline in carriage manufacturing when the automobile was mass-produced.
|
20 |
+
|
21 |
+
A recent trend has been the migration of prosperous, industrialized nations towards a post-industrial society. This is manifested by an increase in the service sector at the expense of manufacturing, and the development of an information-based economy, the so-called informational revolution. In a post-industrial society, manufacturers relocate to more profitable locations through a process of off-shoring.
|
22 |
+
|
23 |
+
Measurements of manufacturing industries' outputs and economic effects are not historically stable. Traditionally, success has been measured in the number of jobs created. The reduced number of employees in the manufacturing sector has been assumed to result from a decline in the competitiveness of the sector, or from the introduction of the lean manufacturing process.
Related to this change is the upgrading of the quality of the product being manufactured. While it is possible to produce a low-technology product with low-skill labour, the ability to manufacture high-technology products well is dependent on a highly skilled staff.
An industrial society is a society driven by the use of technology to enable mass production, supporting a large population with a high capacity for division of labour. Today, industry is an important part of most societies and nations. A government must have some kind of industrial policy, regulating industrial placement, industrial pollution, financing and industrial labour.
In an industrial society, industry employs a major part of the population. This occurs typically in the manufacturing sector. A labour union is an organization of workers who have banded together to achieve common goals in key areas such as wages, hours, and other working conditions. The trade union, through its leadership, bargains with the employer on behalf of union members (rank and file members) and negotiates labour contracts with employers. This movement first rose among industrial workers.
The Industrial Revolution changed warfare, with mass-produced weaponry and supplies, machine-powered transportation, mobilization, the total war concept and weapons of mass destruction. Early instances of industrial warfare were the Crimean War and the American Civil War, but its full potential showed during the world wars. See also military-industrial complex, arms industries, military industry and modern warfare.
The twenty largest countries by industrial output (in nominal terms) at peak level as of 2018, according to the IMF and CIA World Factbook
en/2732.html.txt
ADDED
@@ -0,0 +1,33 @@
In macroeconomics, an industry is a sector that produces goods or related services within an economy.[1] The major source of revenue of a group or company is an indicator of what industry it should be classified in.[2] When a large corporate group has multiple sources of revenue generation, it is considered to be working in different industries. The manufacturing industry became a key sector of production and labour in European and North American countries during the Industrial Revolution, upsetting previous mercantile and feudal economies. This came through many successive rapid advances in technology, such as the development of steam power and the production of steel and coal.
Following the Industrial Revolution, possibly a third of the economic output came from manufacturing industries. Many developed countries and many developing/semi-developed countries (China, India etc.) depend significantly on manufacturing industry.
en/2733.html.txt
ADDED
@@ -0,0 +1,33 @@
en/2734.html.txt
ADDED
@@ -0,0 +1,97 @@
Information can be thought of as the resolution of uncertainty; it is that which answers the question of "what an entity is" and thus defines both its essence and the nature of its characteristics. The concept of information has different meanings in different contexts.[1] Thus the concept becomes related to notions of constraint, communication, control, data, form, education, knowledge, meaning, understanding, mental stimuli, pattern, perception, representation, and entropy.
Information is associated with data, as data represents values attributed to parameters, and information is data in context and with meaning attached. Information also relates to knowledge, as knowledge signifies understanding of an abstract or concrete concept.[2][better source needed]
In terms of communication, information is expressed either as the content of a message or through direct or indirect observation. That which is perceived can be construed as a message in its own right, and in that sense, information is always conveyed as the content of a message.
Information can be encoded into various forms for transmission and interpretation (for example, information may be encoded into a sequence of signs, or transmitted via a signal). It can also be encrypted for safe storage and communication.
The uncertainty of an event is measured by its probability of occurrence: uncertainty is inversely proportional to that probability. The more uncertain an event, the more information is required to resolve that uncertainty. The bit is a typical unit of information, but other units such as the nat may be used. For example, the information encoded in one "fair" coin flip is log2(2/1) = 1 bit, and in two fair coin flips is log2(4/1) = 2 bits.
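The coin-flip figures can be checked directly with a few lines of code. The following is a minimal sketch in Python; the helper name self_information is illustrative rather than standard, and the snippet simply evaluates the quantity log2(1/p) for an event of probability p.

```python
from math import log2

def self_information(probability: float) -> float:
    """Information content of an event, in bits: log2(1 / p)."""
    return log2(1 / probability)

print(self_information(1 / 2))          # one fair coin flip  -> 1.0 bit
print(self_information(1 / 4))          # two fair coin flips -> 2.0 bits
print(round(self_information(0.9), 3))  # a likely event carries little information (~0.152 bits)
```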
The English word "information" apparently derives from the Latin stem (information-) of the nominative (informatio): this noun derives from the verb informare (to inform) in the sense of "to give form to the mind", "to discipline", "instruct", "teach". Inform itself comes (via French informer) from the Latin verb informare, which means to give form, or to form an idea of. Furthermore, Latin itself already contained the word informatio meaning concept or idea, but the extent to which this may have influenced the development of the word information in English is not clear.
The ancient Greek word for form was μορφή (morphe; cf. morph) and also εἶδος (eidos) "kind, idea, shape, set"; the latter word was famously used in a technical philosophical sense by Plato (and later Aristotle) to denote the ideal identity or essence of something (see Theory of Forms). 'Eidos' can also be associated with thought, proposition, or even concept.
The ancient Greek word for information is πληροφορία (plērophoria), from πλήρης (plērēs) "fully" and φέρω (phorein), a frequentative of pherein, "to carry through". It literally means "bears fully" or "conveys fully". In modern Greek the word Πληροφορία is still in daily use and has the same meaning as the word information in English. In addition to its primary meaning, the word Πληροφορία as a symbol has deep roots in Aristotle's semiotic triangle. In this regard it can be interpreted to communicate information to the one decoding that specific type of sign. This is something that occurs frequently with the etymology of many words in ancient and modern Greek, where there is a very strong denotative relationship between the signifier, e.g. the word symbol that conveys a specific encoded interpretation, and the signified, e.g. a concept whose meaning the interpreter attempts to decode.
In English, "information" is an uncountable mass noun.
In information theory, information is taken as an ordered sequence of symbols from an alphabet, say an input alphabet χ, and an output alphabet ϒ. Information processing consists of an input-output function that maps any input sequence from χ into an output sequence from ϒ. The mapping may be probabilistic or deterministic. It may have memory or be memoryless.[3]
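As a toy illustration of this input-output view (a sketch under assumed alphabets, not an example drawn from the literature), the mapping below is deterministic and memoryless: each symbol of an input alphabet χ = {0, 1} is mapped to a symbol of an output alphabet ϒ = {a, b} without reference to earlier symbols.

```python
# A deterministic, memoryless input-output function over two small alphabets.
input_alphabet = {"0", "1"}          # χ
output_alphabet = {"a", "b"}         # ϒ
symbol_map = {"0": "a", "1": "b"}    # the input-output function

def process(sequence: str) -> str:
    """Map an input sequence over χ to an output sequence over ϒ, symbol by symbol."""
    assert set(sequence) <= input_alphabet
    return "".join(symbol_map[symbol] for symbol in sequence)

print(process("0110"))  # prints "abba"
```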
Often information can be viewed as a type of input to an organism or system. Inputs are of two kinds; some inputs are important to the function of the organism (for example, food) or system (energy) by themselves. In his book Sensory Ecology[4] biophysicist David B. Dusenbery called these causal inputs. Other inputs (information) are important only because they are associated with causal inputs and can be used to predict the occurrence of a causal input at a later time (and perhaps another place). Some information is important because of association with other information but eventually there must be a connection to a causal input.
In practice, information is usually carried by weak stimuli that must be detected by specialized sensory systems and amplified by energy inputs before they can be functional to the organism or system. For example, light is mainly (but not only, e.g. plants can grow in the direction of the light source) a causal input to plants, but for animals it only provides information. The colored light reflected from a flower is too weak for photosynthesis, but the visual system of the bee detects it and the bee's nervous system uses the information to guide the bee to the flower, where the bee often finds nectar or pollen, which are causal inputs serving a nutritional function.
The cognitive scientist and applied mathematician Ronaldo Vigo argues that information is a concept that requires at least two related entities to make quantitative sense. These are any dimensionally defined category of objects S and any of its subsets R. R, in essence, is a representation of S, or, in other words, conveys representational (and hence, conceptual) information about S. Vigo then defines the amount of information that R conveys about S as the rate of change in the complexity of S whenever the objects in R are removed from S. Under "Vigo information", pattern, invariance, complexity, representation, and information—five fundamental constructs of universal science—are unified under a novel mathematical framework.[5][6][7] Among other things, the framework aims to overcome the limitations of Shannon-Weaver information when attempting to characterize and measure subjective information.
Information is any type of pattern that influences the formation or transformation of other patterns.[8][9] In this sense, there is no need for a conscious mind to perceive, much less appreciate, the pattern.[citation needed] Consider, for example, DNA. The sequence of nucleotides is a pattern that influences the formation and development of an organism without any need for a conscious mind. One might argue though that for a human to consciously define a pattern, for example a nucleotide, naturally involves conscious information processing.
Systems theory at times seems to refer to information in this sense, assuming information does not necessarily involve any conscious mind, and patterns circulating (due to feedback) in the system can be called information. In other words, it can be said that information in this sense is something potentially perceived as representation, though not created or presented for that purpose. For example, Gregory Bateson defines "information" as a "difference that makes a difference".[10]
If, however, the premise of "influence" implies that information has been perceived by a conscious mind and also interpreted by it, the specific context associated with this interpretation may cause the transformation of the information into knowledge. Complex definitions of both "information" and "knowledge" make such semantic and logical analysis difficult, but the condition of "transformation" is an important point in the study of information as it relates to knowledge, especially in the business discipline of knowledge management. In this practice, tools and processes are used to assist a knowledge worker in performing research and making decisions, including steps such as:
Stewart (2001) argues that transformation of information into knowledge is critical, lying at the core of value creation and competitive advantage for the modern enterprise.
The Danish Dictionary of Information Terms[11] argues that information only provides an answer to a posed question. Whether the answer provides knowledge depends on the informed person. So a generalized definition of the concept should be: "information" = "an answer to a specific question".
When Marshall McLuhan speaks of media and their effects on human cultures, he refers to the structure of artifacts that in turn shape our behaviors and mindsets. Also, pheromones are often said to be "information" in this sense.
Information has a well-defined meaning in physics. In 2003 J. D. Bekenstein claimed that a growing trend in physics was to define the physical world as being made up of information itself (and thus information is defined in this way) (see Digital physics). Examples of this include the phenomenon of quantum entanglement, where particles can interact without reference to their separation or the speed of light. Material information itself cannot travel faster than light even if that information is transmitted indirectly. This could lead to all attempts at physically observing a particle with an "entangled" relationship to another being slowed, even though the particles are not connected in any other way other than by the information they carry.
The mathematical universe hypothesis suggests a new paradigm, in which virtually everything, from particles and fields, through biological entities and consciousness, to the multiverse itself, could be described by mathematical patterns of information. By the same token, the cosmic void can be conceived of as the absence of material information in space (setting aside the virtual particles that pop in and out of existence due to quantum fluctuations, as well as the gravitational field and the dark energy). Nothingness can be understood then as that within which no matter, energy, space, time, or any other type of information could exist, which would be possible if symmetry and structure break within the manifold of the multiverse (i.e. the manifold would have tears or holes). Physical information exists beyond event horizons, since astronomical observations show that, due to the expansion of the universe, distant objects continue to pass the cosmological horizon, as seen from a present time, local observer point of view.
Another link is demonstrated by the Maxwell's demon thought experiment. In this experiment, a direct relationship between information and another physical property, entropy, is demonstrated. A consequence is that it is impossible to destroy information without increasing the entropy of a system; in practical terms this often means generating heat. Another more philosophical outcome is that information could be thought of as interchangeable with energy. Toyabe et al. showed experimentally that information can be converted into work.[12] Thus, in the study of logic gates, the theoretical lower bound of thermal energy released by an AND gate is higher than for the NOT gate (because information is destroyed in an AND gate and simply converted in a NOT gate). Physical information is of particular importance in the theory of quantum computers.
In thermodynamics, information is any kind of event that affects the state of a dynamic system that can interpret the information.
The information cycle (addressed as a whole or in its distinct components) is of great concern to information technology, information systems, as well as information science. These fields deal with those processes and techniques pertaining to information capture (through sensors) and generation (through computation, formulation or composition), processing (including encoding, encryption, compression, packaging), transmission (including all telecommunication methods), presentation (including visualization / display methods), storage (such as magnetic or optical, including holographic methods), etc.
Information visualization (shortened as InfoVis) depends on the computation and digital representation of data, and assists users in pattern recognition and anomaly detection.
[Figure captions: Partial map of the Internet, with nodes representing IP addresses; Galactic (including dark) matter distribution in a cubic section of the Universe; Information embedded in an abstract mathematical object with symmetry breaking nucleus; Visual representation of a strange attractor, with converted data of its fractal structure.]
Information security (shortened as InfoSec) is the ongoing process of exercising due diligence to protect information, and information systems, from unauthorized access, use, disclosure, destruction, modification, disruption or distribution, through algorithms and procedures focused on monitoring and detection, as well as incident response and repair.
Information analysis is the process of inspecting, transforming, and modelling information, by converting raw data into actionable knowledge, in support of the decision-making process.
Information quality (shortened as InfoQ) is the potential of a dataset to achieve a specific (scientific or practical) goal using a given empirical analysis method.
Information communication represents the convergence of informatics, telecommunication and audio-visual media & content.
It is estimated that the world's technological capacity to store information grew from 2.6 (optimally compressed) exabytes in 1986 – which is the informational equivalent to less than one 730-MB CD-ROM per person (539 MB per person) – to 295 (optimally compressed) exabytes in 2007.[13] This is the informational equivalent of almost 61 CD-ROM per person in 2007.[14]
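The per-person figures follow from the totals by simple division. The sketch below reproduces them approximately; the world population values used (about 4.9 billion in 1986 and 6.6 billion in 2007) are assumptions adopted for illustration and are not taken from the cited studies.

```python
# Convert optimally compressed storage totals to per-person CD-ROM equivalents.
EXABYTE_IN_MB = 10**12   # 1 exabyte = 10^18 bytes = 10^12 megabytes (decimal units)
CD_ROM_MB = 730

# (year, total in exabytes, assumed world population)
for year, exabytes, population in [(1986, 2.6, 4.9e9), (2007, 295, 6.6e9)]:
    mb_per_person = exabytes * EXABYTE_IN_MB / population
    cds_per_person = mb_per_person / CD_ROM_MB
    print(f"{year}: ~{mb_per_person:,.0f} MB per person, ~{cds_per_person:.1f} CD-ROMs per person")
# 1986: ~531 MB per person, ~0.7 CD-ROMs per person
# 2007: ~44,697 MB per person, ~61.2 CD-ROMs per person
```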
The world's combined technological capacity to receive information through one-way broadcast networks was the informational equivalent of 174 newspapers per person per day in 2007.[13]
The world's combined effective capacity to exchange information through two-way telecommunication networks was the informational equivalent of 6 newspapers per person per day in 2007.[14]
As of 2007, an estimated 90% of all new information is digital, mostly stored on hard drives.[15]
Records are specialized forms of information. Essentially, records are information produced consciously or as by-products of business activities or transactions and retained because of their value. Primarily, their value is as evidence of the activities of the organization but they may also be retained for their informational value. Sound records management ensures that the integrity of records is preserved for as long as they are required.
The international standard on records management, ISO 15489, defines records as "information created, received, and maintained as evidence and information by an organization or person, in pursuance of legal obligations or in the transaction of business".[16] The International Committee on Archives (ICA) Committee on electronic records defined a record as, "recorded information produced or received in the initiation, conduct or completion of an institutional or individual activity and that comprises content, context and structure sufficient to provide evidence of the activity".[17]
Records may be maintained to retain corporate memory of the organization or to meet legal, fiscal or accountability requirements imposed on the organization. Willis expressed the view that sound management of business records and information delivered "...six key requirements for good corporate governance...transparency; accountability; due process; compliance; meeting statutory and common law requirements; and security of personal and corporate information."[18]
Michael Buckland has classified "information" in terms of its uses: "information as process", "information as knowledge", and "information as thing".[19]
Beynon-Davies[20][21] explains the multi-faceted concept of information in terms of signs and signal-sign systems. Signs themselves can be considered in terms of four inter-dependent levels, layers or branches of semiotics: pragmatics, semantics, syntax, and empirics. These four layers serve to connect the social world on the one hand with the physical or technical world on the other.
Pragmatics is concerned with the purpose of communication. Pragmatics links the issue of signs with the context within which signs are used. The focus of pragmatics is on the intentions of living agents underlying communicative behaviour. In other words, pragmatics links language to action.
Semantics is concerned with the meaning of a message conveyed in a communicative act. Semantics considers the content of communication. Semantics is the study of the meaning of signs - the association between signs and behaviour. Semantics can be considered as the study of the link between symbols and their referents or concepts – particularly the way that signs relate to human behavior.
Syntax is concerned with the formalism used to represent a message. Syntax as an area studies the form of communication in terms of the logic and grammar of sign systems. Syntax is devoted to the study of the form rather than the content of signs and sign-systems.
Nielsen (2008) discusses the relationship between semiotics and information in relation to dictionaries. He introduces the concept of lexicographic information costs and refers to the effort a user of a dictionary must make to first find, and then understand data so that they can generate information.
Communication normally exists within the context of some social situation. The social situation sets the context for the intentions conveyed (pragmatics) and the form of communication. In a communicative situation intentions are expressed through messages that comprise collections of inter-related signs taken from a language mutually understood by the agents involved in the communication. Mutual understanding implies that agents involved understand the chosen language in terms of its agreed syntax (syntactics) and semantics. The sender codes the message in the language and sends the message as signals along some communication channel (empirics). The chosen communication channel has inherent properties that determine outcomes such as the speed at which communication can take place, and over what distance.
en/2735.html.txt
ADDED
@@ -0,0 +1,98 @@
Computer science is the study of computation and information.[1][2] Computer science deals with the theory of computation, algorithms, computational problems and the design of computer systems' hardware, software and applications.[3][4] Computer science addresses both human-made and natural information processes, such as communication, control, perception, learning and intelligence, especially in human-made computing systems and machines.[5][6][7] According to Peter Denning, the fundamental question underlying computer science is, "What can be automated?"[8][5]
Its fields can be divided into theoretical and practical disciplines. Computational complexity theory is highly abstract, while computer graphics and computational geometry emphasize real-world applications. Algorithmics has been called the heart of computer science.[9] Programming language theory considers approaches to the description of computational processes, while software engineering involves the use of programming languages and complex systems. Computer architecture and computer engineering deal with the construction of computer components and computer-controlled equipment.[5][10] Human–computer interaction considers the challenges in making computers useful, usable, and accessible. Artificial intelligence aims to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, motion planning, learning, and communication found in humans and animals.
The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment.
Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623.[13] In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner.[14] Leibniz may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry[note 1] when he invented his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine.[15] He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer".[16] "A crucial step was the adoption of a punched card system derived from the Jacquard loom"[16] making it infinitely programmable.[note 2] In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first published algorithm ever specifically tailored for implementation on a computer.[17] Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. Following Babbage, although unaware of his earlier work, Percy Ludgate in 1909 published [18] the 2nd of the only two designs for mechanical analytical engines in history. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business[19] to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true".[20]
During the 1940s, with the development of new and more powerful computing machines such as the Atanasoff–Berry computer and ENIAC, the term computer came to refer to the machines rather than their human predecessors.[21] As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City. The renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world.[22] Ultimately, the close relationship between IBM and the university was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946.[23] Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s.[5][24] The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science department in the United States was formed at Purdue University in 1962.[25] Since practical computers became available, many applications of computing have become distinct areas of study in their own rights.
Although many initially believed it was impossible that computers themselves could actually be a scientific field of study, in the late fifties it gradually became accepted among the greater academic population.[26][27] It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM (short for International Business Machines) released the IBM 704[28] and later the IBM 709[29] computers, which were widely used during the exploration period of such devices. "Still, working with the IBM [computer] was frustrating […] if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again".[26] During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace.[27]
The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947.[30][31] In 1953, the University of Manchester built the first transistorized computer, called the Transistor Computer.[32] However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications.[33] The metal–oxide–silicon field-effect transistor (MOSFET, or MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959.[34][35] It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses.[33] The MOSFET made it possible to build high-density integrated circuit chips,[36][37] leading to what is known as the computer revolution[38] or microcomputer revolution.[39]
Time has seen significant improvements in the usability and effectiveness of computing technology.[40] Modern society has seen a significant shift in the demographics which make use of computer technology; usage has shifted from being mostly exclusive to experts and professionals, to a near-ubiquitous user base. Initially, computers were quite costly, and some degree of human assistance was needed for efficient use, in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage.
Although first proposed in 1956,[27] the term "computer science" appears in a 1959 article in Communications of the ACM,[41] in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921,[42] justifying the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline.[41]
His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such departments, starting with Purdue in 1962.[43] Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed.[44] Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy,[45] to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a multi-disciplinary field of data analysis, including statistics and databases.
In the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM—turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist.[46] Three months later in the same journal, comptologist was suggested, followed next year by hypologist.[47] The term computics has also been suggested.[48] In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics of the University of Edinburgh).[49]
"In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain."[50]
A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes."[note 3] The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been much cross-fertilization of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as philosophy, cognitive science, linguistics, mathematics, physics, biology, statistics, and logic.
Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science.[5] Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel, Alan Turing, John von Neumann, Rózsa Péter and Alonzo Church and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.[27]
The relationship between Computer Science and Software Engineering is a contentious issue, which is further muddied by disputes over what the term "Software Engineering" means, and how computer science is defined.[51] David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.[52]
The academic, political, and funding aspects of computer science tend to depend on whether a department formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research.
A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics.[53] Peter Denning's working group argued that they are theory, abstraction (modeling), and design.[54] Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence).[55]
Computer science focuses on methods involved in design, specification, programming, verification, implementation and testing of human-made computing systems.[56]
Computer science is no more about computers than astronomy is about telescopes.
As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software.[57][58]
CSAB, formerly called Computing Sciences Accreditation Board—which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE CS)[59]—identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.[57]
Theoretical Computer Science is mathematical and abstract in spirit, but it derives its motivation from the practical and everyday computation. Its aim is to understand the nature of computation and, as a consequence of this understanding, provide more efficient methodologies. All studies related to mathematical, logic and formal concepts and methods could be considered as theoretical computer science, provided that the motivation is clearly drawn from the field of computing.
According to Peter Denning, the fundamental question underlying computer science is, "What can be automated?"[5] Theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems.
The famous P = NP? problem, one of the Millennium Prize Problems,[60] is an open problem in the theory of computation.
Information theory, closely related to probability and statistics, is related to the quantification of information. This was developed by Claude Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data.[61] Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods.[62]
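As a small illustration of this kind of quantification (a Python sketch, not drawn from the article itself), the function below computes the Shannon entropy of a discrete probability distribution, that is, the average number of bits per symbol, which bounds how far a source can be losslessly compressed.

```python
from math import log2

def shannon_entropy(probabilities):
    """Average information per symbol, in bits: H = -sum(p * log2(p))."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))                # 1.0 bit  (fair coin)
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits (two fair coins)
print(round(shannon_entropy([0.9, 0.1]), 3))      # ~0.469 bits (a biased source compresses well)
```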
Data structures and algorithms are the studies of commonly used computational methods and their computational efficiency.
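For instance (an illustrative sketch, not part of the original text), the choice of data structure determines the cost of an operation as common as membership testing: a Python list is scanned element by element, while a hash-based set answers in roughly constant time.

```python
import timeit

items_list = list(range(100_000))
items_set = set(items_list)
missing = -1   # worst case: the value is absent, so the whole list must be scanned

list_time = timeit.timeit(lambda: missing in items_list, number=100)
set_time = timeit.timeit(lambda: missing in items_set, number=100)
print(f"list membership: {list_time:.4f} s, set membership: {set_time:.6f} s")
```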
Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals.
Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems.[63] The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.
Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory.[64] The field often involves disciplines of computer engineering and electrical engineering, selecting and interconnecting hardware components to create computers that meet functional, performance, and cost goals.
Computer performance analysis is the study of work flowing through computers with the general goals of improving throughput, controlling response time, using resources efficiently, eliminating bottlenecks, and predicting performance under anticipated peak loads.[65]
Benchmarks are used to compare the performance of systems carrying different chips and/or system architectures.[66]
Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other.[67] A number of mathematical models have been developed for general concurrent computation including Petri nets, process calculi and the Parallel Random Access Machine model.[68] When multiple computers are connected in a network while using concurrency, this is known as a distributed system. Computers within that distributed system have their own private memory, and information can be exchanged to achieve common goals.[69]
This branch of computer science aims to manage networks between computers worldwide.
Computer security is a branch of computer technology with an objective of protecting information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users. Cryptography is the practice and study of hiding (encryption) and therefore deciphering (decryption) information. Modern cryptography is largely related to computer science, for many encryption and decryption algorithms are based on their computational complexity.
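A toy sketch (illustrative only, and deliberately insecure) shows the symmetry between encryption and decryption; practical cryptography instead relies on algorithms whose security rests on computational hardness, unlike this repeating-key XOR example.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key (not secure in practice)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"attack at dawn"
key = b"secret"
ciphertext = xor_cipher(message, key)     # encryption
plaintext = xor_cipher(ciphertext, key)   # decryption: XOR with the same key inverts it
assert plaintext == message
```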
A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages.
Computer graphics is the study of digital visual contents and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games.
Human–computer interaction research develops theories, principles, and guidelines for user interface designers, so that they can create satisfactory user experiences with desktop, laptop, and mobile devices.
Scientific computing (or computational science) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. A major usage of scientific computing is simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, as well as societies and social situations (notably war games) along with their habitats, among many others. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design are SPICE,[70] as well as software for physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits.[citation needed]
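A minimal sketch of the simulation idea (component values and step sizes are invented; real circuit simulators such as SPICE solve far larger systems of equations): the voltage across a discharging RC circuit is advanced in time with Euler's method and compared against the analytic solution V0 * exp(-t / (R*C)).

    import math

    R, C, V0 = 1_000.0, 1e-6, 5.0     # ohms, farads, volts (illustrative values)
    dt, steps = 1e-5, 500             # time step (s) and number of steps

    v = V0
    for n in range(1, steps + 1):
        v += dt * (-v / (R * C))      # model: dV/dt = -V / (R*C)
        if n % 100 == 0:
            exact = V0 * math.exp(-n * dt / (R * C))
            print(f"t={n * dt:.4f} s  euler={v:.4f} V  exact={exact:.4f} V")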
Artificial intelligence (AI) aims to or is required to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning, and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding. The starting point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered, although the Turing test is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data.
Software engineering is the study of designing, implementing, and modifying the software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software—it doesn't just deal with the creation or manufacture of new software, but its internal arrangement and maintenance.
The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science:[71]
Programming languages can be used to accomplish different tasks in different ways. Common programming paradigms include:
Many languages offer support for multiple paradigms, making the distinction more a matter of style than of technical capabilities.[77]
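An illustrative sketch of that point (the task and the class below are invented): the same computation, summing the even numbers in a list, written in an imperative, a functional, and an object-oriented style, all in one language.

    data = [3, 4, 7, 10, 13, 18]

    # Imperative style: explicit state mutated by a loop.
    total = 0
    for x in data:
        if x % 2 == 0:
            total += x

    # Functional style: expressions and higher-order functions, no mutation.
    total_fn = sum(filter(lambda x: x % 2 == 0, data))

    # Object-oriented style: behaviour bundled with the data it operates on.
    class NumberList:
        def __init__(self, values):
            self.values = list(values)

        def even_sum(self):
            return sum(v for v in self.values if v % 2 == 0)

    total_oo = NumberList(data).even_sum()
    assert total == total_fn == total_oo == 32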
Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science, the prestige of conference papers is greater than that of journal publications.[78][79] One proposed explanation for this is the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals.[80]
Computer Science, known by its near synonyms Computing, Computer Studies, Information Technology (IT) and Information and Computing Technology (ICT), has been taught in UK schools since the days of batch processing, mark sense cards and paper tape, but usually to a select few students.[81] In 1981, the BBC produced a micro-computer and classroom network, and Computer Studies became common for GCE O level students (11–16-year-olds) and Computer Science for A level students. Its importance was recognised, and it became a compulsory part of the National Curriculum for Key Stage 3 & 4. In September 2014 it became an entitlement for all pupils over the age of 4.[82]
In the US, with 14,000 school districts deciding the curriculum, provision was fractured.[83] According to a 2010 report by the Association for Computing Machinery (ACM) and Computer Science Teachers Association (CSTA), only 14 out of 50 states have adopted significant education standards for high school computer science.[84]
Israel, New Zealand, and South Korea have included computer science in their national secondary education curricula,[85][86] and several others are following.[87]
en/2736.html.txt
ADDED
@@ -0,0 +1,81 @@
Justice, in its broadest sense, is the principle that people receive that which they deserve, with the interpretation of what constitutes "deserving" shaped by numerous fields and many differing viewpoints and perspectives, including concepts of moral correctness based on ethics, rationality, law, religion, equity and fairness.
Consequently, the application of justice differs in every culture. Early theories of justice were set out by the Ancient Greek philosophers Plato in his work The Republic, and Aristotle in his Nicomachean Ethics. Throughout history various theories have been established. Advocates of divine command theory argue that justice issues from God. In the 1600s, theorists like John Locke argued for the theory of natural law. Thinkers in the social contract tradition argued that justice is derived from the mutual agreement of everyone concerned. In the 1800s, utilitarian thinkers including John Stuart Mill argued that justice is based on the best outcomes for the greatest number of people. Theories of distributive justice concern what is to be distributed, between whom they are to be distributed, and what is the proper distribution. Egalitarians argued that justice can only exist within the coordinates of equality. John Rawls used a social contract argument to show that justice, and especially distributive justice, is a form of fairness. Property rights theorists (like Robert Nozick) also take a consequentialist view of distributive justice and argue that property rights-based justice maximizes the overall wealth of an economic system. Theories of retributive justice are concerned with punishment for wrongdoing. Restorative justice (also sometimes called "reparative justice") is an approach to justice that focuses on the needs of victims and offenders.
In his dialogue Republic, Plato uses Socrates to argue for justice that covers both the just person and the just City State. Justice is a proper, harmonious relationship between the warring parts of the person or city. Hence, Plato's definition of justice is that justice is the having and doing of what is one's own. A just man is a man in just the right place, doing his best and giving the precise equivalent of what he has received. This applies both at the individual level and at the universal level. A person's soul has three parts – reason, spirit and desire. Similarly, a city has three parts – Socrates uses the parable of the chariot to illustrate his point: a chariot works as a whole because the two horses' power is directed by the charioteer. Lovers of wisdom – philosophers, in one sense of the term – should rule because only they understand what is good. If one is ill, one goes to a medic rather than a farmer, because the medic is expert in the subject of health. Similarly, one should trust one's city to an expert in the subject of the good, not to a mere politician who tries to gain power by giving people what they want, rather than what's good for them. Socrates uses the parable of the ship to illustrate this point: the unjust city is like a ship in open ocean, crewed by a powerful but drunken captain (the common people), a group of untrustworthy advisors who try to manipulate the captain into giving them power over the ship's course (the politicians), and a navigator (the philosopher) who is the only one who knows how to get the ship to port. For Socrates, the only way the ship will reach its destination – the good – is if the navigator takes charge.[2]
Advocates of divine command theory argue that justice, and indeed the whole of morality, is the authoritative command of God. Murder is wrong and must be punished, for instance, because God says so. Some versions of the theory assert that God must be obeyed because of the nature of his relationship with humanity; others assert that God must be obeyed because he is goodness itself, and thus doing what he says would be best for everyone.
A meditation on the Divine command theory by Plato can be found in his dialogue, Euthyphro. Called the Euthyphro dilemma, it goes as follows: "Is what is morally good commanded by God because it is morally good, or is it morally good because it is commanded by God?" The implication is that if the latter is true, then justice is beyond mortal understanding; if the former is true, then morality exists independently from God, and is therefore subject to the judgment of mortals. A response, popularized in two contexts by Immanuel Kant and C. S. Lewis, is that it is deductively valid to argue that the existence of an objective morality implies the existence of God and vice versa.
For advocates of the theory that justice is part of natural law (e.g., John Locke), it involves the system of consequences that naturally derives from any action or choice. In this, it is similar to the laws of physics: in the same way as the Third of Newton's laws of Motion requires that for every action there must be an equal and opposite reaction, justice requires according individuals or groups what they actually deserve, merit, or are entitled to.[citation needed] Justice, on this account, is a universal and absolute concept: laws, principles, religions, etc., are merely attempts to codify that concept, sometimes with results that entirely contradict the true nature of justice.
In Republic by Plato, the character Thrasymachus argues that justice is the interest of the strong – merely a name for what the powerful or cunning ruler has imposed on the people.
Advocates of the social contract agree that justice is derived from the mutual agreement of everyone concerned; or, in many versions, from what they would agree to under hypothetical conditions including equality and absence of bias. This account is considered further below, under 'Justice as fairness'. The absence of bias refers to an equal ground for all people concerned in a disagreement (or trial in some cases).[citation needed]
According to utilitarian thinkers including John Stuart Mill, justice is not as fundamental as we often think. Rather, it is derived from the more basic standard of rightness, consequentialism: what is right is what has the best consequences (usually measured by the total or average welfare caused). So, the proper principles of justice are those that tend to have the best consequences. These rules may turn out to be familiar ones such as keeping contracts; but equally, they may not, depending on the facts about real consequences. Either way, what is important is those consequences, and justice is important, if at all, only as derived from that fundamental standard. Mill tries to explain our mistaken belief that justice is overwhelmingly important by arguing that it derives from two natural human tendencies: our desire to retaliate against those who hurt us, or the feeling of self-defense and our ability to put ourselves imaginatively in another's place, sympathy. So, when we see someone harmed, we project ourselves into their situation and feel a desire to retaliate on their behalf. If this process is the source of our feelings about justice, that ought to undermine our confidence in them.[3]
Theories of distributive justice need to answer three questions:
Distributive justice theorists generally do not answer questions of who has the right to enforce a particular favored distribution. On the other hand, property rights theorists argue that there is no "favored distribution." Rather, distribution should be based simply on whatever distribution results from lawful interactions or transactions (that is, transactions which are not illicit).
This section describes some widely held theories of distributive justice, and their attempts to answer these questions.
Social justice is concerned with the just relationship between individuals and their society, often considering how privileges, opportunities, and wealth ought to be distributed among individuals.[4] Social justice is also associated with social mobility, especially the ease with which individuals and families may move between social strata.[5] Social justice is distinct from cosmopolitanism, which is the idea that all people belong to a single global community with a shared morality.[6] Social justice is also distinct from egalitarianism, which is the idea that all people are equal in terms of status, value, or rights, as social justice theories do not all require equality.[7] For example, sociologist George C. Homans suggested that the root of the concept of justice is that each person should receive rewards that are proportional to their contributions.[8][9] Economist Friedrich Hayek argued that the concept of social justice was meaningless, saying that justice is a result of individual behavior and unpredictable market forces.[10] Social justice is closely related to the concept of relational justice, which is concerned with the just relationship with individuals who possess features in common such as nationality, or who are engaged in cooperation or negotiation.[11][12]
In his A Theory of Justice, John Rawls used a social contract argument to show that justice, and especially distributive justice, is a form of fairness: an impartial distribution of goods. Rawls asks us to imagine ourselves behind a veil of ignorance that denies us all knowledge of our personalities, social statuses, moral characters, wealth, talents and life plans, and then asks what theory of justice we would choose to govern our society when the veil is lifted, if we wanted to do the best that we could for ourselves. We don't know who in particular we are, and therefore can't bias the decision in our own favour. So, the decision-in-ignorance models fairness, because it excludes selfish bias. Rawls argues that each of us would reject the utilitarian theory of justice that we should maximize welfare (see below) because of the risk that we might turn out to be someone whose own good is sacrificed for greater benefits for others. Instead, we would endorse Rawls's two principles of justice:
This imagined choice justifies these principles as the principles of justice for us, because we would agree to them in a fair decision procedure. Rawls's theory distinguishes two kinds of goods – (1) the good of liberty rights and (2) social and economic goods, i.e. wealth, income and power – and applies different distributions to them – equality between citizens for (1), equality unless inequality improves the position of the worst off for (2).
In one sense, theories of distributive justice may assert that everyone should get what they deserve. Theories disagree on the meaning of what is "deserved". The main distinction is between theories that argue the basis of just deserts ought to be held equally by everyone, and therefore derive egalitarian accounts of distributive justice – and theories that argue the basis of just deserts is unequally distributed on the basis of, for instance, hard work, and therefore derive accounts of distributive justice by which some should have more than others.
According to meritocratic theories, goods, especially wealth and social status, should be distributed to match individual merit, which is usually understood as some combination of talent and hard work. According to needs-based theories, goods, especially such basic goods as food, shelter and medical care, should be distributed to meet individuals' basic needs for them. Marxism is a needs-based theory, expressed succinctly in Marx's slogan "from each according to his ability, to each according to his need".[14] According to contribution-based theories, goods should be distributed to match an individual's contribution to the overall social good.
In Anarchy, State, and Utopia, Robert Nozick argues that distributive justice is not a matter of the whole distribution matching an ideal pattern, but of each individual entitlement having the right kind of history. It is just that a person has some good (especially, some property right) if and only if they came to have it by a history made up entirely of events of two kinds:
If the chain of events leading up to the person having something meets this criterion, they are entitled to it: that they possess it is just, and what anyone else does or doesn't have or need is irrelevant.
On the basis of this theory of distributive justice, Nozick argues that all attempts to redistribute goods according to an ideal pattern, without the consent of their owners, are theft. In particular, redistributive taxation is theft.
Some property rights theorists (like Nozick) also take a consequentialist view of distributive justice and argue that property rights-based justice also has the effect of maximizing the overall wealth of an economic system. They explain that voluntary (non-coerced) transactions always have a property called Pareto efficiency. The result is that the world is better off in an absolute sense and no one is worse off. Such consequentialist property rights theorists argue that respecting property rights maximizes the number of Pareto efficient transactions in the world and minimizes the number of non-Pareto efficient transactions in the world (i.e. transactions where someone is made worse off). The result is that the world will have generated the greatest total benefit from the limited, scarce resources available in the world. Further, this will have been accomplished without taking anything away from anyone unlawfully.
According to the utilitarian, justice requires the maximization of the total or average welfare across all relevant individuals.[15] This may require sacrifice of some for the good of others, so long as everyone's good is taken impartially into account. Utilitarianism, in general, argues that the standard of justification for actions, institutions, or the whole world, is impartial welfare consequentialism, and only indirectly, if at all, to do with rights, property, need, or any other non-utilitarian criterion. These other criteria might be indirectly important, to the extent that human welfare involves them. But even then, such demands as human rights would only be elements in the calculation of overall welfare, not uncrossable barriers to action.
Theories of retributive justice are concerned with punishment for wrongdoing, and need to answer three questions:
This section considers the two major accounts of retributive justice, and their answers to these questions. Utilitarian theories look forward to the future consequences of punishment, while retributive theories look back to particular acts of wrongdoing, and attempt to balance them with deserved punishment.
According to the utilitarian, justice requires the maximization of the total or average welfare across all relevant individuals. Punishment fights crime in three ways:
So, the reason for punishment is the maximization of welfare, and punishment should be of whomever, and of whatever form and severity, are needed to meet that goal. This may sometimes justify punishing the innocent, or inflicting disproportionately severe punishments, when that will have the best consequences overall (perhaps executing a few suspected shoplifters live on television would be an effective deterrent to shoplifting, for instance). It also suggests that punishment might turn out never to be right, depending on the facts about what actual consequences it has.[16]
The retributivist will think consequentialism is mistaken. If someone does something wrong we must respond by punishing for the committed action itself, regardless of what outcomes punishment produces. Wrongdoing must be balanced or made good in some way, and so the criminal deserves to be punished. It says that all guilty people, and only guilty people, deserve appropriate punishment. This matches some strong intuitions about just punishment: that it should be proportional to the crime, and that it should be of only and all of the guilty.[citation needed] However, it is sometimes argued that retributivism is merely revenge in disguise.[17] However, there are differences between retribution and revenge: the former is impartial and has a scale of appropriateness, whereas the latter is personal and potentially unlimited in scale.[citation needed]
Restorative justice (also sometimes called "reparative justice") is an approach to justice that focuses on the needs of victims and offenders, instead of satisfying abstract legal principles or punishing the offender. Victims take an active role in the process, while offenders are encouraged to take responsibility for their actions, "to repair the harm they've done – by apologizing, returning stolen money, or community service". It is based on a theory of justice that considers crime and wrongdoing to be an offense against an individual or community rather than the state. Restorative justice that fosters dialogue between victim and offender shows the highest rates of victim satisfaction and offender accountability.[18]
Some modern philosophers have argued that Utilitarian and Retributive theories are not mutually exclusive. For example, Andrew von Hirsch, in his 1976 book Doing Justice, suggested that we have a moral obligation to punish greater crimes more than lesser ones.[19] However, so long as we adhere to that constraint then utilitarian ideals would play a significant secondary role.
It has been argued[20] that 'systematic' or 'programmatic' political and moral philosophy in the West begins, in Plato's Republic, with the question, 'What is Justice?'[21] According to most contemporary theories of justice, justice is overwhelmingly important: John Rawls claims that "Justice is the first virtue of social institutions, as truth is of systems of thought."[22] In classical approaches, evident from Plato through to Rawls, the concept of 'justice' is always construed in logical or 'etymological' opposition to the concept of injustice. Such approaches cite various examples of injustice as problems which a theory of justice must overcome. A number of post-World War II approaches do, however, challenge that seemingly obvious dualism between those two concepts.[23] Justice can be thought of as distinct from benevolence, charity, prudence, mercy, generosity, or compassion, although these dimensions are regularly understood to also be interlinked. Justice is one of the cardinal virtues. Metaphysical justice has often been associated with concepts of fate, reincarnation or Divine Providence, i.e., with a life in accordance with a cosmic plan. The association of justice with fairness is thus historically and culturally inalienable.[24]
Law raises important and complex issues concerning equality, fairness, and justice. There is an old saying that 'All are equal before the law'. The belief in equality before the law is called legal egalitarianism. In criticism of this belief, the author Anatole France said in 1894, "In its majestic equality, the law forbids rich and poor alike to sleep under bridges, beg in the streets, and steal loaves of bread."[25] With this saying, France illustrated the fundamental shortcoming of a theory of legal equality that remains blind to social inequality; the same law applied to all may have disproportionately harmful effects on the least powerful.
Equality before the law is one of the basic principles of classical liberalism.[26][27] Classical liberalism calls for equality before the law, not for equality of outcome.[26] Classical liberalism opposes pursuing group rights at the expense of individual rights.[27]
Jews, Muslims and Christians traditionally believe that justice is a present, real, right, and, specifically, governing concept along with mercy, and that justice is ultimately derived from and held by God. According to the Bible, such institutions as the Mosaic Law were created by God to require the Israelites to live by and apply His standards of justice.
The Hebrew Bible describes God as saying about the Judeo-Christian patriarch Abraham: "No, for I have chosen him, that he may charge his children and his household after him to keep the way of the Lord by doing righteousness and justice;...." (Genesis 18:19, NRSV). The Psalmist describes God as having "Righteousness and justice [as] the foundation of [His] throne;...." (Psalms 89:14, NRSV).
The New Testament also describes God and Jesus Christ as having and displaying justice, often in comparison with God displaying and supporting mercy (Matthew 5:7).
In criminal law, a sentence forms the final explicit act of a judge-ruled process, and also the symbolic principal act connected to his function. The sentence can generally involve a decree of imprisonment, a fine and/or other punishments against a defendant convicted of a crime. Laws may specify the range of penalties that can be imposed for various offenses, and sentencing guidelines sometimes regulate what punishment within those ranges can be imposed given a certain set of offense and offender characteristics. The most common purposes of sentencing in legal theory are:
In civil cases the decision is usually known as a verdict, or judgment, rather than a sentence. Civil cases are settled primarily by means of monetary compensation for harm done ("damages") and orders intended to prevent future harm (for example injunctions). Under some legal systems an award of damages involves some scope for retribution, denunciation and deterrence, by means of additional categories of damages beyond simple compensation, covering a punitive effect, social disapprobation, and potentially, deterrence, and occasionally disgorgement (forfeit of any gain, even if no loss was caused to the other party).
Evolutionary ethics and an argued evolution of morality suggest evolutionary bases for the concept of justice. Biosocial criminology research argues that human perceptions of what is appropriate criminal justice are based on how to respond to crimes in the ancestral small-group environment and that these responses may not always be appropriate for today's societies.
Studies at UCLA in 2008 have indicated that reactions to fairness are "wired" into the brain and that, "Fairness is activating the same part of the brain that responds to food in rats... This is consistent with the notion that being treated fairly satisfies a basic need".[28] Research conducted in 2003 at Emory University involving capuchin monkeys demonstrated that other cooperative animals also possess such a sense and that "inequity aversion may not be uniquely human".[29]
In a world where people are interconnected but they disagree, institutions are required to instantiate ideals of justice. These institutions may be justified by their approximate instantiation of justice, or they may be deeply unjust when compared with ideal standards – consider the institution of slavery. Justice is an ideal the world fails to live up to, sometimes due to deliberate opposition to justice despite understanding, which could be disastrous. The question of institutive justice raises issues of legitimacy, procedure, codification and interpretation, which are considered by legal theorists and by philosophers of law.[citation needed]
en/2737.html.txt
ADDED
@@ -0,0 +1,107 @@
A flood is an overflow of water that submerges land that is usually dry.[1] In the sense of "flowing water", the word may also be applied to the inflow of the tide. Floods are an area of study of the discipline hydrology and are of significant concern in agriculture, civil engineering and public health.
Flooding may occur as an overflow of water from water bodies, such as a river, lake, or ocean, in which the water overtops or breaks levees, resulting in some of that water escaping its usual boundaries,[2] or it may occur due to an accumulation of rainwater on saturated ground in an areal flood. While the size of a lake or other body of water will vary with seasonal changes in precipitation and snow melt, these changes in size are unlikely to be considered significant unless they flood property or drown domestic animals.
Floods can also occur in rivers when the flow rate exceeds the capacity of the river channel, particularly at bends or meanders in the waterway. Floods often cause damage to homes and businesses if they are in the natural flood plains of rivers. While riverine flood damage can be eliminated by moving away from rivers and other bodies of water, people have traditionally lived and worked by rivers because the land is usually flat and fertile and because rivers provide easy travel and access to commerce and industry.
Some floods develop slowly, while others can develop in just a few minutes and without visible signs of rain. Additionally, floods can be local, impacting a neighborhood or community, or very large, affecting entire river basins.
The word "flood" comes from the Old English flod, a word common to Germanic languages (compare German Flut, Dutch vloed from the same root as is seen in flow, float; also compare with Latin fluctus, flumen).
Floods can happen on flat or low-lying areas when water is supplied by rainfall or snowmelt more rapidly than it can either infiltrate or run off. The excess accumulates in place, sometimes to hazardous depths. Surface soil can become saturated, which effectively stops infiltration, where the water table is shallow, such as a floodplain, or from intense rain from one or a series of storms. Infiltration also is slow to negligible through frozen ground, rock, concrete, paving, or roofs. Areal flooding begins in flat areas like floodplains and in local depressions not connected to a stream channel, because the velocity of overland flow depends on the surface slope. Endorheic basins may experience areal flooding during periods when precipitation exceeds evaporation.[3]
Floods occur in all types of river and stream channels, from the smallest ephemeral streams in humid zones to normally-dry channels in arid climates to the world's largest rivers. When overland flow occurs on tilled fields, it can result in a muddy flood where sediments are picked up by run off and carried as suspended matter or bed load. Localized flooding may be caused or exacerbated by drainage obstructions such as landslides, ice, debris, or beaver dams.
Slow-rising floods most commonly occur in large rivers with large catchment areas. The increase in flow may be the result of sustained rainfall, rapid snow melt, monsoons, or tropical cyclones. However, large rivers may have rapid flooding events in areas with dry climate, since they may have large basins but small river channels and rainfall can be very intense in smaller areas of those basins.
Rapid flooding events, including flash floods, more often occur on smaller rivers, rivers with steep valleys, rivers that flow for much of their length over impermeable terrain, or normally-dry channels. The cause may be localized convective precipitation (intense thunderstorms) or sudden release from an upstream impoundment created behind a dam, landslide, or glacier. In one instance, a flash flood killed eight people enjoying the water on a Sunday afternoon at a popular waterfall in a narrow canyon. Without any observed rainfall, the flow rate increased from about 50 to 1,500 cubic feet per second (1.4 to 42 m3/s) in just one minute.[4] Two larger floods occurred at the same site within a week, but no one was at the waterfall on those days. The deadly flood resulted from a thunderstorm over part of the drainage basin, where steep, bare rock slopes are common and the thin soil was already saturated.
Flash floods are the most common flood type in normally-dry channels in arid zones, known as arroyos in the southwest United States and many other names elsewhere. In that setting, the first flood water to arrive is depleted as it wets the sandy stream bed. The leading edge of the flood thus advances more slowly than later and higher flows. As a result, the rising limb of the hydrograph becomes ever quicker as the flood moves downstream, until the flow rate is so great that the depletion by wetting soil becomes insignificant.
Flooding in estuaries is commonly caused by a combination of storm surges caused by winds and low barometric pressure and large waves meeting high upstream river flows.
Coastal areas may be flooded by storm surges combining with high tides and large wave events at sea, resulting in waves over-topping flood defenses or in severe cases by tsunami or tropical cyclones. A storm surge, from either a tropical cyclone or an extratropical cyclone, falls within this category. Research from the NHC (National Hurricane Center) explains: "Storm surge is an additional rise of water generated by a storm, over and above the predicted astronomical tides. Storm surge should not be confused with storm tide, which is defined as the water level rise due to the combination of storm surge and the astronomical tide. This rise in water level can cause extreme flooding in coastal areas particularly when storm surge coincides with spring tide, resulting in storm tides reaching up to 20 feet or more in some cases."[5]
Urban flooding is the inundation of land or property in a built environment, particularly in more densely populated areas, caused by rainfall overwhelming the capacity of drainage systems, such as storm sewers. Although sometimes triggered by events such as flash flooding or snowmelt, urban flooding is a condition, characterized by its repetitive and systemic impacts on communities, that can happen regardless of whether or not affected communities are located within designated floodplains or near any body of water.[6] Aside from potential overflow of rivers and lakes, snowmelt, stormwater or water released from damaged water mains may accumulate on property and in public rights-of-way, seep through building walls and floors, or backup into buildings through sewer pipes, toilets and sinks.
In urban areas, flood effects can be exacerbated by existing paved streets and roads, which increase the speed of flowing water. Impervious surfaces prevent rainfall from infiltrating into the ground, thereby causing a higher surface run-off that may be in excess of local drainage capacity.[7]
The flood flow in urbanized areas constitutes a hazard to both the population and infrastructure. Some recent catastrophes include the inundations of Nîmes (France) in 1998 and Vaison-la-Romaine (France) in 1992, the flooding of New Orleans (USA) in 2005, and the flooding in Rockhampton, Bundaberg, Brisbane during the 2010–2011 summer in Queensland (Australia). Flood flows in urban environments have been studied relatively recently despite many centuries of flood events.[8] Some recent research has considered the criteria for safe evacuation of individuals in flooded areas.[9]
Catastrophic riverine flooding is usually associated with major infrastructure failures such as the collapse of a dam, but they may also be caused by drainage channel modification from a landslide, earthquake or volcanic eruption. Examples include outburst floods and lahars. Tsunamis can cause catastrophic coastal flooding, most commonly resulting from undersea earthquakes.
The amount, location, and timing of water reaching a drainage channel from natural precipitation and controlled or uncontrolled reservoir releases determines the flow at downstream locations. Some precipitation evaporates, some slowly percolates through soil, some may be temporarily sequestered as snow or ice, and some may produce rapid runoff from surfaces including rock, pavement, roofs, and saturated or frozen ground. The fraction of incident precipitation promptly reaching a drainage channel has been observed from nil for light rain on dry, level ground to as high as 170 percent for warm rain on accumulated snow.[10]
Most precipitation records are based on a measured depth of water received within a fixed time interval. Frequency of a precipitation threshold of interest may be determined from the number of measurements exceeding that threshold value within the total time period for which observations are available. Individual data points are converted to intensity by dividing each measured depth by the period of time between observations. This intensity will be less than the actual peak intensity if the duration of the rainfall event was less than the fixed time interval for which measurements are reported. Convective precipitation events (thunderstorms) tend to produce shorter duration storm events than orographic precipitation. Duration, intensity, and frequency of rainfall events are important to flood prediction. Short duration precipitation is more significant to flooding within small drainage basins.[11]
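A small sketch of the depth-to-intensity conversion and threshold-exceedance count described above (the gauge readings are invented):

    hourly_depth_mm = [0.0, 2.5, 12.0, 30.5, 8.0, 0.5]   # hypothetical hourly readings
    interval_hours = 1.0

    intensities = [d / interval_hours for d in hourly_depth_mm]   # mm per hour
    threshold = 10.0                                              # intensity of interest
    exceedances = sum(1 for i in intensities if i > threshold)

    print(f"peak intensity: {max(intensities):.1f} mm/h")
    print(f"hours above {threshold} mm/h: {exceedances} of {len(intensities)}")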
The most important upslope factor in determining flood magnitude is the land area of the watershed upstream of the area of interest. Rainfall intensity is the second most important factor for watersheds of less than approximately 30 square miles or 80 square kilometres. The main channel slope is the second most important factor for larger watersheds. Channel slope and rainfall intensity become the third most important factors for small and large watersheds, respectively.[12]
Time of Concentration is the time required for runoff from the most distant point of the upstream drainage area to reach the point of the drainage channel controlling flooding of the area of interest. The time of concentration defines the critical duration of peak rainfall for the area of interest.[13] The critical duration of intense rainfall might be only a few minutes for roof and parking lot drainage structures, while cumulative rainfall over several days would be critical for river basins.
Water flowing downhill ultimately encounters downstream conditions that slow its movement. In coastal lowlands, the final downstream limit is often the ocean or coastal bars that form natural lakes. In flooded lowlands, elevation changes such as tidal fluctuations are significant determinants of coastal and estuarine flooding. Less predictable events like tsunamis and storm surges may also cause elevation changes in large bodies of water. Elevation of flowing water is controlled by the geometry of the flow channel and, especially, by depth of channel, speed of flow and amount of sediments in it.[12] Flow channel restrictions like bridges and canyons tend to control water elevation above the restriction. The actual control point for any given reach of the drainage may change with changing water elevation, so a closer point may control for lower water levels until a more distant point controls at higher water levels.
Effective flood channel geometry may be changed by growth of vegetation, accumulation of ice or debris, or construction of bridges, buildings, or levees within the flood channel.
Extreme flood events often result from coincidence such as unusually intense, warm rainfall melting heavy snow pack, producing channel obstructions from floating ice, and releasing small impoundments like beaver dams.[14] Coincident events may cause extensive flooding to be more frequent than anticipated from simplistic statistical prediction models considering only precipitation runoff flowing within unobstructed drainage channels.[15] Debris modification of channel geometry is common when heavy flows move uprooted woody vegetation and flood-damaged structures and vehicles, including boats and railway equipment. Recent field measurements during the 2010–11 Queensland floods showed that any criterion solely based upon the flow velocity, water depth or specific momentum cannot account for the hazards caused by velocity and water depth fluctuations.[8] These considerations ignore further the risks associated with large debris entrained by the flow motion.[9]
Some researchers have mentioned the storage effect in urban areas with transportation corridors created by cut and fill. Culverted fills may be converted to impoundments if the culverts become blocked by debris, and flow may be diverted along streets. Several studies have looked into the flow patterns and redistribution in streets during storm events and the implication on flood modelling.[16]
The primary effects of flooding include loss of life and damage to buildings and other structures, including bridges, sewerage systems, roadways, and canals.
Floods also frequently damage power transmission and sometimes power generation, which then has knock-on effects caused by the loss of power. This includes loss of drinking water treatment and water supply, which may result in loss of drinking water or severe water contamination. It may also cause the loss of sewage disposal facilities. Lack of clean water combined with human sewage in the flood waters raises the risk of waterborne diseases, which can include typhoid, giardia, cryptosporidium, cholera and many other diseases depending upon the location of the flood.
Damage to roads and transport infrastructure may make it difficult to mobilize aid to those affected or to provide emergency health treatment.
Flood waters typically inundate farm land, making the land unworkable and preventing crops from being planted or harvested, which can lead to shortages of food both for humans and farm animals. Entire harvests for a country can be lost in extreme flood circumstances. Some tree species may not survive prolonged flooding of their root systems.[17]
Economic hardship due to a temporary decline in tourism, rebuilding costs, or food shortages leading to price increases is a common after-effect of severe flooding. Flooding may also cause psychological damage to those affected, in particular where deaths, serious injuries and loss of property occur.
Urban flooding can cause chronically wet houses, leading to the growth of indoor mold and resulting in adverse health effects, particularly respiratory symptoms.[18] Urban flooding also has significant economic implications for affected neighborhoods. In the United States, industry experts estimate that wet basements can lower property values by 10–25 percent and are cited among the top reasons for not purchasing a home.[19] According to the U.S. Federal Emergency Management Agency (FEMA), almost 40 percent of small businesses never reopen their doors following a flooding disaster.[20] In the United States, insurance is available against flood damage to both homes and businesses.[21]
Floods (in particular more frequent or smaller floods) can also bring many benefits, such as recharging ground water, making soil more fertile and increasing nutrients in some soils. Flood waters provide much needed water resources in arid and semi-arid regions where precipitation can be very unevenly distributed throughout the year, and kill pests in farmland. Freshwater floods particularly play an important role in maintaining ecosystems in river corridors and are a key factor in maintaining floodplain biodiversity.[22] Flooding can spread nutrients to lakes and rivers, which can lead to increased biomass and improved fisheries for a few years.
For some fish species, an inundated floodplain may form a highly suitable location for spawning with few predators and enhanced levels of nutrients or food.[23] Fish, such as the weather fish, make use of floods in order to reach new habitats. Bird populations may also profit from the boost in food production caused by flooding.[24]
Periodic flooding was essential to the well-being of ancient communities along the Tigris-Euphrates Rivers, the Nile River, the Indus River, the Ganges and the Yellow River among others. The viability of hydropower, a renewable source of energy, is also higher in flood prone regions.
In the United States, the National Weather Service gives out the advice "Turn Around, Don't Drown" for floods; that is, it recommends that people get out of the area of a flood, rather than trying to cross it. At the most basic level, the best defense against floods is to seek higher ground for high-value uses while balancing the foreseeable risks with the benefits of occupying flood hazard zones.[25]:22–23 Critical community-safety facilities, such as hospitals, emergency-operations centers, and police, fire, and rescue services, should be built in areas least at risk of flooding. Structures, such as bridges, that must unavoidably be in flood hazard areas should be designed to withstand flooding. Areas most at risk for flooding could be put to valuable uses that could be abandoned temporarily as people retreat to safer areas when a flood is imminent.
Planning for flood safety involves many aspects of analysis and engineering, including:
Each topic presents distinct yet related questions with varying scope and scale in time, space, and the people involved. Attempts to understand and manage the mechanisms at work in floodplains have been made for at least six millennia.[26][page needed]
In the United States, the Association of State Floodplain Managers works to promote education, policies, and activities that mitigate current and future losses, costs, and human suffering caused by flooding and to protect the natural and beneficial functions of floodplains – all without causing adverse impacts.[27] A portfolio of best practice examples for disaster mitigation in the United States is available from the Federal Emergency Management Agency.[28]
In many countries around the world, waterways prone to floods are often carefully managed. Defenses such as detention basins, levees,[29] bunds, reservoirs, and weirs are used to prevent waterways from overflowing their banks. When these defenses fail, emergency measures such as sandbags or portable inflatable tubes are often used to try to stem flooding. Coastal flooding has been addressed in portions of Europe and the Americas with coastal defenses, such as sea walls, beach nourishment, and barrier islands.
In the riparian zone near rivers and streams, erosion control measures can be taken to try to slow down or reverse the natural forces that cause many waterways to meander over long periods of time. Flood controls, such as dams, can be built and maintained over time to try to reduce the occurrence and severity of floods as well. In the United States, the U.S. Army Corps of Engineers maintains a network of such flood control dams.
In areas prone to urban flooding, one solution is the repair and expansion of man-made sewer systems and stormwater infrastructure. Another strategy is to reduce impervious surfaces in streets, parking lots and buildings through natural drainage channels, porous paving, and wetlands (collectively known as green infrastructure or sustainable urban drainage systems (SUDS)). Areas identified as flood-prone can be converted into parks and playgrounds that can tolerate occasional flooding. Ordinances can be adopted to require developers to retain stormwater on site and require buildings to be elevated, protected by floodwalls and levees, or designed to withstand temporary inundation. Property owners can also invest in solutions themselves, such as re-landscaping their property to take the flow of water away from their building and installing rain barrels, sump pumps, and check valves.
A series of annual maximum flow rates in a stream reach can be analyzed statistically to estimate the 100-year flood and floods of other recurrence intervals there. Similar estimates from many sites in a hydrologically similar region can be related to measurable characteristics of each drainage basin to allow indirect estimation of flood recurrence intervals for stream reaches without sufficient data for direct analysis.
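A minimal sketch of the statistical idea (not a full flood-frequency analysis: the peak flows are invented, the record is far too short, and agencies normally fit a probability distribution rather than rely on plotting positions alone). The Weibull plotting-position formula T = (n + 1) / rank gives an empirical recurrence interval for each observed annual maximum flow.

    annual_peak_m3s = [410, 290, 530, 350, 620, 300, 480, 270, 560, 330]

    n = len(annual_peak_m3s)
    ranked = sorted(annual_peak_m3s, reverse=True)      # rank 1 = largest flood on record
    for rank, q in enumerate(ranked, start=1):
        recurrence_years = (n + 1) / rank
        print(f"{q:4d} m3/s  ~{recurrence_years:5.1f}-year flood (empirical estimate)")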
Physical process models of channel reaches are generally well understood and will calculate the depth and area of inundation for given channel conditions and a specified flow rate, such as for use in floodplain mapping and flood insurance. Conversely, given the observed inundation area of a recent flood and the channel conditions, a model can calculate the flow rate. Applied to various potential channel configurations and flow rates, a reach model can contribute to selecting an optimum design for a modified channel. Various reach models are available as of 2015, either 1D models (flood levels measured in the channel) or 2D models (variable flood depths measured across the extent of a floodplain). HEC-RAS,[30] the Hydraulic Engineering Center model, is among the most popular software, if only because it is available free of charge. Other models such as TUFLOW[31] combine 1D and 2D components to derive flood depths across both river channels and the entire floodplain.
Physical process models of complete drainage basins are even more complex. Although many processes are well understood at a point or for a small area, others are poorly understood at all scales, and process interactions under normal or extreme climatic conditions may be unknown. Basin models typically combine land-surface process components (to estimate how much rainfall or snowmelt reaches a channel) with a series of reach models. For example, a basin model can calculate the runoff hydrograph that might result from a 100-year storm, although the recurrence interval of a storm is rarely equal to that of the associated flood. Basin models are commonly used in flood forecasting and warning, as well as in analysis of the effects of land use change and climate change.
Anticipating floods before they occur allows for precautions to be taken and people to be warned[32] so that they can be prepared in advance for flooding conditions. For example, farmers can remove animals from low-lying areas and utility services can put in place emergency provisions to re-route services if needed. Emergency services can also make provisions to have enough resources available ahead of time to respond to emergencies as they occur. People can evacuate areas to be flooded.
In order to make the most accurate flood forecasts for waterways, it is best to have a long time-series of historical data that relates stream flows to measured past rainfall events.[33] Coupling this historical information with real-time knowledge about volumetric capacity in catchment areas, such as spare capacity in reservoirs, ground-water levels, and the degree of saturation of area aquifers is also needed in order to make the most accurate flood forecasts.
Radar estimates of rainfall and general weather forecasting techniques are also important components of good flood forecasting. In areas where good quality data is available, the intensity and height of a flood can be predicted with fairly good accuracy and plenty of lead time. The output of a flood forecast is typically a maximum expected water level and the likely time of its arrival at key locations along a waterway,[34] and it also may allow for the computation of the likely statistical return period of a flood. In many developed countries, urban areas at risk of flooding are protected against a 100-year flood – that is a flood that has a probability of around 63% of occurring in any 100-year period of time.
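Where the "around 63%" figure comes from (a worked check, using the standard assumption that years are independent): a 100-year flood has probability 1/100 in any given year, so the chance of at least one such flood in a 100-year period is 1 - (1 - 1/100) ** 100.

    p_any_year = 1 / 100
    p_at_least_once = 1 - (1 - p_any_year) ** 100
    print(f"{p_at_least_once:.3f}")   # about 0.634, i.e. roughly 63%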
According to the U.S. National Weather Service (NWS) Northeast River Forecast Center (RFC) in Taunton, Massachusetts, a rule of thumb for flood forecasting in urban areas is that it takes at least 1 inch (25 mm) of rainfall in around an hour's time in order to start significant ponding of water on impermeable surfaces. Many NWS RFCs routinely issue Flash Flood Guidance and Headwater Guidance, which indicate the general amount of rainfall that would need to fall in a short period of time in order to cause flash flooding or flooding on larger water basins.[35]
In the United States, an integrated approach to real-time hydrologic computer modelling utilizes observed data from the U.S. Geological Survey (USGS),[36] various cooperative observing networks,[37] various automated weather sensors, the NOAA National Operational Hydrologic Remote Sensing Center (NOHRSC),[38] various hydroelectric companies, etc. combined with quantitative precipitation forecasts (QPF) of expected rainfall and/or snow melt to generate daily or as-needed hydrologic forecasts.[34] The NWS also cooperates with Environment Canada on hydrologic forecasts that affect both the US and Canada, like in the area of the Saint Lawrence Seaway.
The Global Flood Monitoring System, "GFMS," a computer tool which maps flood conditions worldwide, is available online. Users anywhere in the world can use GFMS to determine when floods may occur in their area. GFMS uses precipitation data from NASA's Earth observing satellites and the Global Precipitation Measurement satellite, "GPM." Rainfall data from GPM is combined with a land surface model that incorporates vegetation cover, soil type, and terrain to determine how much water is soaking into the ground, and how much water is flowing into streamflow.
Users can view statistics for rainfall, streamflow, water depth, and flooding every 3 hours, at each 12 kilometer gridpoint on a global map. Forecasts for these parameters are 5 days into the future. Users can zoom in to see inundation maps (areas estimated to be covered with water) in 1 kilometer resolution.[39][40]
Below is a list of the deadliest floods worldwide, showing events with death tolls at or above 100,000 individuals.
Flood myths (great, civilization-destroying floods) are widespread in many cultures.
Flood events in the form of divine retribution have also been described in religious texts. As a prime example, the Genesis flood narrative plays a prominent role in Judaism, Christianity and Islam.
en/2738.html.txt
ADDED
@@ -0,0 +1,107 @@
A flood is an overflow of water that submerges land that is usually dry.[1] In the sense of "flowing water", the word may also be applied to the inflow of the tide. Floods are an area of study of the discipline hydrology and are of significant concern in agriculture, civil engineering and public health.
Flooding may occur as an overflow of water from water bodies, such as a river, lake, or ocean, in which the water overtops or breaks levees, resulting in some of that water escaping its usual boundaries,[2] or it may occur due to an accumulation of rainwater on saturated ground in an areal flood. While the size of a lake or other body of water will vary with seasonal changes in precipitation and snow melt, these changes in size are unlikely to be considered significant unless they flood property or drown domestic animals.
Floods can also occur in rivers when the flow rate exceeds the capacity of the river channel, particularly at bends or meanders in the waterway. Floods often cause damage to homes and businesses if they are in the natural flood plains of rivers. While riverine flood damage can be eliminated by moving away from rivers and other bodies of water, people have traditionally lived and worked by rivers because the land is usually flat and fertile and because rivers provide easy travel and access to commerce and industry.
Some floods develop slowly, while others can develop in just a few minutes and without visible signs of rain. Additionally, floods can be local, impacting a neighborhood or community, or very large, affecting entire river basins.
The word "flood" comes from the Old English flod, a word common to Germanic languages (compare German Flut, Dutch vloed from the same root as is seen in flow, float; also compare with Latin fluctus, flumen).
Floods can happen on flat or low-lying areas when water is supplied by rainfall or snowmelt more rapidly than it can either infiltrate or run off. The excess accumulates in place, sometimes to hazardous depths. Surface soil can become saturated, which effectively stops infiltration, where the water table is shallow, such as a floodplain, or from intense rain from one or a series of storms. Infiltration also is slow to negligible through frozen ground, rock, concrete, paving, or roofs. Areal flooding begins in flat areas like floodplains and in local depressions not connected to a stream channel, because the velocity of overland flow depends on the surface slope. Endorheic basins may experience areal flooding during periods when precipitation exceeds evaporation.[3]
Floods occur in all types of river and stream channels, from the smallest ephemeral streams in humid zones to normally-dry channels in arid climates to the world's largest rivers. When overland flow occurs on tilled fields, it can result in a muddy flood where sediments are picked up by run off and carried as suspended matter or bed load. Localized flooding may be caused or exacerbated by drainage obstructions such as landslides, ice, debris, or beaver dams.
Slow-rising floods most commonly occur in large rivers with large catchment areas. The increase in flow may be the result of sustained rainfall, rapid snow melt, monsoons, or tropical cyclones. However, large rivers may have rapid flooding events in areas with dry climate, since they may have large basins but small river channels and rainfall can be very intense in smaller areas of those basins.
Rapid flooding events, including flash floods, more often occur on smaller rivers, rivers with steep valleys, rivers that flow for much of their length over impermeable terrain, or normally-dry channels. The cause may be localized convective precipitation (intense thunderstorms) or sudden release from an upstream impoundment created behind a dam, landslide, or glacier. In one instance, a flash flood killed eight people enjoying the water on a Sunday afternoon at a popular waterfall in a narrow canyon. Without any observed rainfall, the flow rate increased from about 50 to 1,500 cubic feet per second (1.4 to 42 m3/s) in just one minute.[4] Two larger floods occurred at the same site within a week, but no one was at the waterfall on those days. The deadly flood resulted from a thunderstorm over part of the drainage basin, where steep, bare rock slopes are common and the thin soil was already saturated.
Flash floods are the most common flood type in normally-dry channels in arid zones, known as arroyos in the southwest United States and many other names elsewhere. In that setting, the first flood water to arrive is depleted as it wets the sandy stream bed. The leading edge of the flood thus advances more slowly than later and higher flows. As a result, the rising limb of the hydrograph becomes ever quicker as the flood moves downstream, until the flow rate is so great that the depletion by wetting soil becomes insignificant.
Flooding in estuaries is commonly caused by a combination of storm surges caused by winds and low barometric pressure and large waves meeting high upstream river flows.
Coastal areas may be flooded by storm surges combining with high tides and large wave events at sea, resulting in waves over-topping flood defenses or in severe cases by tsunami or tropical cyclones. A storm surge, from either a tropical cyclone or an extratropical cyclone, falls within this category. Research from the NHC (National Hurricane Center) explains: "Storm surge is an additional rise of water generated by a storm, over and above the predicted astronomical tides. Storm surge should not be confused with storm tide, which is defined as the water level rise due to the combination of storm surge and the astronomical tide. This rise in water level can cause extreme flooding in coastal areas particularly when storm surge coincides with spring tide, resulting in storm tides reaching up to 20 feet or more in some cases."[5]
Urban flooding is the inundation of land or property in a built environment, particularly in more densely populated areas, caused by rainfall overwhelming the capacity of drainage systems, such as storm sewers. Although sometimes triggered by events such as flash flooding or snowmelt, urban flooding is a condition, characterized by its repetitive and systemic impacts on communities, that can happen regardless of whether or not affected communities are located within designated floodplains or near any body of water.[6] Aside from potential overflow of rivers and lakes, snowmelt, stormwater or water released from damaged water mains may accumulate on property and in public rights-of-way, seep through building walls and floors, or back up into buildings through sewer pipes, toilets and sinks.
In urban areas, flood effects can be exacerbated by existing paved streets and roads, which increase the speed of flowing water. Impervious surfaces prevent rainfall from infiltrating into the ground, thereby causing a higher surface run-off that may be in excess of local drainage capacity.[7]
The flood flow in urbanized areas constitutes a hazard to both the population and infrastructure. Some recent catastrophes include the inundations of Nîmes (France) in 1998 and Vaison-la-Romaine (France) in 1992, the flooding of New Orleans (USA) in 2005, and the flooding in Rockhampton, Bundaberg, Brisbane during the 2010–2011 summer in Queensland (Australia). Flood flows in urban environments have been studied relatively recently despite many centuries of flood events.[8] Some recent research has considered the criteria for safe evacuation of individuals in flooded areas.[9]
Catastrophic riverine flooding is usually associated with major infrastructure failures such as the collapse of a dam, but they may also be caused by drainage channel modification from a landslide, earthquake or volcanic eruption. Examples include outburst floods and lahars. Tsunamis can cause catastrophic coastal flooding, most commonly resulting from undersea earthquakes.
The amount, location, and timing of water reaching a drainage channel from natural precipitation and controlled or uncontrolled reservoir releases determines the flow at downstream locations. Some precipitation evaporates, some slowly percolates through soil, some may be temporarily sequestered as snow or ice, and some may produce rapid runoff from surfaces including rock, pavement, roofs, and saturated or frozen ground. The fraction of incident precipitation promptly reaching a drainage channel has been observed from nil for light rain on dry, level ground to as high as 170 percent for warm rain on accumulated snow.[10]
Most precipitation records are based on a measured depth of water received within a fixed time interval. Frequency of a precipitation threshold of interest may be determined from the number of measurements exceeding that threshold value within the total time period for which observations are available. Individual data points are converted to intensity by dividing each measured depth by the period of time between observations. This intensity will be less than the actual peak intensity if the duration of the rainfall event was less than the fixed time interval for which measurements are reported. Convective precipitation events (thunderstorms) tend to produce shorter duration storm events than orographic precipitation. Duration, intensity, and frequency of rainfall events are important to flood prediction. Short duration precipitation is more significant to flooding within small drainage basins.[11]
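As a minimal illustration of the depth-to-intensity conversion described above (the gauge readings and reporting interval below are invented, not taken from any cited dataset):

    # Convert fixed-interval rainfall depths (mm per reading) into intensities (mm/h).
    # If the real burst was shorter than the reporting interval, the computed value
    # understates the true peak intensity, as noted in the text.
    def intensities_mm_per_hour(depths_mm, interval_minutes):
        hours = interval_minutes / 60.0
        return [depth / hours for depth in depths_mm]

    readings = [0.5, 4.0, 11.0, 3.0]                   # hypothetical 15-minute readings
    print(intensities_mm_per_hour(readings, 15))       # [2.0, 16.0, 44.0, 12.0] mm/h
    print(max(intensities_mm_per_hour(readings, 15)))  # peak of about 44 mm/h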
The most important upslope factor in determining flood magnitude is the land area of the watershed upstream of the area of interest. Rainfall intensity is the second most important factor for watersheds of less than approximately 30 square miles or 80 square kilometres. The main channel slope is the second most important factor for larger watersheds. Channel slope and rainfall intensity become the third most important factors for small and large watersheds, respectively.[12]
Time of Concentration is the time required for runoff from the most distant point of the upstream drainage area to reach the point of the drainage channel controlling flooding of the area of interest. The time of concentration defines the critical duration of peak rainfall for the area of interest.[13] The critical duration of intense rainfall might be only a few minutes for roof and parking lot drainage structures, while cumulative rainfall over several days would be critical for river basins.
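One widely used empirical estimate of time of concentration for small rural catchments is the Kirpich formula; the sketch below uses it purely as an illustration under assumed catchment dimensions, and other methods may be more appropriate for any particular basin:

    # Kirpich estimate of time of concentration for small rural watersheds:
    #   tc = 0.0078 * L**0.77 * S**-0.385
    # tc in minutes, L the longest flow-path length in feet, S its average slope (ft/ft).
    def kirpich_tc_minutes(flow_length_ft: float, slope_ft_per_ft: float) -> float:
        return 0.0078 * flow_length_ft ** 0.77 * slope_ft_per_ft ** -0.385

    # Hypothetical catchment: a 5,000 ft flow path with a 2% slope -> roughly 25 minutes.
    print(f"{kirpich_tc_minutes(5000, 0.02):.1f} minutes")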
Water flowing downhill ultimately encounters downstream conditions that slow its movement. In coastal lowlands, the final limit is often the ocean or natural bars that impound coastal lakes. In low-lying flood plains, elevation changes such as tidal fluctuations are significant determinants of coastal and estuarine flooding. Less predictable events like tsunamis and storm surges may also cause elevation changes in large bodies of water. The elevation of flowing water is controlled by the geometry of the flow channel and, especially, by the depth of the channel, the speed of flow and the amount of sediment in it.[12] Flow channel restrictions like bridges and canyons tend to control water elevation above the restriction. The actual control point for any given reach of the drainage may change with changing water elevation, so a closer point may control for lower water levels until a more distant point controls at higher water levels.
Effective flood channel geometry may be changed by growth of vegetation, accumulation of ice or debris, or construction of bridges, buildings, or levees within the flood channel.
Extreme flood events often result from coincidence such as unusually intense, warm rainfall melting heavy snow pack, producing channel obstructions from floating ice, and releasing small impoundments like beaver dams.[14] Coincident events may cause extensive flooding to be more frequent than anticipated from simplistic statistical prediction models considering only precipitation runoff flowing within unobstructed drainage channels.[15] Debris modification of channel geometry is common when heavy flows move uprooted woody vegetation and flood-damaged structures and vehicles, including boats and railway equipment. Recent field measurements during the 2010–11 Queensland floods showed that any criterion solely based upon the flow velocity, water depth or specific momentum cannot account for the hazards caused by velocity and water depth fluctuations.[8] These considerations ignore further the risks associated with large debris entrained by the flow motion.[9]
Some researchers have mentioned the storage effect in urban areas with transportation corridors created by cut and fill. Culverted fills may be converted to impoundments if the culverts become blocked by debris, and flow may be diverted along streets. Several studies have looked into the flow patterns and redistribution in streets during storm events and the implication on flood modelling.[16]
The primary effects of flooding include loss of life and damage to buildings and other structures, including bridges, sewerage systems, roadways, and canals.
Floods also frequently damage power transmission and sometimes power generation, which then has knock-on effects caused by the loss of power. This includes loss of drinking water treatment and water supply, which may result in loss of drinking water or severe water contamination. It may also cause the loss of sewage disposal facilities. Lack of clean water combined with human sewage in the flood waters raises the risk of waterborne diseases, which can include typhoid, giardia, cryptosporidium, cholera and many other diseases depending upon the location of the flood.
Damage to roads and transport infrastructure may make it difficult to mobilize aid to those affected or to provide emergency health treatment.
Flood waters typically inundate farm land, making the land unworkable and preventing crops from being planted or harvested, which can lead to shortages of food both for humans and farm animals. Entire harvests for a country can be lost in extreme flood circumstances. Some tree species may not survive prolonged flooding of their root systems.[17]
Economic hardship due to a temporary decline in tourism, rebuilding costs, or food shortages leading to price increases is a common after-effect of severe flooding. Flooding may also cause psychological damage to those affected, in particular where deaths, serious injuries and loss of property occur.
Urban flooding can cause chronically wet houses, leading to the growth of indoor mold and resulting in adverse health effects, particularly respiratory symptoms.[18] Urban flooding also has significant economic implications for affected neighborhoods. In the United States, industry experts estimate that wet basements can lower property values by 10–25 percent and are cited among the top reasons for not purchasing a home.[19] According to the U.S. Federal Emergency Management Agency (FEMA), almost 40 percent of small businesses never reopen their doors following a flooding disaster.[20] In the United States, insurance is available against flood damage to both homes and businesses.[21]
Floods (in particular more frequent or smaller floods) can also bring many benefits, such as recharging ground water, making soil more fertile and increasing nutrients in some soils. Flood waters provide much-needed water resources in arid and semi-arid regions where precipitation can be very unevenly distributed throughout the year, and they also kill pests on farming land. Freshwater floods particularly play an important role in maintaining ecosystems in river corridors and are a key factor in maintaining floodplain biodiversity.[22] Flooding can spread nutrients to lakes and rivers, which can lead to increased biomass and improved fisheries for a few years.
For some fish species, an inundated floodplain may form a highly suitable location for spawning with few predators and enhanced levels of nutrients or food.[23] Fish, such as the weather fish, make use of floods in order to reach new habitats. Bird populations may also profit from the boost in food production caused by flooding.[24]
Periodic flooding was essential to the well-being of ancient communities along the Tigris-Euphrates Rivers, the Nile River, the Indus River, the Ganges and the Yellow River among others. The viability of hydropower, a renewable source of energy, is also higher in flood prone regions.
In the United States, the National Weather Service gives out the advice "Turn Around, Don't Drown" for floods; that is, it recommends that people get out of the area of a flood, rather than trying to cross it. At the most basic level, the best defense against floods is to seek higher ground for high-value uses while balancing the foreseeable risks with the benefits of occupying flood hazard zones.[25]:22–23 Critical community-safety facilities, such as hospitals, emergency-operations centers, and police, fire, and rescue services, should be built in areas least at risk of flooding. Structures, such as bridges, that must unavoidably be in flood hazard areas should be designed to withstand flooding. Areas most at risk for flooding could be put to valuable uses that could be abandoned temporarily as people retreat to safer areas when a flood is imminent.
Planning for flood safety involves many related topics of analysis and engineering.
Each topic presents distinct yet related questions with varying scope and scale in time, space, and the people involved. Attempts to understand and manage the mechanisms at work in floodplains have been made for at least six millennia.[26][page needed]
In the United States, the Association of State Floodplain Managers works to promote education, policies, and activities that mitigate current and future losses, costs, and human suffering caused by flooding and to protect the natural and beneficial functions of floodplains – all without causing adverse impacts.[27] A portfolio of best practice examples for disaster mitigation in the United States is available from the Federal Emergency Management Agency.[28]
In many countries around the world, waterways prone to floods are often carefully managed. Defenses such as detention basins, levees,[29] bunds, reservoirs, and weirs are used to prevent waterways from overflowing their banks. When these defenses fail, emergency measures such as sandbags or portable inflatable tubes are often used to try to stem flooding. Coastal flooding has been addressed in portions of Europe and the Americas with coastal defenses, such as sea walls, beach nourishment, and barrier islands.
In the riparian zone near rivers and streams, erosion control measures can be taken to try to slow down or reverse the natural forces that cause many waterways to meander over long periods of time. Flood controls, such as dams, can be built and maintained over time to try to reduce the occurrence and severity of floods as well. In the United States, the U.S. Army Corps of Engineers maintains a network of such flood control dams.
In areas prone to urban flooding, one solution is the repair and expansion of man-made sewer systems and stormwater infrastructure. Another strategy is to reduce impervious surfaces in streets, parking lots and buildings through natural drainage channels, porous paving, and wetlands (collectively known as green infrastructure or sustainable urban drainage systems (SUDS)). Areas identified as flood-prone can be converted into parks and playgrounds that can tolerate occasional flooding. Ordinances can be adopted to require developers to retain stormwater on site and require buildings to be elevated, protected by floodwalls and levees, or designed to withstand temporary inundation. Property owners can also invest in solutions themselves, such as re-landscaping their property to take the flow of water away from their building and installing rain barrels, sump pumps, and check valves.
A series of annual maximum flow rates in a stream reach can be analyzed statistically to estimate the 100-year flood and floods of other recurrence intervals there. Similar estimates from many sites in a hydrologically similar region can be related to measurable characteristics of each drainage basin to allow indirect estimation of flood recurrence intervals for stream reaches without sufficient data for direct analysis.
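A minimal sketch of such an analysis, using the simple Weibull plotting position rather than a fitted distribution (the flow values below are invented; real studies typically fit a distribution such as Log-Pearson Type III to much longer records):

    # Empirical recurrence intervals from annual maximum flows using the Weibull
    # plotting position T = (n + 1) / rank, where rank 1 is the largest maximum.
    def recurrence_intervals(annual_max_flows):
        n = len(annual_max_flows)
        ranked = sorted(annual_max_flows, reverse=True)
        return [(flow, (n + 1) / rank) for rank, flow in enumerate(ranked, start=1)]

    flows = [120, 310, 95, 180, 260, 150, 410, 140, 220, 170]   # hypothetical m3/s
    for flow, t in recurrence_intervals(flows):
        print(f"{flow:5.0f} m3/s  ~ {t:4.1f}-year event")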
Physical process models of channel reaches are generally well understood and will calculate the depth and area of inundation for given channel conditions and a specified flow rate, such as for use in floodplain mapping and flood insurance. Conversely, given the observed inundation area of a recent flood and the channel conditions, a model can calculate the flow rate. Applied to various potential channel configurations and flow rates, a reach model can contribute to selecting an optimum design for a modified channel. Various reach models are available as of 2015, either 1D models (flood levels measured in the channel) or 2D models (variable flood depths measured across the extent of a floodplain). HEC-RAS,[30] the Hydraulic Engineering Center model, is among the most popular software, if only because it is available free of charge. Other models such as TUFLOW[31] combine 1D and 2D components to derive flood depths across both river channels and the entire floodplain.
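As a heavily simplified stand-in for what such reach models compute (this is not how HEC-RAS or TUFLOW are implemented), the discharge of an assumed rectangular channel can be evaluated with Manning's equation:

    # Manning's equation (SI units): Q = (1/n) * A * R**(2/3) * sqrt(S), with A the
    # flow area, R the hydraulic radius (area / wetted perimeter), S the slope and
    # n the roughness coefficient. A real reach model solves for depth along a reach;
    # here we only evaluate Q for one assumed depth.
    def manning_discharge(width_m, depth_m, slope, roughness_n):
        area = width_m * depth_m
        wetted_perimeter = width_m + 2.0 * depth_m
        hydraulic_radius = area / wetted_perimeter
        return (1.0 / roughness_n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

    # Hypothetical channel: 20 m wide, 2 m deep, slope 0.001, n = 0.035 -> about 51 m3/s.
    print(f"{manning_discharge(20.0, 2.0, 0.001, 0.035):.0f} m3/s")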
Physical process models of complete drainage basins are even more complex. Although many processes are well understood at a point or for a small area, others are poorly understood at all scales, and process interactions under normal or extreme climatic conditions may be unknown. Basin models typically combine land-surface process components (to estimate how much rainfall or snowmelt reaches a channel) with a series of reach models. For example, a basin model can calculate the runoff hydrograph that might result from a 100-year storm, although the recurrence interval of a storm is rarely equal to that of the associated flood. Basin models are commonly used in flood forecasting and warning, as well as in analysis of the effects of land use change and climate change.
Anticipating floods before they occur allows for precautions to be taken and people to be warned[32] so that they can be prepared in advance for flooding conditions. For example, farmers can remove animals from low-lying areas and utility services can put in place emergency provisions to re-route services if needed. Emergency services can also make provisions to have enough resources available ahead of time to respond to emergencies as they occur. People can evacuate areas to be flooded.
In order to make the most accurate flood forecasts for waterways, it is best to have a long time-series of historical data that relates stream flows to measured past rainfall events.[33] Coupling this historical information with real-time knowledge about volumetric capacity in catchment areas, such as spare capacity in reservoirs, ground-water levels, and the degree of saturation of area aquifers is also needed in order to make the most accurate flood forecasts.
Radar estimates of rainfall and general weather forecasting techniques are also important components of good flood forecasting. In areas where good quality data is available, the intensity and height of a flood can be predicted with fairly good accuracy and plenty of lead time. The output of a flood forecast is typically a maximum expected water level and the likely time of its arrival at key locations along a waterway,[34] and it also may allow for the computation of the likely statistical return period of a flood. In many developed countries, urban areas at risk of flooding are protected against a 100-year flood – that is, a flood that has a 1% chance of being equalled or exceeded in any given year, and therefore roughly a 63% chance of occurring at least once in any 100-year period.
According to the U.S. National Weather Service (NWS) Northeast River Forecast Center (RFC) in Taunton, Massachusetts, a rule of thumb for flood forecasting in urban areas is that it takes at least 1 inch (25 mm) of rainfall in around an hour's time in order to start significant ponding of water on impermeable surfaces. Many NWS RFCs routinely issue Flash Flood Guidance and Headwater Guidance, which indicate the general amount of rainfall that would need to fall in a short period of time in order to cause flash flooding or flooding on larger water basins.[35]
In the United States, an integrated approach to real-time hydrologic computer modelling utilizes observed data from the U.S. Geological Survey (USGS),[36] various cooperative observing networks,[37] various automated weather sensors, the NOAA National Operational Hydrologic Remote Sensing Center (NOHRSC),[38] various hydroelectric companies, etc. combined with quantitative precipitation forecasts (QPF) of expected rainfall and/or snow melt to generate daily or as-needed hydrologic forecasts.[34] The NWS also cooperates with Environment Canada on hydrologic forecasts that affect both the US and Canada, like in the area of the Saint Lawrence Seaway.
The Global Flood Monitoring System, "GFMS," a computer tool which maps flood conditions worldwide, is available online. Users anywhere in the world can use GFMS to determine when floods may occur in their area. GFMS uses precipitation data from NASA's Earth observing satellites and the Global Precipitation Measurement satellite, "GPM." Rainfall data from GPM is combined with a land surface model that incorporates vegetation cover, soil type, and terrain to determine how much water is soaking into the ground, and how much water is flowing into streamflow.
Users can view statistics for rainfall, streamflow, water depth, and flooding every 3 hours, at each 12 kilometer gridpoint on a global map. Forecasts for these parameters are 5 days into the future. Users can zoom in to see inundation maps (areas estimated to be covered with water) in 1 kilometer resolution.[39][40]
Below is a list of the deadliest floods worldwide, showing events with death tolls at or above 100,000 individuals.
Flood myths (great, civilization-destroying floods) are widespread in many cultures.
Flood events in the form of divine retribution have also been described in religious texts. As a prime example, the Genesis flood narrative plays a prominent role in Judaism, Christianity and Islam.
en/2739.html.txt
ADDED
@@ -0,0 +1,113 @@
Steel is an alloy of iron with typically a few percent of carbon added to improve its strength and fracture resistance compared to pure iron. Many other elements may be present or added. Stainless steels, which are resistant to corrosion and oxidation, typically need an additional 11% chromium. Because of its high tensile strength and low cost, steel is used in buildings, infrastructure, tools, ships, trains, cars, machines, electrical appliances, and weapons. Iron is the base metal of steel, and it can take on two crystalline (allotropic) forms, body-centred cubic and face-centred cubic, depending on temperature. In the body-centred cubic arrangement, there is an iron atom at the centre and eight atoms at the vertices of each cubic unit cell; in the face-centred cubic arrangement, there is one atom at the centre of each of the six faces of the cubic unit cell and eight atoms at its vertices. It is the interaction of the allotropes of iron with the alloying elements, primarily carbon, that gives steel and cast iron their range of unique properties.
In pure iron, the crystal structure has relatively little resistance to the iron atoms slipping past one another, and so pure iron is quite ductile, or soft and easily formed. In steel, small amounts of carbon, other elements, and inclusions within the iron act as hardening agents that prevent the movement of dislocations.
The carbon in typical steel alloys may contribute up to 2.14% of its weight. Varying the amount of carbon and many other alloying elements, as well as controlling their chemical and physical makeup in the final steel (either as solute elements, or as precipitated phases), slows the movement of those dislocations that make pure iron ductile, and thus controls and enhances its qualities. These qualities include the hardness, quenching behaviour, need for annealing, tempering behaviour, yield strength, and tensile strength of the resulting steel. The increase in steel's strength compared to pure iron is possible only by reducing iron's ductility.
Steel was produced in bloomery furnaces for thousands of years, but its large-scale, industrial use began only after more efficient production methods were devised in the 17th century, with the introduction of the blast furnace and production of crucible steel. This was followed by the open-hearth furnace and then the Bessemer process in England in the mid-19th century. With the invention of the Bessemer process, a new era of mass-produced steel began. Mild steel replaced wrought iron.
Further refinements in the process, such as basic oxygen steelmaking (BOS), largely replaced earlier methods by further lowering the cost of production and increasing the quality of the final product. Today, steel is one of the most common manmade materials in the world, with more than 1.6 billion tons produced annually. Modern steel is generally identified by various grades defined by assorted standards organisations.
The noun steel originates from the Proto-Germanic adjective stahliją or stakhlijan (made of steel), which is related to stahlaz or stahliją (standing firm).[1]
The carbon content of steel is between 0.002% and 2.14% by weight for plain carbon steel (iron–carbon alloys).[citation needed] Too little carbon content leaves (pure) iron quite soft, ductile, and weak. Carbon contents higher than those of steel make a brittle alloy commonly called pig iron. Alloy steel is steel to which other alloying elements have been intentionally added to modify the characteristics of steel. Common alloying elements include: manganese, nickel, chromium, molybdenum, boron, titanium, vanadium, tungsten, cobalt, and niobium.[2] Unlike steel, cast iron undergoes the eutectic reaction. Additional elements, most frequently considered undesirable, are also important in steel: phosphorus, sulfur, silicon, and traces of oxygen, nitrogen, and copper.
Plain carbon-iron alloys with a higher than 2.1% carbon content are known as cast iron. With modern steelmaking techniques such as powder metal forming, it is possible to make very high-carbon (and other alloy material) steels, but such are not common. Cast iron is not malleable even when hot, but it can be formed by casting as it has a lower melting point than steel and good castability properties.[2] Certain compositions of cast iron, while retaining the economies of melting and casting, can be heat treated after casting to make malleable iron or ductile iron objects. Steel is distinguishable from wrought iron (now largely obsolete), which may contain a small amount of carbon but large amounts of slag.
Iron is commonly found in the Earth's crust in the form of an ore, usually an iron oxide, such as magnetite or hematite. Iron is extracted from iron ore by removing the oxygen through its combination with a preferred chemical partner such as carbon which is then lost to the atmosphere as carbon dioxide. This process, known as smelting, was first applied to metals with lower melting points, such as tin, which melts at about 250 °C (482 °F), and copper, which melts at about 1,100 °C (2,010 °F), and the combination, bronze, which has a melting point lower than 1,083 °C (1,981 °F). In comparison, cast iron melts at about 1,375 °C (2,507 °F).[3] Small quantities of iron were smelted in ancient times, in the solid state, by heating the ore in a charcoal fire and then welding the clumps together with a hammer and in the process squeezing out the impurities. With care, the carbon content could be controlled by moving it around in the fire. Unlike copper and tin, liquid or solid iron dissolves carbon quite readily.
All of these temperatures could be reached with ancient methods used since the Bronze Age. Since the oxidation rate of iron increases rapidly beyond 800 °C (1,470 °F), it is important that smelting take place in a low-oxygen environment. Smelting, using carbon to reduce iron oxides, results in an alloy (pig iron) that retains too much carbon to be called steel.[3] The excess carbon and other impurities are removed in a subsequent step.
Other materials are often added to the iron/carbon mixture to produce steel with desired properties. Nickel and manganese in steel add to its tensile strength and make the austenite form of the iron-carbon solution more stable, chromium increases hardness and melting temperature, and vanadium also increases hardness while making it less prone to metal fatigue.[4]
To inhibit corrosion, at least 11% chromium is added to steel so that a hard oxide forms on the metal surface; this is known as stainless steel. Tungsten slows the formation of cementite, keeping carbon in the iron matrix and allowing martensite to preferentially form at slower quench rates, resulting in high speed steel. On the other hand, sulfur, nitrogen, and phosphorus are considered contaminants that make steel more brittle and are removed from the steel melt during processing.[4]
The density of steel varies based on the alloying constituents but usually ranges between 7,750 and 8,050 kg/m3 (484 and 503 lb/cu ft), or 7.75 and 8.05 g/cm3 (4.48 and 4.65 oz/cu in).[5]
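A small worked example of what that density range implies, for an arbitrarily chosen plate size:

    # Mass of a 2 m x 1 m x 10 mm steel plate at both ends of the quoted density range.
    def plate_mass_kg(length_m, width_m, thickness_m, density_kg_m3):
        return length_m * width_m * thickness_m * density_kg_m3

    for rho in (7750, 8050):                       # kg/m^3, from the range above
        print(rho, "kg/m^3 ->", round(plate_mass_kg(2.0, 1.0, 0.010, rho), 1), "kg")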
Even in a narrow range of concentrations of mixtures of carbon and iron that make a steel, a number of different metallurgical structures, with very different properties can form. Understanding such properties is essential to making quality steel. At room temperature, the most stable form of pure iron is the body-centered cubic (BCC) structure called alpha iron or α-iron. It is a fairly soft metal that can dissolve only a small concentration of carbon, no more than 0.005% at 0 °C (32 °F) and 0.021 wt% at 723 °C (1,333 °F). The inclusion of carbon in alpha iron is called ferrite. At 910 °C, pure iron transforms into a face-centered cubic (FCC) structure, called gamma iron or γ-iron. The inclusion of carbon in gamma iron is called austenite. The more open FCC structure of austenite can dissolve considerably more carbon, as much as 2.1%[6] (38 times that of ferrite) carbon at 1,148 °C (2,098 °F), which reflects the upper carbon content of steel, beyond which is cast iron.[7] When carbon moves out of solution with iron, it forms a very hard, but brittle material called cementite (Fe3C).
When steels with exactly 0.8% carbon (known as eutectoid steels) are cooled, the austenitic phase (FCC) of the mixture attempts to revert to the ferrite phase (BCC). The carbon no longer fits within the FCC austenite structure, resulting in an excess of carbon. One way for carbon to leave the austenite is for it to precipitate out of solution as cementite, leaving behind a surrounding phase of BCC iron called ferrite with a small percentage of carbon in solution. The two, ferrite and cementite, precipitate simultaneously, producing a layered structure called pearlite, named for its resemblance to mother of pearl. In a hypereutectoid composition (greater than 0.8% carbon), the carbon will first precipitate out as large inclusions of cementite at the austenite grain boundaries until the percentage of carbon in the grains has decreased to the eutectoid composition (0.8% carbon), at which point the pearlite structure forms. For steels that have less than 0.8% carbon (hypoeutectoid), ferrite will first form within the grains until the remaining composition rises to 0.8% carbon, at which point the pearlite structure will form. No large inclusions of cementite will form at the boundaries in hypoeutectoid steel.[8] The above assumes that the cooling process is very slow, allowing enough time for the carbon to migrate.
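The relative amounts of ferrite and cementite in the resulting pearlite can be estimated with the standard lever rule. The sketch below assumes common textbook phase compositions (about 0.022 wt% carbon in ferrite and 6.67 wt% in cementite); these values are assumptions here, not figures taken from the text above:

    # Lever-rule estimate of the weight fractions of ferrite and cementite formed
    # when austenite of a given carbon content transforms on slow cooling.
    C_FERRITE = 0.022     # wt% C in ferrite near the eutectoid (assumed textbook value)
    C_CEMENTITE = 6.67    # wt% C in cementite, Fe3C (assumed textbook value)

    def phase_fractions(overall_wt_pct_c):
        cementite = (overall_wt_pct_c - C_FERRITE) / (C_CEMENTITE - C_FERRITE)
        return {"ferrite": round(1.0 - cementite, 3), "cementite": round(cementite, 3)}

    # Eutectoid steel at about 0.8 wt% C: roughly 88% ferrite and 12% cementite by weight.
    print(phase_fractions(0.8))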
As the rate of cooling is increased the carbon will have less time to migrate to form carbide at the grain boundaries but will have increasingly large amounts of pearlite of a finer and finer structure within the grains; hence the carbide is more widely dispersed and acts to prevent slip of defects within those grains, resulting in hardening of the steel. At the very high cooling rates produced by quenching, the carbon has no time to migrate but is locked within the face-centered austenite and forms martensite. Martensite is a highly strained and stressed, supersaturated form of carbon and iron and is exceedingly hard but brittle. Depending on the carbon content, the martensitic phase takes different forms. Below 0.2% carbon, it takes on a ferrite BCC crystal form, but at higher carbon content it takes a body-centered tetragonal (BCT) structure. There is no thermal activation energy for the transformation from austenite to martensite.[clarification needed] Moreover, there is no compositional change so the atoms generally retain their same neighbors.[9]
Martensite has a lower density (it expands during the cooling) than does austenite, so that the transformation between them results in a change of volume. In this case, expansion occurs. Internal stresses from this expansion generally take the form of compression on the crystals of martensite and tension on the remaining ferrite, with a fair amount of shear on both constituents. If quenching is done improperly, the internal stresses can cause a part to shatter as it cools. At the very least, they cause internal work hardening and other microscopic imperfections. It is common for quench cracks to form when steel is water quenched, although they may not always be visible.[10]
There are many types of heat treating processes available to steel. The most common are annealing, quenching, and tempering. Heat treatment is effective on compositions above the eutectoid composition (hypereutectoid) of 0.8% carbon. Hypoeutectoid steel does not benefit from heat treatment.
Annealing is the process of heating the steel to a sufficiently high temperature to relieve local internal stresses. It does not create a general softening of the product but only locally relieves strains and stresses locked up within the material. Annealing goes through three phases: recovery, recrystallization, and grain growth. The temperature required to anneal a particular steel depends on the type of annealing to be achieved and the alloying constituents.[11]
Quenching involves heating the steel to create the austenite phase then quenching it in water or oil. This rapid cooling results in a hard but brittle martensitic structure.[9] The steel is then tempered, which is just a specialized type of annealing, to reduce brittleness. In this application the annealing (tempering) process transforms some of the martensite into cementite, or spheroidite and hence it reduces the internal stresses and defects. The result is a more ductile and fracture-resistant steel.[12]
When iron is smelted from its ore, it contains more carbon than is desirable. To become steel, it must be reprocessed to reduce the carbon to the correct amount, at which point other elements can be added. In the past, steel facilities would cast the raw steel product into ingots which would be stored until use in further refinement processes that resulted in the finished product. In modern facilities, the initial product is close to the final composition and is continuously cast into long slabs, cut and shaped into bars and extrusions and heat-treated to produce a final product. Today, approximately 96% of steel is continuously cast, while only 4% is produced as ingots.[13]
The ingots are then heated in a soaking pit and hot rolled into slabs, billets, or blooms. Slabs are hot or cold rolled into sheet metal or plates. Billets are hot or cold rolled into bars, rods, and wire. Blooms are hot or cold rolled into structural steel, such as I-beams and rails. In modern steel mills these processes often occur in one assembly line, with ore coming in and finished steel products coming out.[14] Sometimes after a steel's final rolling, it is heat treated for strength; however, this is relatively rare.[15]
Steel was known in antiquity and was produced in bloomeries and crucibles.[16][17]
The earliest known production of steel is seen in pieces of ironware excavated from an archaeological site in Anatolia (Kaman-Kalehöyük) and are nearly 4,000 years old, dating from 1800 BC.[18][19] Horace identifies steel weapons such as the falcata in the Iberian Peninsula, while Noric steel was used by the Roman military.[20]
The reputation of Seric iron of South India (wootz steel) grew considerably in the rest of the world.[17] Metal production sites in Sri Lanka employed wind furnaces driven by the monsoon winds, capable of producing high-carbon steel. Large-scale Wootz steel production in Tamilakam using crucibles and carbon sources such as the plant Avāram occurred by the sixth century BC, the pioneering precursor to modern steel production and metallurgy.[16][17]
The Chinese of the Warring States period (403–221 BC) had quench-hardened steel,[21] while Chinese of the Han dynasty (202 BC – 220 AD) created steel by melting together wrought iron with cast iron, gaining an ultimate product of a carbon-intermediate steel by the 1st century AD.[22][23]
There is evidence that carbon steel was made in Western Tanzania by the ancestors of the Haya people as early as 2,000 years ago by a complex process of "pre-heating" allowing temperatures inside a furnace to reach 1300 to 1400 °C.[24][25][26][27][28][29]
Evidence of the earliest production of high carbon steel in India are found in Kodumanal in Tamil Nadu, the Golconda area in Andhra Pradesh and Karnataka, and in the Samanalawewa areas of Sri Lanka.[30] This came to be known as Wootz steel, produced in South India by about sixth century BC and exported globally.[31][32] The steel technology existed prior to 326 BC in the region as they are mentioned in literature of Sangam Tamil, Arabic and Latin as the finest steel in the world exported to the Romans, Egyptian, Chinese and Arab worlds at that time – what they called Seric Iron.[33] A 200 BC Tamil trade guild in Tissamaharama, in the South East of Sri Lanka, brought with them some of the oldest iron and steel artifacts and production processes to the island from the classical period.[34][35][36] The Chinese and locals in Anuradhapura, Sri Lanka had also adopted the production methods of creating Wootz steel from the Chera Dynasty Tamils of South India by the 5th century AD.[37][38] In Sri Lanka, this early steel-making method employed a unique wind furnace, driven by the monsoon winds, capable of producing high-carbon steel.[39][40] Since the technology was acquired from the Tamilians from South India,[citation needed] the origin of steel technology in India can be conservatively estimated at 400–500 BC.[31][40]
The manufacture of what came to be called Wootz, or Damascus steel, famous for its durability and ability to hold an edge, may have been taken by the Arabs from Persia, who took it from India. It was originally created from a number of different materials including various trace elements, apparently ultimately from the writings of Zosimos of Panopolis. In 327 BC, Alexander the Great was rewarded by the defeated King Porus, not with gold or silver but with 30 pounds of steel.[41] Recent studies have suggested that carbon nanotubes were included in its structure, which might explain some of its legendary qualities, though given the technology of that time, such qualities were produced by chance rather than by design.[42] Natural wind was used where the soil containing iron was heated by the use of wood. The ancient Sinhalese managed to extract a ton of steel for every 2 tons of soil,[39] a remarkable feat at the time. One such furnace was found in Samanalawewa and archaeologists were able to produce steel as the ancients did.[39][43]
Crucible steel, formed by slowly heating and cooling pure iron and carbon (typically in the form of charcoal) in a crucible, was produced in Merv by the 9th to 10th century AD.[32] In the 11th century, there is evidence of the production of steel in Song China using two techniques: a "berganesque" method that produced inferior, inhomogeneous steel, and a precursor to the modern Bessemer process that used partial decarbonization via repeated forging under a cold blast.[44]
Since the 17th century, the first step in European steel production has been the smelting of iron ore into pig iron in a blast furnace.[45] Originally employing charcoal, modern methods use coke, which has proven more economical.[46][47][48]
In these processes pig iron was refined (fined) in a finery forge to produce bar iron, which was then used in steel-making.[45]
The production of steel by the cementation process was described in a treatise published in Prague in 1574 and was in use in Nuremberg from 1601. A similar process for case hardening armor and files was described in a book published in Naples in 1589. The process was introduced to England in about 1614 and used to produce such steel by Sir Basil Brooke at Coalbrookdale during the 1610s.[49]
The raw material for this process were bars of iron. During the 17th century it was realized that the best steel came from oregrounds iron of a region north of Stockholm, Sweden. This was still the usual raw material source in the 19th century, almost as long as the process was used.[50][51]
Crucible steel is steel that has been melted in a crucible rather than having been forged, with the result that it is more homogeneous. Most previous furnaces could not reach high enough temperatures to melt the steel. The early modern crucible steel industry resulted from the invention of Benjamin Huntsman in the 1740s. Blister steel (made as above) was melted in a crucible or in a furnace, and cast (usually) into ingots.[51][52]
The modern era in steelmaking began with the introduction of Henry Bessemer's Bessemer process in 1855, the raw material for which was pig iron.[53] His method let him produce steel in large quantities cheaply, thus mild steel came to be used for most purposes for which wrought iron was formerly used.[54] The Gilchrist-Thomas process (or basic Bessemer process) was an improvement to the Bessemer process, made by lining the converter with a basic material to remove phosphorus.
Another 19th-century steelmaking process was the Siemens-Martin process, which complemented the Bessemer process.[51] It consisted of co-melting bar iron (or steel scrap) with pig iron.
These methods of steel production were rendered obsolete by the Linz-Donawitz process of basic oxygen steelmaking (BOS), developed in 1952,[55] and other oxygen steel making methods. Basic oxygen steelmaking is superior to previous steelmaking methods because the oxygen pumped into the furnace limited impurities, primarily nitrogen, that previously had entered from the air used,[56] and because, with respect to the open-hearth process, the same quantity of steel from a BOS process is manufactured in one-twelfth the time.[55] Today, electric arc furnaces (EAF) are a common method of reprocessing scrap metal to create new steel. They can also be used for converting pig iron to steel, but they use a lot of electrical energy (about 440 kWh per metric ton), and are thus generally only economical when there is a plentiful supply of cheap electricity.[57]
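A back-of-the-envelope illustration of why cheap electricity matters so much for EAF economics, using the roughly 440 kWh per tonne figure above (the electricity prices are hypothetical):

    # Electricity cost per tonne of EAF steel for a given power price.
    ENERGY_KWH_PER_TONNE = 440          # from the figure quoted above

    def electricity_cost_per_tonne(price_per_kwh):
        return ENERGY_KWH_PER_TONNE * price_per_kwh

    for price in (0.03, 0.10):          # $/kWh, illustrative cheap vs. expensive supply
        print(f"${price:.2f}/kWh -> ${electricity_cost_per_tonne(price):.0f} per tonne")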
The steel industry is often considered an indicator of economic progress, because of the critical role played by steel in infrastructural and overall economic development.[58] In 1980, there were more than 500,000 U.S. steelworkers. By 2000, the number of steelworkers fell to 224,000.[59]
The economic boom in China and India caused a massive increase in the demand for steel. Between 2000 and 2005, world steel demand increased by 6%. Since 2000, several Indian[60] and Chinese steel firms have risen to prominence,[according to whom?] such as Tata Steel (which bought Corus Group in 2007), Baosteel Group and Shagang Group. As of 2017[update], though, ArcelorMittal is the world's largest steel producer.[61] In 2005, the British Geological Survey stated China was the top steel producer with about one-third of the world share; Japan, Russia, and the US followed respectively.[62]
In 2008, steel began trading as a commodity on the London Metal Exchange. At the end of 2008, the steel industry faced a sharp downturn that led to many cut-backs.[63]
Steel is one of the world's most-recycled materials, with a recycling rate of over 60% globally;[64] in the United States alone, over 82,000,000 metric tons (81,000,000 long tons; 90,000,000 short tons) were recycled in the year 2008, for an overall recycling rate of 83%.[65]
As more steel is produced than is scrapped, the amount of recycled raw materials is about 40% of the total of steel produced - in 2016, 1,628,000,000 tonnes (1.602×109 long tons; 1.795×109 short tons) of crude steel was produced globally, with 630,000,000 tonnes (620,000,000 long tons; 690,000,000 short tons) recycled.[66]
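That share can be checked directly from the two tonnage figures quoted above (plain arithmetic, no additional data assumed):

    # Recycled share of 2016 crude steel production, from the figures in the text.
    produced_tonnes = 1_628_000_000
    recycled_tonnes = 630_000_000
    print(f"{recycled_tonnes / produced_tonnes:.1%}")   # about 38.7%, i.e. roughly 40%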
Modern steels are made with varying combinations of alloy metals to fulfill many purposes.[4] Carbon steel, composed simply of iron and carbon, accounts for 90% of steel production.[2] Low alloy steel is alloyed with other elements, usually molybdenum, manganese, chromium, or nickel, in amounts of up to 10% by weight to improve the hardenability of thick sections.[2] High strength low alloy steel has small additions (usually < 2% by weight) of other elements, typically 1.5% manganese, to provide additional strength for a modest price increase.[67]
Recent Corporate Average Fuel Economy (CAFE) regulations have given rise to a new variety of steel known as Advanced High Strength Steel (AHSS). This material is both strong and ductile so that vehicle structures can maintain their current safety levels while using less material. There are several commercially available grades of AHSS, such as dual-phase steel, which is heat-treated to contain both a ferritic and martensitic microstructure to produce formable, high strength steel.[68] Transformation Induced Plasticity (TRIP) steel involves special alloying and heat treatments to stabilize amounts of austenite at room temperature in normally austenite-free low-alloy ferritic steels. By applying strain, the austenite undergoes a phase transition to martensite without the addition of heat.[69] Twinning Induced Plasticity (TWIP) steel uses a specific type of strain to increase the effectiveness of work hardening on the alloy.[70]
Carbon steels are often galvanized, through hot-dip galvanizing or electroplating with zinc, for protection against rust.[71]
Stainless steels contain a minimum of 11% chromium, often combined with nickel, to resist corrosion. Some stainless steels, such as the ferritic stainless steels are magnetic, while others, such as the austenitic, are nonmagnetic.[72] Corrosion-resistant steels are abbreviated as CRES.
Some more modern steels include tool steels, which are alloyed with large amounts of tungsten and cobalt or other elements to maximize solution hardening. This also allows the use of precipitation hardening and improves the alloy's temperature resistance.[2] Tool steel is generally used in axes, drills, and other devices that need a sharp, long-lasting cutting edge. Other special-purpose alloys include weathering steels such as Cor-ten, which weather by acquiring a stable, rusted surface, and so can be used un-painted.[73] Maraging steel is alloyed with nickel and other elements, but unlike most steel contains little carbon (0.01%). This creates a very strong but still malleable steel.[74]
Eglin steel uses a combination of over a dozen different elements in varying amounts to create a relatively low-cost steel for use in bunker buster weapons. Hadfield steel (after Sir Robert Hadfield) or manganese steel contains 12–14% manganese which when abraded strain-hardens to form a very hard skin which resists wearing. Examples include tank tracks, bulldozer blade edges and cutting blades on the jaws of life.[75]
Most of the more commonly used steel alloys are categorized into various grades by standards organizations. For example, the Society of Automotive Engineers has a series of grades defining many types of steel.[76] The American Society for Testing and Materials has a separate set of standards, which define alloys such as A36 steel, the most commonly used structural steel in the United States.[77] The JIS (Japanese Industrial Standards) also define a series of steel grades that are used extensively in Japan as well as in developing countries.
Iron and steel are used widely in the construction of roads, railways, other infrastructure, appliances, and buildings. Most large modern structures, such as stadiums, skyscrapers, bridges, and airports, are supported by a steel skeleton. Even those with a concrete structure employ steel for reinforcing. Steel also sees widespread use in major appliances and cars; despite the growth in the use of aluminium, steel is still the main material for car bodies. Steel is used in a variety of other construction products, such as bolts, nails and screws, as well as in household products and cooking utensils.[78]
Other common applications include shipbuilding, pipelines, mining, offshore construction, aerospace, white goods (e.g. washing machines), heavy equipment such as bulldozers, office furniture, steel wool, tool and armour in the form of personal vests or vehicle armour (better known as rolled homogeneous armour in this role).
Before the introduction of the Bessemer process and other modern production techniques, steel was expensive and was only used where no cheaper alternative existed, particularly for the cutting edge of knives, razors, swords, and other items where a hard, sharp edge was needed. It was also used for springs, including those used in clocks and watches.[51]
With the advent of faster and cheaper production methods, steel has become easier to obtain and much less expensive. It has replaced wrought iron for a multitude of purposes. However, the availability of plastics in the latter part of the 20th century allowed these materials to replace steel in some applications due to their lower fabrication cost and weight.[79] Carbon fiber is replacing steel in some cost-insensitive applications such as sports equipment and high-end automobiles.
Steel manufactured after World War II became contaminated with radionuclides by nuclear weapons testing. Low-background steel, steel manufactured prior to 1945, is used for certain radiation-sensitive applications such as Geiger counters and radiation shielding.
en/274.html.txt
ADDED
@@ -0,0 +1,124 @@
An antibiotic is a type of antimicrobial substance active against bacteria. It is the most important type of antibacterial agent for fighting bacterial infections, and antibiotic medications are widely used in the treatment and prevention of such infections.[1][2] They may either kill or inhibit the growth of bacteria. A limited number of antibiotics also possess antiprotozoal activity.[3][4] Antibiotics are not effective against viruses such as the common cold or influenza;[5] drugs which inhibit viruses are termed antiviral drugs or antivirals rather than antibiotics.
Sometimes, the term antibiotic—literally "opposing life", from the Greek roots ἀντι anti, "against" and βίος bios, "life"—is broadly used to refer to any substance used against microbes, but in the usual medical usage, antibiotics (such as penicillin) are those produced naturally (by one microorganism fighting another), whereas nonantibiotic antibacterials (such as sulfonamides and antiseptics) are fully synthetic. However, both classes have the same goal of killing or preventing the growth of microorganisms, and both are included in antimicrobial chemotherapy. "Antibacterials" include antiseptic drugs, antibacterial soaps, and chemical disinfectants, whereas antibiotics are an important class of antibacterials used more specifically in medicine[6] and sometimes in livestock feed.
Antibiotics have been used since ancient times. Many civilizations used topical application of mouldy bread, with many references to its beneficial effects arising from ancient Egypt, Nubia, China, Serbia, Greece, and Rome. The first person to directly document the use of moulds to treat infections was John Parkinson (1567–1650). Antibiotics revolutionized medicine in the 20th century. Alexander Fleming (1881–1955) discovered modern day penicillin in 1928, the widespread use of which proved significantly beneficial during wartime. However, the effectiveness and easy access to antibiotics have also led to their overuse[7] and some bacteria have evolved resistance to them.[1][8][9][10] The World Health Organization has classified antimicrobial resistance as a widespread "serious threat [that] is no longer a prediction for the future, it is happening right now in every region of the world and has the potential to affect anyone, of any age, in any country".[11]
Antibiotics are used to treat or prevent bacterial infections,[12] and sometimes protozoan infections. (Metronidazole is effective against a number of parasitic diseases). When an infection is suspected of being responsible for an illness but the responsible pathogen has not been identified, an empiric therapy is adopted.[13] This involves the administration of a broad-spectrum antibiotic based on the signs and symptoms presented and is initiated pending laboratory results that can take several days.[12][13]
When the responsible pathogenic microorganism is already known or has been identified, definitive therapy can be started. This will usually involve the use of a narrow-spectrum antibiotic. The choice of antibiotic given will also be based on its cost. Identification is critically important as it can reduce the cost and toxicity of the antibiotic therapy and also reduce the possibility of the emergence of antimicrobial resistance.[13] To avoid surgery, antibiotics may be given for non-complicated acute appendicitis.[14]
Antibiotics may be given as a preventive measure and this is usually limited to at-risk populations such as those with a weakened immune system (particularly in HIV cases to prevent pneumonia), those taking immunosuppressive drugs, cancer patients, and those having surgery.[12] Their use in surgical procedures is to help prevent infection of incisions. They have an important role in dental antibiotic prophylaxis where their use may prevent bacteremia and consequent infective endocarditis. Antibiotics are also used to prevent infection in cases of neutropenia particularly cancer-related.[15][16]
There are many different routes of administration for antibiotic treatment. Antibiotics are usually taken by mouth. In more severe cases, particularly deep-seated systemic infections, antibiotics can be given intravenously or by injection.[1][13] Where the site of infection is easily accessed, antibiotics may be given topically in the form of eye drops onto the conjunctiva for conjunctivitis or ear drops for ear infections and acute cases of swimmer's ear. Topical use is also one of the treatment options for some skin conditions including acne and cellulitis.[17] Advantages of topical application include achieving a high and sustained concentration of antibiotic at the site of infection, reducing the potential for systemic absorption and toxicity, and reducing the total volume of antibiotic required, thereby also lowering the risk of antibiotic misuse.[18] Topical antibiotics applied over certain types of surgical wounds have been reported to reduce the risk of surgical site infections.[19] However, there are certain general causes for concern with topical administration of antibiotics. Some systemic absorption of the antibiotic may occur; the quantity of antibiotic applied is difficult to dose accurately, and there is also the possibility of local hypersensitivity reactions or contact dermatitis occurring.[18] It is recommended to administer antibiotics as soon as possible, especially in life-threatening infections. Many emergency departments stock antibiotics for this purpose.[20]
Antibiotic consumption varies widely between countries. The WHO report on surveillance of antibiotic consumption, published in 2018, analysed 2015 data from 65 countries. As measured in defined daily doses per 1,000 inhabitants per day, Mongolia had the highest consumption at a rate of 64.4 and Burundi had the lowest at 4.4. Amoxicillin and amoxicillin/clavulanic acid were the most frequently consumed.[21]
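For clarity about the unit used in that report, the following sketch computes a rate in defined daily doses (DDD) per 1,000 inhabitants per day from a country's total annual consumption; the consumption and population figures are hypothetical and are not taken from the WHO data.

    def ddd_per_1000_per_day(total_ddd: float, population: int, days: int = 365) -> float:
        """Defined daily doses consumed per 1,000 inhabitants per day."""
        return total_ddd / population / days * 1000

    # Hypothetical example: 70 million DDDs consumed in one year by 3 million people.
    print(round(ddd_per_1000_per_day(70_000_000, 3_000_000), 1))  # -> 63.9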
Antibiotics are screened for any negative effects before their approval for clinical use, and are usually considered safe and well tolerated. However, some antibiotics have been associated with a wide extent of adverse side effects ranging from mild to very severe depending on the type of antibiotic used, the microbes targeted, and the individual patient.[22][23] Side effects may reflect the pharmacological or toxicological properties of the antibiotic or may involve hypersensitivity or allergic reactions.[4] Adverse effects range from fever and nausea to major allergic reactions, including photodermatitis and anaphylaxis.[24] Safety profiles of newer drugs are often not as well established as for those that have a long history of use.[22]
Common side-effects include diarrhea, resulting from disruption of the species composition in the intestinal flora, which can lead, for example, to overgrowth of pathogenic bacteria such as Clostridium difficile.[25] Antibacterials can also affect the vaginal flora, and may lead to overgrowth of yeast species of the genus Candida in the vulvo-vaginal area.[26] Additional side effects can result from interaction with other drugs, such as the possibility of tendon damage from the administration of a quinolone antibiotic with a systemic corticosteroid.[27]
Some antibiotics may also damage the mitochondrion, a bacteria-derived organelle found in eukaryotic, including human, cells. Mitochondrial damage causes oxidative stress in cells and has been suggested as a mechanism for side effects from fluoroquinolones.[28] They are also known to affect chloroplasts.[29]
Exposure to antibiotics early in life is associated with increased body mass in humans and mouse models.[30] Early life is a critical period for the establishment of the intestinal microbiota and for metabolic development.[31] Mice exposed to subtherapeutic antibiotic treatment – with either penicillin, vancomycin, or chlortetracycline – had altered composition of the gut microbiota as well as altered metabolic capabilities.[32] One study has reported that mice given low-dose penicillin (1 μg/g body weight) around birth and throughout the weaning process had an increased body mass and fat mass, accelerated growth, and increased hepatic expression of genes involved in adipogenesis, compared to control mice.[33] In addition, penicillin in combination with a high-fat diet increased fasting insulin levels in mice.[33] However, it is unclear whether or not antibiotics cause obesity in humans. Studies have found a correlation between early exposure to antibiotics (<6 months) and increased body mass (at 10 and 20 months).[34] Another study found that the type of antibiotic exposure was also significant, with the highest risk of being overweight in those given macrolides compared to penicillin and cephalosporin.[35] There is therefore a correlation between antibiotic exposure in early life and obesity in humans, but whether or not there is a causal relationship remains unclear. Although there is a correlation between antibiotic use in early life and obesity, the effect of antibiotics on obesity in humans needs to be weighed against the beneficial effects of clinically indicated treatment with antibiotics in infancy.[31]
There are few well-controlled studies on whether antibiotic use increases the risk of oral contraceptive failure.[36] The majority of studies indicate antibiotics do not interfere with birth control pills,[37] such as clinical studies that suggest the failure rate of contraceptive pills caused by antibiotics is very low (about 1%).[38] Situations that may increase the risk of oral contraceptive failure include non-compliance (missed pills), vomiting, and diarrhea, as well as gastrointestinal disorders or interpatient variability in oral contraceptive absorption that affect ethinylestradiol serum levels in the blood.[36] Women with menstrual irregularities may be at higher risk of failure and should be advised to use backup contraception during antibiotic treatment and for one week after its completion. If patient-specific risk factors for reduced oral contraceptive efficacy are suspected, backup contraception is recommended.[36]
In cases where antibiotics have been suggested to affect the efficiency of birth control pills, such as for the broad-spectrum antibiotic rifampicin, these cases may be due to an increase in the activities of hepatic enzymes causing increased breakdown of the pill's active ingredients.[37] Effects on the intestinal flora, which might result in reduced absorption of estrogens in the colon, have also been suggested, but such suggestions have been inconclusive and controversial.[39][40] Clinicians have recommended that extra contraceptive measures be applied during therapies using antibiotics that are suspected to interact with oral contraceptives.[37] More studies on the possible interactions between antibiotics and birth control pills (oral contraceptives) are required, as well as careful assessment of patient-specific risk factors for potential oral contraceptive pill failure, prior to dismissing the need for backup contraception.[36]
Interactions between alcohol and certain antibiotics may occur and may cause side effects and decreased effectiveness of antibiotic therapy.[41][42] While moderate alcohol consumption is unlikely to interfere with many common antibiotics, there are specific types of antibiotics, with which alcohol consumption may cause serious side effects.[43] Therefore, potential risks of side effects and effectiveness depend on the type of antibiotic administered.[44]
Antibiotics such as metronidazole, tinidazole, cephamandole, latamoxef, cefoperazone, cefmenoxime, and furazolidone, cause a disulfiram-like chemical reaction with alcohol by inhibiting its breakdown by acetaldehyde dehydrogenase, which may result in vomiting, nausea, and shortness of breath.[43] In addition, the efficacy of doxycycline and erythromycin succinate may be reduced by alcohol consumption.[45] Other effects of alcohol on antibiotic activity include altered activity of the liver enzymes that break down the antibiotic compound.[46]
The successful outcome of antimicrobial therapy with antibacterial compounds depends on several factors. These include host defense mechanisms, the location of infection, and the pharmacokinetic and pharmacodynamic properties of the antibacterial.[47] The bactericidal activity of antibacterials may depend on the bacterial growth phase, and it often requires ongoing metabolic activity and division of bacterial cells.[48] These findings are based on laboratory studies, but bactericidal antibacterials have also been shown to eliminate bacterial infection in clinical settings.[47][49] Since the activity of antibacterials frequently depends on their concentration,[50] in vitro characterization of antibacterial activity commonly includes the determination of the minimum inhibitory concentration and minimum bactericidal concentration of an antibacterial.[47][51]
To predict clinical outcome, the antimicrobial activity of an antibacterial is usually combined with its pharmacokinetic profile, and several pharmacological parameters are used as markers of drug efficacy.[52]
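As a minimal sketch of how a minimum inhibitory concentration (MIC) is read from an in vitro dilution series, the MIC being the lowest tested concentration at which no visible growth occurs, consider the following; the concentrations and growth results are hypothetical.

    def minimum_inhibitory_concentration(results):
        """results: iterable of (concentration in mg/L, visible growth observed) pairs.
        Returns the lowest concentration with no visible growth, or None if growth
        occurred at every concentration tested."""
        inhibitory = [conc for conc, growth in results if not growth]
        return min(inhibitory) if inhibitory else None

    # Hypothetical two-fold dilution series for a single isolate.
    series = [(0.25, True), (0.5, True), (1.0, True), (2.0, False), (4.0, False)]
    print(minimum_inhibitory_concentration(series))  # -> 2.0 (mg/L)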
In important infectious diseases, including tuberculosis, combination therapy (i.e., the concurrent application of two or more antibiotics) has been used to delay or prevent the emergence of resistance. In acute bacterial infections, antibiotics as part of combination therapy are prescribed for their synergistic effects to improve treatment outcome, as the combined effect of both antibiotics is better than their individual effect.[53][54] Methicillin-resistant Staphylococcus aureus infections may be treated with a combination therapy of fusidic acid and rifampicin.[53] Antibiotics used in combination may also be antagonistic, and the combined effects of the two antibiotics may be less than if the individual antibiotic was given as part of a monotherapy.[53] For example, chloramphenicol and tetracyclines are antagonists to penicillins and aminoglycosides. However, this can vary depending on the species of bacteria.[55] In general, combinations of a bacteriostatic antibiotic and bactericidal antibiotic are antagonistic.[53][54]
Antibiotics are commonly classified based on their mechanism of action, chemical structure, or spectrum of activity. Most target bacterial functions or growth processes.[56] Those that target the bacterial cell wall (penicillins and cephalosporins) or the cell membrane (polymyxins), or interfere with essential bacterial enzymes (rifamycins, lipiarmycins, quinolones, and sulfonamides) have bactericidal activities. Protein synthesis inhibitors (macrolides, lincosamides, and tetracyclines) are usually bacteriostatic (with the exception of bactericidal aminoglycosides).[57] Further categorization is based on their target specificity. "Narrow-spectrum" antibiotics target specific types of bacteria, such as gram-negative or gram-positive, whereas broad-spectrum antibiotics affect a wide range of bacteria. Following a 40-year break in discovering new classes of antibacterial compounds, four new classes of antibiotics have been brought into clinical use in the late 2000s and early 2010s: cyclic lipopeptides (such as daptomycin), glycylcyclines (such as tigecycline), oxazolidinones (such as linezolid), and lipiarmycins (such as fidaxomicin).[58][59]
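The mechanism-based grouping described above can be restated as a small lookup table; the sketch below only mirrors the classes and targets named in this paragraph and is not an exhaustive or authoritative classification.

    # Targets and typical activities as named in the paragraph above (not exhaustive).
    ANTIBIOTIC_CLASSES = {
        "penicillins":     {"target": "cell wall",         "activity": "bactericidal"},
        "cephalosporins":  {"target": "cell wall",         "activity": "bactericidal"},
        "polymyxins":      {"target": "cell membrane",     "activity": "bactericidal"},
        "quinolones":      {"target": "bacterial enzymes", "activity": "bactericidal"},
        "macrolides":      {"target": "protein synthesis", "activity": "bacteriostatic"},
        "tetracyclines":   {"target": "protein synthesis", "activity": "bacteriostatic"},
        "aminoglycosides": {"target": "protein synthesis", "activity": "bactericidal"},
    }

    def classes_targeting(target: str):
        """Return the listed classes that act on the given bacterial target."""
        return sorted(name for name, info in ANTIBIOTIC_CLASSES.items()
                      if info["target"] == target)

    print(classes_targeting("protein synthesis"))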
With advances in medicinal chemistry, most modern antibacterials are semisynthetic modifications of various natural compounds.[60] These include, for example, the beta-lactam antibiotics, which include the penicillins (produced by fungi in the genus Penicillium), the cephalosporins, and the carbapenems. Compounds that are still isolated from living organisms are the aminoglycosides, whereas other antibacterials—for example, the sulfonamides, the quinolones, and the oxazolidinones—are produced solely by chemical synthesis.[60] Many antibacterial compounds are relatively small molecules with a molecular weight of less than 1000 daltons.[61]
Since the first pioneering efforts of Howard Florey and Chain in 1939, the importance of antibiotics, including antibacterials, to medicine has led to intense research into producing antibacterials at large scales. Following screening of antibacterials against a wide range of bacteria, production of the active compounds is carried out using fermentation, usually in strongly aerobic conditions.[citation needed]
The emergence of resistance of bacteria to antibiotics is a common phenomenon. Emergence of resistance often reflects evolutionary processes that take place during antibiotic therapy. The antibiotic treatment may select for bacterial strains with physiologically or genetically enhanced capacity to survive high doses of antibiotics. Under certain conditions, it may result in preferential growth of resistant bacteria, while growth of susceptible bacteria is inhibited by the drug.[62] For example, antibacterial selection for strains having previously acquired antibacterial-resistance genes was demonstrated in 1943 by the Luria–Delbrück experiment.[63] Antibiotics such as penicillin and erythromycin, which used to have a high efficacy against many bacterial species and strains, have become less effective, due to the increased resistance of many bacterial strains.[64]
Resistance may take the form of biodegradation of pharmaceuticals, such as sulfamethazine-degrading soil bacteria introduced to sulfamethazine through medicated pig feces.[65]
The survival of bacteria often results from an inheritable resistance,[66] but the growth of resistance to antibacterials also occurs through horizontal gene transfer. Horizontal transfer is more likely to happen in locations of frequent antibiotic use.[67]
Antibacterial resistance may impose a biological cost, thereby reducing fitness of resistant strains, which can limit the spread of antibacterial-resistant bacteria, for example, in the absence of antibacterial compounds. Additional mutations, however, may compensate for this fitness cost and can aid the survival of these bacteria.[68]
Paleontological data show that both antibiotic compounds and antibiotic resistance mechanisms are ancient.[69] Useful antibiotic targets are those for which mutations negatively impact bacterial reproduction or viability.[70]
Several molecular mechanisms of antibacterial resistance exist. Intrinsic antibacterial resistance may be part of the genetic makeup of bacterial strains.[71][72] For example, an antibiotic target may be absent from the bacterial genome. Acquired resistance results from a mutation in the bacterial chromosome or the acquisition of extra-chromosomal DNA.[71] Antibacterial-producing bacteria have evolved resistance mechanisms that have been shown to be similar to, and may have been transferred to, antibacterial-resistant strains.[73][74] The spread of antibacterial resistance often occurs through vertical transmission of mutations during growth and by genetic recombination of DNA by horizontal genetic exchange.[66] For instance, antibacterial resistance genes can be exchanged between different bacterial strains or species via plasmids that carry these resistance genes.[66][75] Plasmids that carry several different resistance genes can confer resistance to multiple antibacterials.[75] Cross-resistance to several antibacterials may also occur when a resistance mechanism encoded by a single gene conveys resistance to more than one antibacterial compound.[75]
Antibacterial-resistant strains and species, sometimes referred to as "superbugs", now contribute to the emergence of diseases that were for a while well controlled. For example, emergent bacterial strains causing tuberculosis that are resistant to previously effective antibacterial treatments pose many therapeutic challenges. Every year, nearly half a million new cases of multidrug-resistant tuberculosis (MDR-TB) are estimated to occur worldwide.[76] Another example is NDM-1, a newly identified enzyme conveying bacterial resistance to a broad range of beta-lactam antibacterials.[77] The United Kingdom's Health Protection Agency has stated that "most isolates with NDM-1 enzyme are resistant to all standard intravenous antibiotics for treatment of severe infections."[78] On 26 May 2016, an E. coli "superbug" resistant to colistin, "the last line of defence" antibiotic, was identified in the United States.[79][80]
Per The ICU Book "The first rule of antibiotics is to try not to use them, and the second rule is try not to use too many of them."[81] Inappropriate antibiotic treatment and overuse of antibiotics have contributed to the emergence of antibiotic-resistant bacteria. Self-prescribing of antibiotics is an example of misuse.[82] Many antibiotics are frequently prescribed to treat symptoms or diseases that do not respond to antibiotics or that are likely to resolve without treatment. Also, incorrect or suboptimal antibiotics are prescribed for certain bacterial infections.[22][82] The overuse of antibiotics, like penicillin and erythromycin, has been associated with emerging antibiotic resistance since the 1950s.[64][83] Widespread usage of antibiotics in hospitals has also been associated with increases in bacterial strains and species that no longer respond to treatment with the most common antibiotics.[83]
Common forms of antibiotic misuse include excessive use of prophylactic antibiotics in travelers and failure of medical professionals to prescribe the correct dosage of antibiotics on the basis of the patient's weight and history of prior use. Other forms of misuse include failure to take the entire prescribed course of the antibiotic, incorrect dosage and administration, or failure to rest for sufficient recovery. Inappropriate antibiotic treatment, for example, is their prescription to treat viral infections such as the common cold. One study on respiratory tract infections found "physicians were more likely to prescribe antibiotics to patients who appeared to expect them".[84] Multifactorial interventions aimed at both physicians and patients can reduce inappropriate prescription of antibiotics.[85][86] The lack of rapid point of care diagnostic tests, particularly in resource-limited settings is considered as one of the drivers of antibiotic misuse.[87]
Several organizations concerned with antimicrobial resistance are lobbying to eliminate the unnecessary use of antibiotics.[82] The issues of misuse and overuse of antibiotics have been addressed by the formation of the US Interagency Task Force on Antimicrobial Resistance. This task force aims to actively address antimicrobial resistance, and is coordinated by the US Centers for Disease Control and Prevention, the Food and Drug Administration (FDA), and the National Institutes of Health, as well as other US agencies.[88] A non-governmental organization campaign group is Keep Antibiotics Working.[89] In France, an "Antibiotics are not automatic" government campaign started in 2002 and led to a marked reduction of unnecessary antibiotic prescriptions, especially in children.[90]
The emergence of antibiotic resistance has prompted restrictions on their use in the UK in 1970 (Swann report 1969), and the European Union has banned the use of antibiotics as growth-promotional agents since 2003.[91] Moreover, several organizations (including the World Health Organization, the National Academy of Sciences, and the U.S. Food and Drug Administration) have advocated restricting the amount of antibiotic use in food animal production.[92] However, commonly there are delays in regulatory and legislative actions to limit the use of antibiotics, attributable partly to resistance against such regulation by industries using or selling antibiotics, and to the time required for research to test causal links between their use and resistance to them. Two federal bills (S.742[93] and H.R. 2562[94]) aimed at phasing out nontherapeutic use of antibiotics in US food animals were proposed, but have not passed.[93][94] These bills were endorsed by public health and medical organizations, including the American Holistic Nurses' Association, the American Medical Association, and the American Public Health Association.[95]
Despite pledges by food companies and restaurants to reduce or eliminate meat that comes from animals treated with antibiotics, the purchase of antibiotics for use on farm animals has been increasing every year.[96]
There has been extensive use of antibiotics in animal husbandry. In the United States, the question of emergence of antibiotic-resistant bacterial strains due to use of antibiotics in livestock was raised by the US Food and Drug Administration (FDA) in 1977. In March 2012, the United States District Court for the Southern District of New York, ruling in an action brought by the Natural Resources Defense Council and others, ordered the FDA to revoke approvals for the use of antibiotics in livestock, which violated FDA regulations.[97]
Before the early 20th century, treatments for infections were based primarily on medicinal folklore. Mixtures with antimicrobial properties that were used in the treatment of infections were described over 2,000 years ago.[98] Many ancient cultures, including the ancient Egyptians and ancient Greeks, used specially selected mold and plant materials to treat infections.[99][100] Nubian mummies studied in the 1990s were found to contain significant levels of tetracycline. The beer brewed at that time was conjectured to have been the source.[101]
The use of antibiotics in modern medicine began with the discovery of synthetic antibiotics derived from dyes.[56][102][103][104][105]
Synthetic antibiotic chemotherapy as a science and development of antibacterials began in Germany with Paul Ehrlich in the late 1880s.[56] Ehrlich noted certain dyes would color human, animal, or bacterial cells, whereas others did not. He then proposed the idea that it might be possible to create chemicals that would act as a selective drug that would bind to and kill bacteria without harming the human host. After screening hundreds of dyes against various organisms, in 1907, he discovered a medicinally useful drug, the first synthetic antibacterial organoarsenic compound salvarsan,[56][102][103] now called arsphenamine.
This heralded the era of antibacterial treatment that was begun with the discovery of a series of arsenic-derived synthetic antibiotics by both Alfred Bertheim and Ehrlich in 1907.[104][105] Ehrlich and Bertheim had experimented with various chemicals derived from dyes to treat trypanosomiasis in mice and spirochaeta infection in rabbits. While their early compounds were too toxic, Ehrlich and Sahachiro Hata, a Japanese bacteriologist working with Ehrlich in the quest for a drug to treat syphilis, achieved success with the 606th compound in their series of experiments. In 1910 Ehrlich and Hata announced their discovery, which they called drug "606", at the Congress for Internal Medicine at Wiesbaden.[106] The Hoechst company began to market the compound toward the end of 1910 under the name Salvarsan, now known as arsphenamine.[106] The drug was used to treat syphilis in the first half of the 20th century. In 1908, Ehrlich received the Nobel Prize in Physiology or Medicine for his contributions to immunology.[107] Hata was nominated for the Nobel Prize in Chemistry in 1911 and for the Nobel Prize in Physiology or Medicine in 1912 and 1913.[108]
The first sulfonamide and the first systemically active antibacterial drug, Prontosil, was developed by a research team led by Gerhard Domagk in 1932 or 1933 at the Bayer Laboratories of the IG Farben conglomerate in Germany,[105][109][103] for which Domagk received the 1939 Nobel Prize in Physiology or Medicine.[110] Sulfanilamide, the active drug of Prontosil, was not patentable as it had already been in use in the dye industry for some years.[109] Prontosil had a relatively broad effect against Gram-positive cocci, but not against enterobacteria. Research was stimulated apace by its success. The discovery and development of this sulfonamide drug opened the era of antibacterials.[111][112]
Observations about the growth of some microorganisms inhibiting the growth of other microorganisms have been reported since the late 19th century. These observations of antibiosis between microorganisms led to the discovery of natural antibacterials. Louis Pasteur observed, "if we could intervene in the antagonism observed between some bacteria, it would offer perhaps the greatest hopes for therapeutics".[113]
In 1874, physician Sir William Roberts noted that cultures of the mold Penicillium glaucum that is used in the making of some types of blue cheese did not display bacterial contamination.[114] In 1876, physicist John Tyndall also contributed to this field.[115] Pasteur conducted research showing that Bacillus anthracis would not grow in the presence of the related mold Penicillium notatum.
In 1895, Vincenzo Tiberio, an Italian physician, published a paper on the antibacterial power of some extracts of mold.[116]
In 1897, doctoral student Ernest Duchesne submitted a dissertation, "Contribution à l'étude de la concurrence vitale chez les micro-organismes: antagonisme entre les moisissures et les microbes" (Contribution to the study of vital competition in micro-organisms: antagonism between molds and microbes),[117] the first known scholarly work to consider the therapeutic capabilities of molds resulting from their anti-microbial activity. In his thesis, Duchesne proposed that bacteria and molds engage in a perpetual battle for survival. Duchesne observed that E. coli was eliminated by Penicillium glaucum when they were both grown in the same culture. He also observed that when he inoculated laboratory animals with lethal doses of typhoid bacilli together with Penicillium glaucum, the animals did not contract typhoid. Unfortunately Duchesne's army service after getting his degree prevented him from doing any further research.[118] Duchesne died of tuberculosis, a disease now treated by antibiotics.[118]
In 1928, Sir Alexander Fleming postulated the existence of penicillin, a molecule produced by certain molds that kills or stops the growth of certain kinds of bacteria. Fleming was working on a culture of disease-causing bacteria when he noticed the spores of a green mold, Penicillium chrysogenum, in one of his culture plates. He observed that the presence of the mold killed or prevented the growth of the bacteria.[119] Fleming postulated that the mold must secrete an antibacterial substance, which he named penicillin in 1928. Fleming believed that its antibacterial properties could be exploited for chemotherapy. He initially characterized some of its biological properties, and attempted to use a crude preparation to treat some infections, but he was unable to pursue its further development without the aid of trained chemists.[120][121]
Ernst Chain, Howard Florey and Edward Abraham succeeded in purifying the first penicillin, penicillin G, in 1942, but it did not become widely available outside the Allied military before 1945. Later, Norman Heatley developed the back extraction technique for efficiently purifying penicillin in bulk. The chemical structure of penicillin was first proposed by Abraham in 1942[122] and then later confirmed by Dorothy Crowfoot Hodgkin in 1945. Purified penicillin displayed potent antibacterial activity against a wide range of bacteria and had low toxicity in humans. Furthermore, its activity was not inhibited by biological constituents such as pus, unlike the synthetic sulfonamides (see below). The development of penicillin led to renewed interest in the search for antibiotic compounds with similar efficacy and safety.[123] For their successful development of penicillin, which Fleming had accidentally discovered but could not develop himself, as a therapeutic drug, Chain and Florey shared the 1945 Nobel Prize in Medicine with Fleming.[124]
Florey credited Rene Dubos with pioneering the approach of deliberately and systematically searching for antibacterial compounds, which had led to the discovery of gramicidin and had revived Florey's research in penicillin.[125] In 1939, coinciding with the start of World War II, Dubos had reported the discovery of the first naturally derived antibiotic, tyrothricin, a compound of 20% gramicidin and 80% tyrocidine, from Bacillus brevis. It was one of the first commercially manufactured antibiotics and was very effective in treating wounds and ulcers during World War II.[125] Gramicidin, however, could not be used systemically because of toxicity. Tyrocidine also proved too toxic for systemic usage. Research results obtained during that period were not shared between the Axis and the Allied powers during World War II, and access to them remained limited during the Cold War.[126]
During the mid-20th century, the number of new antibiotic substances introduced for medical use increased significantly. From 1935 to 1968, 12 new classes were launched. However, after this, the number of new classes dropped markedly, with only two new classes introduced between 1969 and 2003.[127]
The term 'antibiosis', meaning "against life", was introduced by the French bacteriologist Jean Paul Vuillemin as a descriptive name of the phenomenon exhibited by these early antibacterial drugs.[56][128][129] Antibiosis was first described in 1877 in bacteria when Louis Pasteur and Robert Koch observed that an airborne bacillus could inhibit the growth of Bacillus anthracis.[128][130] These drugs were later renamed antibiotics by Selman Waksman, an American microbiologist, in 1942.[56][128][131]
The term antibiotic was first used in 1942 by Selman Waksman and his collaborators in journal articles to describe any substance produced by a microorganism that is antagonistic to the growth of other microorganisms in high dilution.[128][131] This definition excluded substances that kill bacteria but that are not produced by microorganisms (such as gastric juices and hydrogen peroxide). It also excluded synthetic antibacterial compounds such as the sulfonamides. In current usage, the term "antibiotic" is applied to any medication that kills bacteria or inhibits their growth, regardless of whether that medication is produced by a microorganism or not.[132][133]
The term "antibiotic" derives from anti + βιωτικός (biōtikos), "fit for life, lively",[134] which comes from βίωσις (biōsis), "way of life",[135] and that from βίος (bios), "life".[46][136] The term "antibacterial" derives from Greek ἀντί (anti), "against"[137] + βακτήριον (baktērion), diminutive of βακτηρία (baktēria), "staff, cane",[138] because the first bacteria to be discovered were rod-shaped.[139]
The increase in bacterial strains that are resistant to conventional antibacterial therapies together with decreasing number of new antibiotics currently being developed in the drug pipeline has prompted the development of bacterial disease treatment strategies that are alternatives to conventional antibacterials.[140][141] Non-compound approaches (that is, products other than classical antibacterial agents) that target bacteria or approaches that target the host including phage therapy and vaccines are also being investigated to combat the problem.[142]
One strategy to address bacterial drug resistance is the discovery and application of compounds that modify resistance to common antibacterials. Resistance modifying agents are capable of partly or completely suppressing bacterial resistance mechanisms.[143] For example, some resistance-modifying agents may inhibit multidrug resistance mechanisms, such as drug efflux from the cell, thus increasing the susceptibility of bacteria to an antibacterial.[143][144] Targets include:
Metabolic stimuli such as sugar can help eradicate a certain type of antibiotic-tolerant bacteria by keeping their metabolism active.[146]
Vaccines rely on immune modulation or augmentation. Vaccination either excites or reinforces the immune competence of a host to ward off infection, leading to the activation of macrophages, the production of antibodies, inflammation, and other classic immune reactions. Antibacterial vaccines have been responsible for a drastic reduction in global bacterial diseases.[147] Vaccines made from attenuated whole cells or lysates have been replaced largely by less reactogenic, cell-free vaccines consisting of purified components, including capsular polysaccharides and their conjugates, to protein carriers, as well as inactivated toxins (toxoids) and proteins.[148]
Phage therapy is another method for treating antibiotic-resistant strains of bacteria. Phage therapy infects pathogenic bacteria with their own viruses. Bacteriophages and their host ranges are extremely specific for certain bacteria, thus, unlike antibiotics, they do not disturb the host organism and intestinal microflora.[149] Bacteriophages, also known simply as phages, infect and can kill bacteria and affect bacterial growth primarily during lytic cycles.[149][150] Phages insert their DNA into the bacterium, where it is transcribed and used to make new phages, after which the cell will lyse, releasing new phage that are able to infect and destroy further bacteria of the same strain.[150] The high specificity of phage protects "good" bacteria from destruction.
Some disadvantages to the use of bacteriophages also exist, however. Bacteriophages may harbour virulence factors or toxic genes in their genomes and, prior to use, it may be prudent to identify genes with similarity to known virulence factors or toxins by genomic sequencing. In addition, the oral and IV administration of phages for the eradication of bacterial infections poses a much higher safety risk than topical application. Also, there is the additional concern of uncertain immune responses to these large antigenic cocktails.
There are considerable regulatory hurdles that must be cleared for such therapies.[149] Despite numerous challenges, the use of bacteriophages as a replacement for antimicrobial agents against MDR pathogens that no longer respond to conventional antibiotics, remains an attractive option.[149][151]
Plants are an important source of antimicrobial compounds and traditional healers have long used plants to prevent or cure infectious diseases.[152][153] There is a recent renewed interest into the use of natural products for the identification of new members of the 'antibiotic-ome' (defined as natural products with antibiotic activity), and their application in antibacterial drug discovery in the genomics era.[140][154] Phytochemicals are the active biological component of plants and some phytochemicals including tannins, alkaloids, terpenoids, and flavonoids possess antimicrobial activity.[152][155][156] Some antioxidant dietary supplements also contain phytochemicals (polyphenols), such as grape seed extract, and demonstrate in vitro anti-bacterial properties.[157][158][159] Phytochemicals are able to inhibit peptidoglycan synthesis, damage microbial membrane structures, modify bacterial membrane surface hydrophobicity and also modulate quorum sensing.[155] With increasing antibiotic resistance in recent years, the potential of new plant-derived antibiotics is under investigation.[154]
Both the WHO and the Infectious Disease Society of America reported that the weak antibiotic pipeline does not match bacteria's increasing ability to develop resistance.[160][161] The Infectious Disease Society of America report noted that the number of new antibiotics approved for marketing per year had been declining and identified seven antibiotics against the Gram-negative bacilli then in phase 2 or phase 3 clinical trials. However, these drugs did not address the entire spectrum of resistance of Gram-negative bacilli.[162][163] According to the WHO, fifty-one new therapeutic entities (antibiotics, including combinations) were in phase 1–3 clinical trials as of May 2017.[160] Recent entries in the clinical pipeline targeting multidrug-resistant Gram-positive pathogens have improved the treatment options due to the marketing approval of new antibiotic classes, the oxazolidinones and cyclic lipopeptides. However, since resistance to these antibiotics is likely to occur eventually, the development of new antibiotics against those pathogens remains a high priority.[164][160] Recent drugs in development that target Gram-negative bacteria have focused on re-working existing drugs to target specific microorganisms or specific types of resistance.[160]
A few antibiotics have received marketing authorization in the last seven years. The cephalosporin ceftaroline and the lipoglycopeptides oritavancin and telavancin were approved for the treatment of acute bacterial skin and skin structure infection and community-acquired bacterial pneumonia.[165] The lipoglycopeptide dalbavancin and the oxazolidinone tedizolid have also been approved for the treatment of acute bacterial skin and skin structure infection. The first in a new class of narrow-spectrum macrocyclic antibiotics, fidaxomicin, has been approved for the treatment of C. difficile colitis.[165] New cephalosporin-lactamase inhibitor combinations also approved include ceftazidime-avibactam and ceftolozane-tazobactam for complicated urinary tract infection and intra-abdominal infection.[165]
Streptomyces research is expected to provide new antibiotics,[166][167] including treatment against MRSA and infections resistant to commonly used medication. Efforts of John Innes Centre and universities in the UK, supported by BBSRC, resulted in the creation of spin-out companies, for example Novacta Biosystems, which has designed the type-b lantibiotic-based compound NVB302 (in phase 1) to treat Clostridium difficile infections.[168]
Possible improvements include clarification of clinical trial regulations by FDA. Furthermore, appropriate economic incentives could persuade pharmaceutical companies to invest in this endeavor.[163] In the US, the Antibiotic Development to Advance Patient Treatment (ADAPT) Act was introduced with the aim of fast tracking the drug development of antibiotics to combat the growing threat of 'superbugs'. Under this Act, FDA can approve antibiotics and antifungals treating life-threatening infections based on smaller clinical trials. The CDC will monitor the use of antibiotics and the emerging resistance, and publish the data. The FDA antibiotics labeling process, 'Susceptibility Test Interpretive Criteria for Microbial Organisms' or 'breakpoints', will provide accurate data to healthcare professionals.[169] According to Allan Coukell, senior director for health programs at The Pew Charitable Trusts, "By allowing drug developers to rely on smaller datasets, and clarifying FDA's authority to tolerate a higher level of uncertainty for these drugs when making a risk/benefit calculation, ADAPT would make the clinical trials more feasible."[170]
en/2740.html.txt
ADDED
@@ -0,0 +1,306 @@
Insects or Insecta (from Latin insectum) are hexapod invertebrates and the largest group within the arthropod phylum. Definitions and circumscriptions vary; usually, insects comprise a class within the Arthropoda. As used here, the term Insecta is synonymous with Ectognatha. Insects have a chitinous exoskeleton, a three-part body (head, thorax and abdomen), three pairs of jointed legs, compound eyes and one pair of antennae. Insects are the most diverse group of animals; they include more than a million described species and represent more than half of all known living organisms.[2][3] The total number of extant species is estimated at between six and ten million;[2][4][5] potentially over 90% of the animal life forms on Earth are insects.[5][6] Insects may be found in nearly all environments, although only a small number of species reside in the oceans, which are dominated by another arthropod group, crustaceans.
Nearly all insects hatch from eggs. Insect growth is constrained by the inelastic exoskeleton and development involves a series of molts. The immature stages often differ from the adults in structure, habit and habitat, and can include a passive pupal stage in those groups that undergo four-stage metamorphosis. Insects that undergo three-stage metamorphosis lack a pupal stage and adults develop through a series of nymphal stages.[7] The higher level relationship of the insects is unclear. Fossilized insects of enormous size have been found from the Paleozoic Era, including giant dragonflies with wingspans of 55 to 70 cm (22 to 28 in). The most diverse insect groups appear to have coevolved with flowering plants.
Adult insects typically move about by walking, flying, or sometimes swimming. As it allows for rapid yet stable movement, many insects adopt a tripedal gait in which they walk with their legs touching the ground in alternating triangles, composed of the front and rear on one side with the middle on the other side. Insects are the only invertebrates to have evolved flight, and all flying insects derive from one common ancestor. Many insects spend at least part of their lives under water, with larval adaptations that include gills, and some adult insects are aquatic and have adaptations for swimming. Some species, such as water striders, are capable of walking on the surface of water. Insects are mostly solitary, but some, such as certain bees, ants and termites, are social and live in large, well-organized colonies. Some insects, such as earwigs, show maternal care, guarding their eggs and young. Insects can communicate with each other in a variety of ways. Male moths can sense the pheromones of female moths over great distances. Other species communicate with sounds: crickets stridulate, or rub their wings together, to attract a mate and repel other males. Lampyrid beetles communicate with light.
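A minimal sketch of the alternating tripod gait described above, in which the front and hind legs of one side form a support triangle with the middle leg of the opposite side; the leg labels are purely illustrative.

    # The two support triangles (tripods) of the alternating tripod gait.
    TRIPOD_A = ("left front", "right middle", "left hind")
    TRIPOD_B = ("right front", "left middle", "right hind")

    def stance_legs(step: int):
        """Legs in contact with the ground on a given step; the other three swing."""
        return TRIPOD_A if step % 2 == 0 else TRIPOD_B

    for step in range(4):
        print(step, stance_legs(step))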
Humans regard certain insects as pests, and attempt to control them using insecticides, and a host of other techniques. Some insects damage crops by feeding on sap, leaves, fruits, or wood. Some species are parasitic, and may vector diseases. Some insects perform complex ecological roles; blow-flies, for example, help consume carrion but also spread diseases. Insect pollinators are essential to the life cycle of many flowering plant species on which most organisms, including humans, are at least partly dependent; without them, the terrestrial portion of the biosphere would be devastated.[8] Many insects are considered ecologically beneficial as predators and a few provide direct economic benefit. Silkworms produce silk and honey bees produce honey and both have been domesticated by humans. Insects are consumed as food in 80% of the world's nations, by people in roughly 3000 ethnic groups.[9][10] Human activities also have effects on insect biodiversity.
The word "insect" comes from the Latin word insectum, meaning "with a notched or divided body", or literally "cut into", from the neuter singular perfect passive participle of insectare, "to cut into, to cut up", from in- "into" and secare "to cut";[11] because insects appear "cut into" three sections. A calque of Greek ἔντομον [éntomon], "cut into sections", Pliny the Elder introduced the Latin designation as a loan-translation of the Greek word ἔντομος (éntomos) or "insect" (as in entomology), which was Aristotle's term for this class of life, also in reference to their "notched" bodies. "Insect" first appears documented in English in 1601 in Holland's translation of Pliny. Translations of Aristotle's term also form the usual word for "insect" in Welsh (trychfil, from trychu "to cut" and mil, "animal"), Serbo-Croatian (zareznik, from rezati, "to cut"), Russian (насекомое nasekomoje, from seč'/-sekat', "to cut"), etc.[11][12]
The precise definition of the taxon Insecta and the equivalent English name "insect" varies; three alternative definitions are shown in the table.
In the broadest circumscription, Insecta sensu lato consists of all hexapods.[13][14] Traditionally, insects defined in this way were divided into "Apterygota" (the first five groups in the table)—the wingless insects—and Pterygota—the winged and secondarily wingless insects.[15] However, modern phylogenetic studies have shown that "Apterygota" is not monophyletic,[16] and so does not form a good taxon. A narrower circumscription restricts insects to those hexapods with external mouthparts, and comprises only the last three groups in the table. In this sense, Insecta sensu stricto is equivalent to Ectognatha.[13][16] In the narrowest circumscription, insects are restricted to hexapods that are either winged or descended from winged ancestors. Insecta sensu strictissimo is then equivalent to Pterygota.[17] For the purposes of this article, the middle definition is used; insects consist of two wingless taxa, Archaeognatha (jumping bristletails) and Zygentoma (silverfish), plus the winged or secondarily wingless Pterygota.
Groups shown in the tree: Hexapoda (Insecta, Collembola, Diplura, Protura); Crustacea (crabs, shrimp, isopods, etc.); Pauropoda; Diplopoda (millipedes); Chilopoda (centipedes); Symphyla; Arachnida (spiders, scorpions, mites, ticks, etc.); Eurypterida (sea scorpions: extinct); Xiphosura (horseshoe crabs); Pycnogonida (sea spiders); †Trilobites (extinct).
A phylogenetic tree of the arthropods and related groups[18]
The evolutionary relationship of insects to other animal groups remains unclear.
Although traditionally grouped with millipedes and centipedes—possibly on the basis of convergent adaptations to terrestrialisation[19]—evidence has emerged favoring closer evolutionary ties with crustaceans. In the Pancrustacea theory, insects, together with Entognatha, Remipedia, and Cephalocarida, make up a natural clade labeled Miracrustacea.[20]
Insects form a single clade, closely related to crustaceans and myriapods.[21]
Other terrestrial arthropods, such as centipedes, millipedes, scorpions, spiders, woodlice, mites, and ticks are sometimes confused with insects since their body plans can appear similar, sharing (as do all arthropods) a jointed exoskeleton. However, upon closer examination, their features differ significantly; most noticeably, they do not have the six-legged characteristic of adult insects.[22]
The higher-level phylogeny of the arthropods continues to be a matter of debate and research. In 2008, researchers at Tufts University uncovered what they believe is the world's oldest known full-body impression of a primitive flying insect, a 300-million-year-old specimen from the Carboniferous period.[23] The oldest definitive insect fossil is the Devonian Rhyniognatha hirsti, from the 396-million-year-old Rhynie chert. It may have superficially resembled a modern-day silverfish insect. This species already possessed dicondylic mandibles (two articulations in the mandible), a feature associated with winged insects, suggesting that wings may already have evolved at this time. Thus, the first insects probably appeared earlier, in the Silurian period.[1][24]
Four super radiations of insects have occurred: beetles (from about 300 million years ago), flies (from about 250 million years ago), moths and wasps (both from about 150 million years ago).[25] These four groups account for the majority of described species. The flies and moths along with the fleas evolved from the Mecoptera.
The origins of insect flight remain obscure, since the earliest winged insects currently known appear to have been capable fliers. Some extinct insects had an additional pair of winglets attaching to the first segment of the thorax, for a total of three pairs. As of 2009, no evidence suggests the insects were a particularly successful group of animals before they evolved to have wings.[26]
Late Carboniferous and Early Permian insect orders include both extant groups, their stem groups,[27] and a number of Paleozoic groups, now extinct. During this era, some giant dragonfly-like forms reached wingspans of 55 to 70 cm (22 to 28 in), making them far larger than any living insect. This gigantism may have been due to higher atmospheric oxygen levels that allowed increased respiratory efficiency relative to today. The lack of flying vertebrates could have been another factor. Most extinct orders of insects developed during the Permian period that began around 270 million years ago. Many of the early groups became extinct during the Permian-Triassic extinction event, the largest mass extinction in the history of the Earth, around 252 million years ago.[28]
The remarkably successful Hymenoptera appeared as long as 146 million years ago in the Cretaceous period, but achieved their wide diversity more recently in the Cenozoic era, which began 66 million years ago. A number of highly successful insect groups evolved in conjunction with flowering plants, a powerful illustration of coevolution.[29]
Many modern insect genera developed during the Cenozoic. Insects from this period on are often found preserved in amber, often in perfect condition. The body plan, or morphology, of such specimens is thus easily compared with modern species. The study of fossilized insects is called paleoentomology.
Archaeognatha (hump-backed/jumping bristletails)
Zygentoma (silverfish, firebrats, fishmoths)
†Carbotriplurida
†Bojophlebiidae
Odonatoptera (dragonflies)
Panephemeroptera (mayflies)
Zoraptera (angel insects)
Dermaptera (earwigs)
Plecoptera (stoneflies)
Orthoptera (grasshoppers, crickets, katydids)
Mantodea (praying mantises)
Blattodea (cockroaches & termites)
Grylloblattodea (ice-crawlers)
Mantophasmatodea (gladiators)
Phasmatodea (stick insects)
Embioptera (web spinners)
Psocodea (booklice, barklice & sucking lice)
Hemiptera (true bugs)
Thysanoptera (thrips)
Hymenoptera (sawflies, wasps, bees, ants)
Strepsiptera
Coleoptera (beetles)
Rhaphidioptera
Neuroptera (lacewings)
Megaloptera
Lepidoptera (butterflies & moths)
Trichoptera (caddisflies)
Diptera (true flies)
Nannomecoptera
Mecoptera (scorpionflies)
Neomecoptera (winter scorpionflies)
Siphonaptera (fleas)
The groups listed above follow a cladogram based on the works of Sroka, Staniczek & Bechly 2014,[30] Prokop et al. 2017[31] and Wipfler et al. 2019.[32]
Cladogram of living insect groups,[33] with numbers of species in each group.[5] The Apterygota, Palaeoptera, and Exopterygota are possibly paraphyletic groups.
Traditional morphology-based or appearance-based systematics have usually given the Hexapoda the rank of superclass,[34]:180 and identified four groups within it: insects (Ectognatha), springtails (Collembola), Protura, and Diplura, the latter three being grouped together as the Entognatha on the basis of internalized mouth parts. Supraordinal relationships have undergone numerous changes with the advent of methods based on evolutionary history and genetic data. A recent theory is that the Hexapoda are polyphyletic (where the last common ancestor was not a member of the group), with the entognath classes having separate evolutionary histories from the Insecta.[35] Many of the traditional appearance-based taxa have been shown to be paraphyletic, so rather than using ranks like subclass, superorder, and infraorder, it has proved better to use monophyletic groupings (in which the last common ancestor is a member of the group). The following represents the best-supported monophyletic groupings for the Insecta.
Insects can be divided into two groups historically treated as subclasses: wingless insects, known as Apterygota, and winged insects, known as Pterygota. The Apterygota consist of the primitively wingless order of the silverfish (Zygentoma). Archaeognatha make up the Monocondylia based on the shape of their mandibles, while Zygentoma and Pterygota are grouped together as Dicondylia. The Zygentoma themselves possibly are not monophyletic, with the family Lepidotrichidae being a sister group to the Dicondylia (Pterygota and the remaining Zygentoma).[36][37]
Paleoptera and Neoptera are the winged orders of insects differentiated by the presence of hardened body parts called sclerites, and in the Neoptera, muscles that allow their wings to fold flatly over the abdomen. Neoptera can further be divided into incomplete metamorphosis-based (Polyneoptera and Paraneoptera) and complete metamorphosis-based groups. It has proved difficult to clarify the relationships between the orders in Polyneoptera because of constant new findings calling for revision of the taxa. For example, the Paraneoptera have turned out to be more closely related to the Endopterygota than to the rest of the Exopterygota. The recent molecular finding that the traditional louse orders Mallophaga and Anoplura are derived from within Psocoptera has led to the new taxon Psocodea.[38] Phasmatodea and Embiidina have been suggested to form the Eukinolabia.[39] Mantodea, Blattodea, and Isoptera are thought to form a monophyletic group termed Dictyoptera.[40]
The Exopterygota likely are paraphyletic in regard to the Endopterygota. Matters that have incurred controversy include Strepsiptera and Diptera grouped together as Halteria based on a reduction of one of the wing pairs—a position not well-supported in the entomological community.[41] The Neuropterida are often lumped or split on the whims of the taxonomist. Fleas are now thought to be closely related to boreid mecopterans.[42] Many questions remain in the basal relationships among endopterygote orders, particularly the Hymenoptera.
The study of the classification or taxonomy of any insect is called systematic entomology. If one works with a more specific order or even a family, the term may also be made specific to that order or family, for example systematic dipterology.
Insects are prey for a variety of organisms, including terrestrial vertebrates. The earliest vertebrates on land existed 400 million years ago and were large amphibious piscivores. Through gradual evolutionary change, insectivory was the next diet type to evolve.[43]
Insects were among the earliest terrestrial herbivores and acted as major selection agents on plants.[29] Plants evolved chemical defenses against this herbivory and the insects, in turn, evolved mechanisms to deal with plant toxins. Many insects make use of these toxins to protect themselves from their predators. Such insects often advertise their toxicity using warning colors.[44] This successful evolutionary pattern has also been used by mimics. Over time, this has led to complex groups of coevolved species. Conversely, some interactions between plants and insects, like pollination, are beneficial to both organisms. Coevolution has led to the development of very specific mutualisms in such systems.
Estimates on the total number of insect species, or those within specific orders, often vary considerably. Globally, averages of these estimates suggest there are around 1.5 million beetle species and 5.5 million insect species, with about 1 million insect species currently found and described.[45]
Between 950,000 and 1,000,000 of all described species are insects, so over 50% of all described eukaryotes (1.8 million species) are insects. With only 950,000 known non-insects, if the actual number of insects is 5.5 million, they may represent over 80% of the total. As only about 20,000 new species of all organisms are described each year, most insect species may remain undescribed, unless the rate of species descriptions greatly increases. Of the 24 orders of insects, four dominate in terms of numbers of described species; at least 670,000 identified species belong to Coleoptera, Diptera, Hymenoptera or Lepidoptera.
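The arithmetic behind these proportions can be checked directly; the following is a minimal sketch using the figures quoted above, all of which are estimates:

    # Rough check of the proportions quoted above; all figures are estimates.
    described_insects = 1_000_000     # described insect species (upper figure)
    described_eukaryotes = 1_800_000  # all described eukaryote species
    described_non_insects = 950_000   # described non-insect species
    estimated_insects = 5_500_000     # estimated total insect species

    share_of_described = described_insects / described_eukaryotes
    share_if_estimate_holds = estimated_insects / (estimated_insects + described_non_insects)

    print(f"Share of described eukaryotes that are insects: {share_of_described:.0%}")       # ~56%
    print(f"Share of the estimated total that are insects:  {share_if_estimate_holds:.0%}")  # ~85%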
As of 2017, at least 66 insect species extinctions had been recorded in the previous 500 years, generally on oceanic islands.[47] Declines in insect abundance have been attributed to artificial lighting,[48] land use changes such as urbanization or agricultural use,[49][50] pesticide use,[51] and invasive species.[52] Studies summarized in a 2019 review suggested that a large proportion of insect species are threatened with extinction in the 21st century.[53] However, ecologist Manu Sanders notes that the 2019 review was biased by mostly excluding data showing increases or stability in insect populations, and that its studies were limited to specific geographic areas and specific groups of species.[54] A larger meta-study published in 2020, analyzing data from 166 long-term surveys, suggested that populations of terrestrial insects are decreasing by about 9% per decade.[55][56] Claims of pending mass insect extinctions or an "insect apocalypse" based on a subset of these studies have been popularized in news reports, but often extrapolate beyond the study data or hyperbolize study findings.[57] Other areas have shown increases in some insect species, although trends in most regions are currently unknown. It is difficult to assess long-term trends in insect abundance or diversity because historical measurements are generally not known for many species. Robust data to assess at-risk areas or species is especially lacking for arctic and tropical regions and a majority of the southern hemisphere.[57]
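To put the reported rate in perspective, a small illustrative calculation, assuming purely for illustration that the roughly 9% per-decade decline compounds at a constant rate:

    # Illustrative compounding of a 9% per-decade decline (constant-rate assumption).
    rate_per_decade = 0.09
    for decades in (1, 3, 5):
        remaining = (1 - rate_per_decade) ** decades
        print(f"After {decades * 10} years, about {remaining:.0%} of the original abundance remains")
    # Prints roughly 91%, 75% and 62% for 10, 30 and 50 years respectively.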
Insects have segmented bodies supported by exoskeletons, the hard outer covering made mostly of chitin. The segments of the body are organized into three distinctive but interconnected units, or tagmata: a head, a thorax and an abdomen.[58] The head supports a pair of sensory antennae, a pair of compound eyes, zero to three simple eyes (or ocelli) and three sets of variously modified appendages that form the mouthparts. The thorax is made up of three segments: the prothorax, mesothorax and the metathorax. Each thoracic segment supports one pair of legs. The meso- and metathoracic segments may each have a pair of wings, depending on the insect. The abdomen consists of eleven segments, though in a few species of insects, these segments may be fused together or reduced in size. The abdomen also contains most of the digestive, respiratory, excretory and reproductive internal structures.[34]:22–48 Considerable variation and many adaptations in the body parts of insects occur, especially in the wings, legs, antennae and mouthparts.
The head is enclosed in a hard, heavily sclerotized, unsegmented, exoskeletal head capsule, or epicranium, which contains most of the sensing organs, including the antennae, ocellus or eyes, and the mouthparts. Of all the insect orders, Orthoptera displays the most features found in other insects, including the sutures and sclerites.[59] Here, the vertex, or the apex (dorsal region), is situated between the compound eyes for insects with a hypognathous and opisthognathous head. In prognathous insects, the vertex is not found between the compound eyes, but rather, where the ocelli are normally. This is because the primary axis of the head is rotated 90° to become parallel to the primary axis of the body. In some species, this region is modified and assumes a different name.[59]:13
The thorax is a tagma composed of three sections, the prothorax, mesothorax and the metathorax. The anterior segment, closest to the head, is the prothorax, with the major features being the first pair of legs and the pronotum. The middle segment is the mesothorax, with the major features being the second pair of legs and the anterior wings. The third and most posterior segment, abutting the abdomen, is the metathorax, which features the third pair of legs and the posterior wings. Each segment is delineated by an intersegmental suture. Each segment has four basic regions. The dorsal surface is called the tergum (or notum) to distinguish it from the abdominal terga.[34] The two lateral regions are called the pleura (singular: pleuron) and the ventral aspect is called the sternum. In turn, the notum of the prothorax is called the pronotum, the notum for the mesothorax is called the mesonotum and the notum for the metathorax is called the metanotum. Continuing with this logic, the mesopleura and metapleura, as well as the mesosternum and metasternum, are used.[59]
The abdomen is the largest tagma of the insect, which typically consists of 11–12 segments and is less strongly sclerotized than the head or thorax. Each segment of the abdomen is represented by a sclerotized tergum and sternum. Terga are separated from each other and from the adjacent sterna or pleura by membranes. Spiracles are located in the pleural area. Variation of this ground plan includes the fusion of terga or terga and sterna to form continuous dorsal or ventral shields or a conical tube. Some insects bear a sclerite in the pleural area called a laterotergite. Ventral sclerites are sometimes called laterosternites. During the embryonic stage of many insects and the postembryonic stage of primitive insects, 11 abdominal segments are present. In modern insects there is a tendency toward reduction in the number of the abdominal segments, but the primitive number of 11 is maintained during embryogenesis. Variation in abdominal segment number is considerable. If the Apterygota are considered to be indicative of the ground plan for pterygotes, confusion reigns: adult Protura have 12 segments, Collembola have 6. The orthopteran family Acrididae has 11 segments, and a fossil specimen of Zoraptera has a 10-segmented abdomen.[59]
The insect outer skeleton, the cuticle, is made up of two layers: the epicuticle, which is a thin and waxy water resistant outer layer and contains no chitin, and a lower layer called the procuticle. The procuticle is chitinous and much thicker than the epicuticle and has two layers: an outer layer known as the exocuticle and an inner layer known as the endocuticle. The tough and flexible endocuticle is built from numerous layers of fibrous chitin and proteins, criss-crossing each other in a sandwich pattern, while the exocuticle is rigid and hardened.[34]:22–24 The exocuticle is greatly reduced in many insects during their larval stages, e.g., caterpillars. It is also reduced in soft-bodied adult insects.
Insects are the only invertebrates to have developed active flight capability, and this has played an important role in their success.[34]:186 Their flight muscles are able to contract multiple times for each single nerve impulse, allowing the wings to beat faster than would ordinarily be possible.
Having their muscles attached to their exoskeletons is efficient and allows more muscle connections.
The nervous system of an insect can be divided into a brain and a ventral nerve cord. The head capsule is made up of six fused segments, each with either a pair of ganglia, or a cluster of nerve cells outside of the brain. The first three pairs of ganglia are fused into the brain, while the three following pairs are fused into a structure of three pairs of ganglia under the insect's esophagus, called the subesophageal ganglion.[34]:57
The thoracic segments have one ganglion on each side, which are connected into a pair, one pair per segment. This arrangement is also seen in the abdomen but only in the first eight segments. Many species of insects have reduced numbers of ganglia due to fusion or reduction.[60] Some cockroaches have just six ganglia in the abdomen, whereas the wasp Vespa crabro has only two in the thorax and three in the abdomen. Some insects, like the house fly Musca domestica, have all the body ganglia fused into a single large thoracic ganglion.
At least a few insects have nociceptors, cells that detect and transmit signals responsible for the sensation of pain.[61][failed verification] This was discovered in 2003 by studying the variation in reactions of larvae of the common fruitfly Drosophila to the touch of a heated probe and an unheated one. The larvae reacted to the touch of the heated probe with a stereotypical rolling behavior that was not exhibited when the larvae were touched by the unheated probe.[62] Although nociception has been demonstrated in insects, there is no consensus that insects feel pain consciously.[63]
Insects are capable of learning.[64]
An insect uses its digestive system to extract nutrients and other substances from the food it consumes.[65] Most of this food is ingested in the form of macromolecules and other complex substances like proteins, polysaccharides, fats and nucleic acids. These macromolecules must be broken down by catabolic reactions into smaller molecules like amino acids and simple sugars before being used by cells of the body for energy, growth, or reproduction. This break-down process is known as digestion.
There is extensive variation among different orders, life stages, and even castes in the digestive system of insects.[66] This is the result of extreme adaptations to various lifestyles. The present description focuses on a generalized composition of the digestive system of an adult orthopteroid insect, which is considered basal to interpreting the particularities of other groups.
The main structure of an insect's digestive system is a long enclosed tube called the alimentary canal, which runs lengthwise through the body. The alimentary canal directs food unidirectionally from the mouth to the anus. It has three sections, each of which performs a different process of digestion. In addition to the alimentary canal, insects also have paired salivary glands and salivary reservoirs. These structures usually reside in the thorax, adjacent to the foregut.[34]:70–77 The salivary glands (element 30 in numbered diagram) in an insect's mouth produce saliva. The salivary ducts lead from the glands to the reservoirs and then forward through the head to an opening called the salivarium, located behind the hypopharynx. By moving its mouthparts (element 32 in numbered diagram) the insect can mix its food with saliva. The mixture of saliva and food then travels through the salivary tubes into the mouth, where it begins to break down.[67][68] Some insects, like flies, have extra-oral digestion. Insects using extra-oral digestion expel digestive enzymes onto their food to break it down. This strategy allows insects to extract a significant proportion of the available nutrients from the food source.[69]:31 The gut is where almost all of insects' digestion takes place. It can be divided into the foregut, midgut and hindgut.
The first section of the alimentary canal is the foregut (element 27 in numbered diagram), or stomodaeum. The foregut is lined with a cuticular lining made of chitin and proteins as protection from tough food. The foregut includes the buccal cavity (mouth), pharynx, esophagus and crop and proventriculus (any part may be highly modified), which both store food and signify when to continue passing onward to the midgut.[34]:70
Digestion starts in the buccal cavity (mouth) as partially chewed food is broken down by saliva from the salivary glands. As the salivary glands produce fluid and carbohydrate-digesting enzymes (mostly amylases), strong muscles in the pharynx pump fluid into the buccal cavity, lubricating the food as the salivarium does and aiding blood feeders as well as xylem and phloem feeders.
From there, the pharynx passes food to the esophagus, which could be just a simple tube passing it on to the crop and proventriculus, and then onward to the midgut, as in most insects. Alternately, the foregut may expand into a very enlarged crop and proventriculus, or the crop could just be a diverticulum, or fluid-filled structure, as in some Diptera species.[69]:30–31
Once food leaves the crop, it passes to the midgut (element 13 in numbered diagram), also known as the mesenteron, where the majority of digestion takes place. Microscopic projections from the midgut wall, called microvilli, increase the surface area of the wall and allow more nutrients to be absorbed; they tend to be close to the origin of the midgut. In some insects, the role of the microvilli and where they are located may vary. For example, specialized microvilli producing digestive enzymes may more likely be near the end of the midgut, and absorption near the origin or beginning of the midgut.[69]:32
In the hindgut (element 16 in numbered diagram), or proctodaeum, undigested food particles are joined by uric acid to form fecal pellets. The rectum absorbs 90% of the water in these fecal pellets, and the dry pellet is then eliminated through the anus (element 17), completing the process of digestion. Evaginations at the anterior end of the hindgut form the Malpighian tubules, which form the main excretory system of insects.
Insects may have one to hundreds of Malpighian tubules (element 20). These tubules remove nitrogenous wastes from the hemolymph of the insect and regulate osmotic balance. Wastes and solutes are emptied directly into the alimentary canal, at the junction between the midgut and hindgut.[34]:71–72, 78–80
The reproductive system of female insects consists of a pair of ovaries, accessory glands, one or more spermathecae, and ducts connecting these parts. The ovaries are made up of a number of egg tubes, called ovarioles, which vary in size and number by species. The number of eggs that the insect is able to produce varies with the number of ovarioles, and the rate at which eggs develop is also influenced by ovariole design. Female insects are able to make eggs, receive and store sperm, manipulate sperm from different males, and lay eggs. Accessory glands or glandular parts of the oviducts produce a variety of substances for sperm maintenance, transport and fertilization, as well as for protection of eggs. They can produce glue and protective substances for coating eggs or tough coverings for a batch of eggs called oothecae. Spermathecae are tubes or sacs in which sperm can be stored between the time of mating and the time an egg is fertilized.[59]:880
For males, the reproductive system is the testis, suspended in the body cavity by tracheae and the fat body. Most male insects have a pair of testes, inside of which are sperm tubes or follicles that are enclosed within a membranous sac. The follicles connect to the vas deferens by the vas efferens, and the two tubular vasa deferentia connect to a median ejaculatory duct that leads to the outside. A portion of the vas deferens is often enlarged to form the seminal vesicle, which stores the sperm before they are discharged into the female. The seminal vesicles have glandular linings that secrete nutrients for nourishment and maintenance of the sperm. The ejaculatory duct is derived from an invagination of the epidermal cells during development and, as a result, has a cuticular lining. The terminal portion of the ejaculatory duct may be sclerotized to form the intromittent organ, the aedeagus. The remainder of the male reproductive system is derived from embryonic mesoderm, except for the germ cells, or spermatogonia, which descend from the primordial pole cells very early during embryogenesis.[59]:885
Insect respiration is accomplished without lungs. Instead, the insect respiratory system uses a system of internal tubes and sacs through which gases either diffuse or are actively pumped, delivering oxygen directly to tissues that need it via their trachea (element 8 in numbered diagram). In most insects, air is taken in through openings on the sides of the abdomen and thorax called spiracles.
The respiratory system is an important factor that limits the size of insects. As insects get larger, this type of oxygen transport is less efficient and thus the heaviest insect currently weighs less than 100 g. However, with increased atmospheric oxygen levels, as were present in the late Paleozoic, larger insects were possible, such as dragonflies with wingspans of more than 60 cm (2 ft).[70]
There are many different patterns of gas exchange demonstrated by different groups of insects. Gas exchange patterns in insects can range from continuous and diffusive ventilation, to discontinuous gas exchange.[34]:65–68 During continuous gas exchange, oxygen is taken in and carbon dioxide is released in a continuous cycle. In discontinuous gas exchange, however, the insect takes in oxygen while it is active and small amounts of carbon dioxide are released when the insect is at rest.[71] Diffusive ventilation is simply a form of continuous gas exchange that occurs by diffusion rather than physically taking in the oxygen. Some species of insect that are submerged also have adaptations to aid in respiration. As larvae, many insects have gills that can extract oxygen dissolved in water, while others need to rise to the water surface to replenish air supplies, which may be held or trapped in special structures.[72][73]
Because oxygen is delivered directly to tissues via tracheoles, the circulatory system is not used to carry oxygen, and is therefore greatly reduced. The insect circulatory system is open; it has no veins or arteries, and instead consists of little more than a single, perforated dorsal tube that pulses peristaltically. This dorsal blood vessel (element 14) is divided into two sections: the heart and aorta. The dorsal blood vessel circulates the hemolymph, arthropods' fluid analog of blood, from the rear of the body cavity forward.[34]:61–65[74] Hemolymph is composed of plasma in which hemocytes are suspended. Nutrients, hormones, wastes, and other substances are transported throughout the insect body in the hemolymph. Hemocytes include many types of cells that are important for immune responses, wound healing, and other functions. Hemolymph pressure may be increased by muscle contractions or by swallowing air into the digestive system to aid in moulting.[75] Hemolymph is also a major part of the open circulatory system of other arthropods, such as spiders and crustaceans.[76][77]
The majority of insects hatch from eggs. The fertilization and development takes place inside the egg, enclosed by a shell (chorion) that consists of maternal tissue. In contrast to eggs of other arthropods, most insect eggs are drought resistant. This is because inside the chorion two additional membranes develop from embryonic tissue, the amnion and the serosa. This serosa secretes a cuticle rich in chitin that protects the embryo against desiccation. In Schizophora however the serosa does not develop, but these flies lay their eggs in damp places, such as rotting matter.[78] Some species of insects, like the cockroach Blaptica dubia, as well as juvenile aphids and tsetse flies, are ovoviviparous. The eggs of ovoviviparous animals develop entirely inside the female, and then hatch immediately upon being laid.[7] Some other species, such as those in the genus of cockroaches known as Diploptera, are viviparous, and thus gestate inside the mother and are born alive.[34]:129, 131, 134–135 Some insects, like parasitic wasps, show polyembryony, where a single fertilized egg divides into many and in some cases thousands of separate embryos.[34]:136–137 Insects may be univoltine, bivoltine or multivoltine, i.e. they may have one, two or many broods (generations) in a year.[79]
Other developmental and reproductive variations include haplodiploidy, polymorphism, paedomorphosis or peramorphosis, sexual dimorphism, parthenogenesis and more rarely hermaphroditism.[34]:143 In haplodiploidy, which is a type of sex-determination system, the offspring's sex is determined by the number of sets of chromosomes an individual receives. This system is typical in bees and wasps.[80] Polymorphism is where a species may have different morphs or forms, as in the oblong winged katydid, which has four different varieties: green, pink and yellow or tan. Some insects may retain phenotypes that are normally only seen in juveniles; this is called paedomorphosis. In peramorphosis, an opposite sort of phenomenon, insects take on previously unseen traits after they have matured into adults. Many insects display sexual dimorphism, in which males and females have notably different appearances, such as the moth Orgyia recens as an exemplar of sexual dimorphism in insects.
Some insects use parthenogenesis, a process in which the female can reproduce and give birth without having the eggs fertilized by a male. Many aphids undergo a form of parthenogenesis, called cyclical parthenogenesis, in which they alternate between one or many generations of asexual and sexual reproduction.[81][82] In summer, aphids are generally female and parthenogenetic; in the autumn, males may be produced for sexual reproduction. Other insects that use parthenogenesis include bees, wasps and ants, in which parthenogenesis produces males. Overall, however, most individuals are female and are produced by fertilization. The males are haploid and the females are diploid.[7] More rarely, some insects display hermaphroditism, in which a given individual has both male and female reproductive organs.
Insect life-histories show adaptations to withstand cold and dry conditions. Some temperate region insects are capable of activity during winter, while some others migrate to a warmer climate or go into a state of torpor.[83] Still other insects have evolved mechanisms of diapause that allow eggs or pupae to survive these conditions.[84]
Metamorphosis in insects is the biological process of development all insects must undergo. There are two forms of metamorphosis: incomplete metamorphosis and complete metamorphosis.
Hemimetabolous insects, those with incomplete metamorphosis, change gradually by undergoing a series of molts. An insect molts when it outgrows its exoskeleton, which does not stretch and would otherwise restrict the insect's growth. The molting process begins as the insect's epidermis secretes a new epicuticle inside the old one. After this new epicuticle is secreted, the epidermis releases a mixture of enzymes that digests the endocuticle and thus detaches the old cuticle. When this stage is complete, the insect makes its body swell by taking in a large quantity of water or air, which makes the old cuticle split along predefined weaknesses where the old exocuticle was thinnest.[34]:142[85]
Immature insects that go through incomplete metamorphosis are called nymphs, or in the case of dragonflies and damselflies, naiads. Nymphs are similar in form to the adult except that they lack wings, which do not develop until adulthood. With each molt, nymphs grow larger and become more similar in appearance to adult insects.
Holometabolism, or complete metamorphosis, is where the insect changes in four stages, an egg or embryo, a larva, a pupa and the adult or imago. In these species, an egg hatches to produce a larva, which is generally worm-like in form. This worm-like form can be one of several varieties: eruciform (caterpillar-like), scarabaeiform (grub-like), campodeiform (elongated, flattened and active), elateriform (wireworm-like) or vermiform (maggot-like). The larva grows and eventually becomes a pupa, a stage marked by reduced movement and often sealed within a cocoon. There are three types of pupae: obtect, exarate or coarctate. Obtect pupae are compact, with the legs and other appendages enclosed. Exarate pupae have their legs and other appendages free and extended. Coarctate pupae develop inside the larval skin.[34]:151 Insects undergo considerable change in form during the pupal stage, and emerge as adults. Butterflies are a well-known example of insects that undergo complete metamorphosis, although most insects use this life cycle. Some insects have carried this system further, evolving hypermetamorphosis.
Complete metamorphosis is a trait of the most diverse insect group, the Endopterygota.[34]:143 Endopterygota includes 11 orders, the largest being Diptera (flies), Lepidoptera (butterflies and moths), Hymenoptera (bees, wasps and ants) and Coleoptera (beetles). This form of development is exclusive to insects and not seen in any other arthropods.
Many insects possess very sensitive and specialized organs of perception. Some insects such as bees can perceive ultraviolet wavelengths, or detect polarized light, while the antennae of male moths can detect the pheromones of female moths over distances of many kilometers.[86] The yellow paper wasp (Polistes versicolor) is known for its wagging movements as a form of communication within the colony; it can waggle with a frequency of 10.6±2.1 Hz (n=190). These wagging movements can signal the arrival of new material into the nest and aggression between workers can be used to stimulate others to increase foraging expeditions.[87] There is a pronounced tendency for there to be a trade-off between visual acuity and chemical or tactile acuity, such that most insects with well-developed eyes have reduced or simple antennae, and vice versa. There are a variety of different mechanisms by which insects perceive sound; while the patterns are not universal, insects can generally hear sound if they can produce it. Different insect species can have varying hearing, though most insects can hear only a narrow range of frequencies related to the frequency of the sounds they can produce. Mosquitoes have been found to hear up to 2 kHz, and some grasshoppers can hear up to 50 kHz.[88] Certain predatory and parasitic insects can detect the characteristic sounds made by their prey or hosts, respectively. For instance, some nocturnal moths can perceive the ultrasonic emissions of bats, which helps them avoid predation.[34]:87–94 Insects that feed on blood have special sensory structures that can detect infrared emissions, and use them to home in on their hosts.
Some insects display a rudimentary sense of numbers,[89] such as the solitary wasps that prey upon a single species. The mother wasp lays her eggs in individual cells and provides each egg with a number of live caterpillars on which the young feed when hatched. Some species of wasp always provide five, others twelve, and others as high as twenty-four caterpillars per cell. The number of caterpillars is different among species, but always the same for each sex of larva. The male solitary wasp in the genus Eumenes is smaller than the female, so the mother of one species supplies him with only five caterpillars; the larger female receives ten caterpillars in her cell.
A few insects, such as members of the families Poduridae and Onychiuridae (Collembola), Mycetophilidae (Diptera) and the beetle families Lampyridae, Phengodidae, Elateridae and Staphylinidae are bioluminescent. The most familiar group are the fireflies, beetles of the family Lampyridae. Some species are able to control this light generation to produce flashes. The function varies with some species using them to attract mates, while others use them to lure prey. Cave dwelling larvae of Arachnocampa (Mycetophilidae, fungus gnats) glow to lure small flying insects into sticky strands of silk.[90]
Some fireflies of the genus Photuris mimic the flashing of female Photinus species to attract males of that species, which are then captured and devoured.[91] The colors of emitted light vary from dull blue (Orfelia fultoni, Mycetophilidae) to the familiar greens and the rare reds (Phrixothrix tiemanni, Phengodidae).[92]
Most insects, except some species of cave crickets, are able to perceive light and dark. Many species have acute vision capable of detecting minute movements. The eyes may include simple eyes or ocelli as well as compound eyes of varying sizes. Many species are able to detect light in the infrared, ultraviolet and the visible light wavelengths. Color vision has been demonstrated in many species and phylogenetic analysis suggests that UV-green-blue trichromacy existed from at least the Devonian period between 416 and 359 million years ago.[93]
Insects were the earliest organisms to produce and sense sounds. Insects make sounds mostly by mechanical action of appendages. In grasshoppers and crickets, this is achieved by stridulation. Cicadas make the loudest sounds among the insects by producing and amplifying sounds with special modifications to their body to form tymbals and associated musculature. The African cicada Brevisana brevis has been measured at 106.7 decibels at a distance of 50 cm (20 in).[94] Some insects, such as the Helicoverpa zea moths, hawk moths and Hedylid butterflies, can hear ultrasound and take evasive action when they sense that they have been detected by bats.[95][96] Some moths produce ultrasonic clicks that were once thought to have a role in jamming bat echolocation. The ultrasonic clicks were subsequently found to be produced mostly by unpalatable moths to warn bats, just as warning colorations are used against predators that hunt by sight.[97] Some otherwise palatable moths have evolved to mimic these calls.[98] More recently, the claim that some moths can jam bat sonar has been revisited. Ultrasonic recording and high-speed infrared videography of bat-moth interactions suggest the palatable tiger moth really does defend against attacking big brown bats using ultrasonic clicks that jam bat sonar.[99]
Very low sounds are also produced in various species of Coleoptera, Hymenoptera, Lepidoptera, Mantodea and Neuroptera. These low sounds are simply the sounds made by the insect's movement. Through microscopic stridulatory structures located on the insect's muscles and joints, the normal sounds of the insect moving are amplified and can be used to warn or communicate with other insects. Most sound-making insects also have tympanal organs that can perceive airborne sounds. Some species in Hemiptera, such as the corixids (water boatmen), are known to communicate via underwater sounds.[100] Most insects are also able to sense vibrations transmitted through surfaces.
Communication using surface-borne vibrational signals is more widespread among insects because of size constraints in producing air-borne sounds.[101] Insects cannot effectively produce low-frequency sounds, and high-frequency sounds tend to disperse more in a dense environment (such as foliage), so insects living in such environments communicate primarily using substrate-borne vibrations.[102] The mechanisms of production of vibrational signals are just as diverse as those for producing sound in insects.
Some species use vibrations to communicate with members of the same species, such as to attract mates as in the songs of the shield bug Nezara viridula.[103] Vibrations can also be used to communicate between entirely different species; lycaenid (gossamer-winged butterfly) caterpillars, which are myrmecophilous (living in a mutualistic association with ants), communicate with ants in this way.[104] The Madagascar hissing cockroach has the ability to press air through its spiracles to make a hissing noise as a sign of aggression;[105] the death's-head hawkmoth makes a squeaking noise by forcing air out of its pharynx when agitated, which may also reduce aggressive worker honey bee behavior when the two are in close proximity.[106]
Chemical communications in animals rely on a variety of aspects including taste and smell. Chemoreception is the physiological response of a sense organ (i.e. taste or smell) to a chemical stimulus where the chemicals act as signals to regulate the state or activity of a cell. A semiochemical is a message-carrying chemical that is meant to attract, repel, and convey information. Types of semiochemicals include pheromones and kairomones. One example is the butterfly Phengaris arion which uses chemical signals as a form of mimicry to aid in predation.[107]
In addition to the use of sound for communication, a wide range of insects have evolved chemical means for communication. These chemicals, termed semiochemicals, are often derived from plant metabolites and include those meant to attract, repel and provide other kinds of information. Pheromones, a type of semiochemical, are used for attracting mates of the opposite sex, for aggregating conspecific individuals of both sexes, for deterring other individuals from approaching, to mark a trail, and to trigger aggression in nearby individuals. Allomones benefit their producer by the effect they have upon the receiver. Kairomones benefit their receiver instead of their producer. Synomones benefit the producer and the receiver. While some chemicals are targeted at individuals of the same species, others are used for communication across species. The use of scents is especially well known to have developed in social insects.[34]:96–105
Social insects, such as termites, ants and many bees and wasps, are the most familiar species of eusocial animals.[108] They live together in large well-organized colonies that may be so tightly integrated and genetically similar that the colonies of some species are sometimes considered superorganisms. It is sometimes argued that the various species of honey bee are the only invertebrates (and indeed one of the few non-human groups) to have evolved a system of abstract symbolic communication where a behavior is used to represent and convey specific information about something in the environment. In this communication system, called dance language, the angle at which a bee dances represents a direction relative to the sun, and the length of the dance represents the distance to be flown.[34]:309–311 Though perhaps not as advanced as honey bees, bumblebees also potentially have some social communication behaviors. Bombus terrestris, for example, exhibit a faster learning curve for visiting unfamiliar, yet rewarding flowers, when they can see a conspecific foraging on the same species.[109]
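As an illustration of how such a dance conveys a direction and a distance, here is a minimal decoding sketch; the function name and the metres-per-second calibration constant are hypothetical, since real calibrations vary between colonies and studies:

    # Hypothetical sketch of reading a waggle dance as a bearing and distance.
    def decode_waggle_dance(angle_from_vertical_deg, waggle_duration_s,
                            sun_azimuth_deg, metres_per_second=750):
        # The dance angle relative to vertical on the comb stands for the angle
        # relative to the sun's azimuth; waggle-run duration scales with distance.
        # metres_per_second is an assumed, illustrative calibration constant.
        bearing = (sun_azimuth_deg + angle_from_vertical_deg) % 360
        distance_m = waggle_duration_s * metres_per_second
        return bearing, distance_m

    # A dance 40 degrees clockwise of vertical lasting 0.8 s, with the sun at azimuth 120 degrees:
    bearing, distance = decode_waggle_dance(40, 0.8, 120)
    print(f"Forage on a bearing of {bearing:.0f} degrees, roughly {distance:.0f} m away")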
Only insects that live in nests or colonies demonstrate any true capacity for fine-scale spatial orientation or homing. This can allow an insect to return unerringly to a single hole a few millimeters in diameter among thousands of apparently identical holes clustered together, after a trip of up to several kilometers' distance. In a phenomenon known as philopatry, insects that hibernate have shown the ability to recall a specific location up to a year after last viewing the area of interest.[110] A few insects seasonally migrate large distances between different geographic regions (e.g., the overwintering areas of the monarch butterfly).[34]:14
The eusocial insects build nests, guard eggs, and provide food for offspring full-time (see Eusociality).
Most insects, however, lead short lives as adults, and rarely interact with one another except to mate or compete for mates. A small number exhibit some form of parental care, where they will at least guard their eggs, and sometimes continue guarding their offspring until adulthood, and possibly even feeding them. Another simple form of parental care is to construct a nest (a burrow or an actual construction, either of which may be simple or complex), store provisions in it, and lay an egg upon those provisions. The adult does not contact the growing offspring, but it nonetheless does provide food. This sort of care is typical for most species of bees and various types of wasps.[111]
Insects are the only group of invertebrates to have developed flight. The evolution of insect wings has been a subject of debate. Some entomologists suggest that the wings derive from paranotal lobes, extensions of the insect's exoskeleton called the nota; this is called the paranotal theory. Other theories are based on a pleural origin. These theories include suggestions that wings originated from modified gills, spiracular flaps or from an appendage of the epicoxa. The epicoxal theory suggests the insect wings are modified epicoxal exites, a modified appendage at the base of the legs or coxa.[112] In the Carboniferous age, some of the Meganeura dragonflies had as much as a 50 cm (20 in) wide wingspan. The appearance of gigantic insects has been found to be consistent with high atmospheric oxygen. The respiratory system of insects constrains their size; however, the high oxygen content of the atmosphere allowed larger sizes.[113] The largest flying insects today are much smaller, with the largest wingspan belonging to the white witch moth (Thysania agrippina), at approximately 28 cm (11 in).[114]
Insect flight has been a topic of great interest in aerodynamics due partly to the inability of steady-state theories to explain the lift generated by the tiny wings of insects. But insect wings are in motion, with flapping and vibrations, resulting in churning and eddies, and the misconception that physics says "bumblebees can't fly" persisted throughout most of the twentieth century.
Unlike birds, many small insects are swept along by the prevailing winds[115] although many of the larger insects are known to make migrations. Aphids are known to be transported long distances by low-level jet streams.[116] As such, fine line patterns associated with converging winds within weather radar imagery, like the WSR-88D radar network, often represent large groups of insects.[117]
Many adult insects use six legs for walking and have adopted a tripedal gait. The tripedal gait allows for rapid walking while always having a stable stance and has been studied extensively in cockroaches and ants. The legs are used in alternate triangles touching the ground. For the first step, the middle right leg and the front and rear left legs are in contact with the ground and move the insect forward, while the front and rear right legs and the middle left leg are lifted and moved forward to a new position. When they touch the ground to form a new stable triangle, the other legs can be lifted and brought forward in turn, and so on.[118] The purest form of the tripedal gait is seen in insects moving at high speeds. However, this type of locomotion is not rigid and insects can adopt a variety of gaits. For example, when moving slowly, turning, avoiding obstacles, climbing or walking on slippery surfaces, four (tetrapod) or more feet (wave-gait[119]) may be touching the ground. Insects can also adapt their gait to cope with the loss of one or more limbs.
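The alternating-tripod pattern described above can be sketched as a simple two-state cycle; this is an illustrative model only, not the controller of any particular insect or robot:

    # Minimal sketch of the alternating tripod gait. Legs are labelled by side
    # (L/R) and position (1 = front, 2 = middle, 3 = rear); at each step one
    # tripod is in stance (on the ground) while the other swings forward.
    TRIPOD_A = ("R2", "L1", "L3")   # right middle + left front + left rear
    TRIPOD_B = ("L2", "R1", "R3")   # left middle + right front + right rear

    def tripod_gait(steps):
        """Yield (stance_legs, swing_legs) for each step of the gait cycle."""
        tripods = (TRIPOD_A, TRIPOD_B)
        for i in range(steps):
            yield tripods[i % 2], tripods[(i + 1) % 2]

    for stance, swing in tripod_gait(4):
        print(f"stance: {stance}  swing: {swing}")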
Cockroaches are among the fastest insect runners and, at full speed, adopt a bipedal run to reach a high velocity in proportion to their body size. As cockroaches move very quickly, they need to be video recorded at several hundred frames per second to reveal their gait. More sedate locomotion is seen in the stick insects or walking sticks (Phasmatodea). A few insects have evolved to walk on the surface of the water, especially members of the Gerridae family, commonly known as water striders. A few species of ocean-skaters in the genus Halobates even live on the surface of open oceans, a habitat that has few insect species.[120]
Insect walking is of particular interest as an alternative form of locomotion in robots. The study of insects and bipeds has a significant impact on possible robotic methods of transport. This may allow new robots to be designed that can traverse terrain that robots with wheels may be unable to handle.[118]
A large number of insects live either part or the whole of their lives underwater. In many of the more primitive orders of insect, the immature stages are spent in an aquatic environment. Some groups of insects, like certain water beetles, have aquatic adults as well.[72]
Many of these species have adaptations to help in under-water locomotion. Water beetles and water bugs have legs adapted into paddle-like structures. Dragonfly naiads use jet propulsion, forcibly expelling water out of their rectal chamber.[121] Some species like the water striders are capable of walking on the surface of water. They can do this because their claws are not at the tips of the legs as in most insects, but recessed in a special groove further up the leg; this prevents the claws from piercing the water's surface film.[72] Other insects such as the rove beetle Stenus are known to emit pygidial gland secretions that reduce surface tension, making it possible for them to move on the surface of water by Marangoni propulsion (also known by the German term Entspannungsschwimmen).[122][123]
Insect ecology is the scientific study of how insects, individually or as a community, interact with the surrounding environment or ecosystem.[124]:3 Insects play one of the most important roles in their ecosystems, which includes many roles, such as soil turning and aeration, dung burial, pest control, pollination and wildlife nutrition. An example is the beetles, which are scavengers that feed on dead animals and fallen trees and thereby recycle biological materials into forms found useful by other organisms.[125] These insects, and others, are responsible for much of the process by which topsoil is created.[34]:3, 218–228
Insects are mostly soft bodied, fragile and almost defenseless compared to other, larger lifeforms. The immature stages are small, move slowly or are immobile, and so all stages are exposed to predation and parasitism. Insects therefore have a variety of defense strategies to avoid being attacked by predators or parasitoids. These include camouflage, mimicry, toxicity and active defense.[127]
Camouflage is an important defense strategy, which involves the use of coloration or shape to blend into the surrounding environment.[128] This sort of protective coloration is common and widespread among beetle families, especially those that feed on wood or vegetation, such as many of the leaf beetles (family Chrysomelidae) or weevils. In some of these species, sculpturing or various colored scales or hairs cause the beetle to resemble bird dung or other inedible objects. Many of those that live in sandy environments blend in with the coloration of the substrate.[127] Most phasmids are known for effectively replicating the forms of sticks and leaves, and the bodies of some species (such as O. macklotti and Palophus centaurus) are covered in mossy or lichenous outgrowths that supplement their disguise. Very rarely, a species may have the ability to change color as their surroundings shift (Bostra scabrinota). In a further behavioral adaptation to supplement crypsis, a number of species have been noted to perform a rocking motion where the body is swayed from side to side that is thought to reflect the movement of leaves or twigs swaying in the breeze. Another method by which stick insects avoid predation and resemble twigs is by feigning death (catalepsy), where the insect enters a motionless state that can be maintained for a long period. The nocturnal feeding habits of adults also aids Phasmatodea in remaining concealed from predators.[129]
Another defense that often uses color or shape to deceive potential enemies is mimicry. A number of longhorn beetles (family Cerambycidae) bear a striking resemblance to wasps, which helps them avoid predation even though the beetles are in fact harmless.[127] Batesian and Müllerian mimicry complexes are commonly found in Lepidoptera. Genetic polymorphism and natural selection give rise to otherwise edible species (the mimic) gaining a survival advantage by resembling inedible species (the model). Such a mimicry complex is referred to as Batesian. One of the most famous examples, where the viceroy butterfly was long believed to be a Batesian mimic of the inedible monarch, was later disproven, as the viceroy is more toxic than the monarch, and this resemblance is now considered to be a case of Müllerian mimicry.[126] In Müllerian mimicry, inedible species, usually within a taxonomic order, find it advantageous to resemble each other so as to reduce the sampling rate by predators who need to learn about the insects' inedibility. Taxa from the toxic genus Heliconius form one of the most well known Müllerian complexes.[130]
Chemical defense is another important defense found among species of Coleoptera and Lepidoptera, usually being advertised by bright colors, such as the monarch butterfly. They obtain their toxicity by sequestering the chemicals from the plants they eat into their own tissues. Some Lepidoptera manufacture their own toxins. Predators that eat poisonous butterflies and moths may become sick and vomit violently, learning not to eat those types of species; this is actually the basis of Müllerian mimicry. A predator who has previously eaten a poisonous lepidopteran may avoid other species with similar markings in the future, thus saving many other species as well.[131] Some ground beetles of the family Carabidae can spray chemicals from their abdomen with great accuracy, to repel predators.[127]
Pollination is the process by which pollen is transferred in the reproduction of plants, thereby enabling fertilisation and sexual reproduction. Most flowering plants require an animal to do the transportation. While other animals are included as pollinators, the majority of pollination is done by insects.[132] Because insects usually receive a benefit from pollination in the form of energy-rich nectar, it is a grand example of mutualism. The various flower traits (and combinations thereof) that differentially attract one type of pollinator or another are known as pollination syndromes. These arose through complex plant-animal adaptations. Pollinators find flowers through bright colorations, including ultraviolet, and attractant pheromones. The study of pollination by insects is known as anthecology.
Many insects are parasites of other insects, such as the parasitoid wasps. These insects are known as entomophagous parasites. They can be beneficial because they devastate pests that can destroy crops and other resources. Many insects have a parasitic relationship with humans, such as the mosquito. These insects are known to spread diseases such as malaria and yellow fever and, because of this, mosquitoes indirectly cause more human deaths than any other animal.
Many insects are considered pests by humans. Insects commonly regarded as pests include those that are parasitic (e.g. lice, bed bugs), transmit diseases (mosquitoes, flies), damage structures (termites), or destroy agricultural goods (locusts, weevils). Many entomologists are involved in various forms of pest control, as in research for companies to produce insecticides, but increasingly rely on methods of biological pest control, or biocontrol. Biocontrol uses one organism to reduce the population density of another organism—the pest—and is considered a key element of integrated pest management.[133][134]
Despite the large amount of effort focused at controlling insects, human attempts to kill pests with insecticides can backfire. If used carelessly, the poison can kill all kinds of organisms in the area, including insects' natural predators, such as birds, mice and other insectivores. The effects of DDT's use exemplifies how some insecticides can threaten wildlife beyond intended populations of pest insects.[135][136]
Although pest insects attract the most attention, many insects are beneficial to the environment and to humans. Some insects, like wasps, bees, butterflies and ants, pollinate flowering plants. Pollination is a mutualistic relationship between plants and insects. As insects gather nectar from different plants of the same species, they also spread pollen from plants on which they have previously fed. This greatly increases plants' ability to cross-pollinate, which maintains and possibly even improves their evolutionary fitness. This ultimately affects humans since ensuring healthy crops is critical to agriculture. As well as pollination, ants help with the seed distribution of plants. This helps to spread the plants, which increases plant diversity. This leads to an overall better environment.[137] A serious environmental problem is the decline of populations of pollinator insects, and a number of species of insects are now cultured primarily for pollination management in order to have sufficient pollinators in the field, orchard or greenhouse at bloom time.[138]:240–243 Another solution, as shown in Delaware, has been to raise native plants to help support native pollinators like L. vierecki.[139] Insects also produce useful substances such as honey, wax, lacquer and silk. Honey bees have been cultured by humans for thousands of years for honey, although contracting for crop pollination is becoming more significant for beekeepers. The silkworm has greatly affected human history, as silk-driven trade established relationships between China and the rest of the world.
Insectivorous insects, or insects that feed on other insects, are beneficial to humans if they eat insects that could cause damage to agriculture and human structures. For example, aphids feed on crops and cause problems for farmers, but ladybugs feed on aphids, and can be used as a means to significantly reduce pest aphid populations. While birds are perhaps more visible predators of insects, insects themselves account for the vast majority of insect consumption. Ants also help control animal populations by consuming small vertebrates.[140] Without predators to keep them in check, insects can undergo almost unstoppable population explosions.[34]:328–348[34]:400[141][142]
Insects are also used in medicine; for example, fly larvae (maggots) were formerly used to treat wounds to prevent or stop gangrene, as they would only consume dead flesh. This treatment is finding modern usage in some hospitals. Recently, insects have also gained attention as potential sources of drugs and other medicinal substances.[143] Adult insects, such as crickets, and insect larvae of various kinds are also commonly used as fishing bait.[144]
Insects play important roles in biological research. For example, because of its small size, short generation time and high fecundity, the common fruit fly Drosophila melanogaster is a model organism for studies in the genetics of higher eukaryotes. D. melanogaster has been an essential part of studies into principles like genetic linkage, interactions between genes, chromosomal genetics, development, behavior and evolution. Because genetic systems are well conserved among eukaryotes, understanding basic cellular processes like DNA replication or transcription in fruit flies can help to understand those processes in other eukaryotes, including humans.[145] The genome of D. melanogaster was sequenced in 2000, reflecting the organism's important role in biological research. About 70% of the fly genome was found to be similar to the human genome, consistent with their shared evolutionary ancestry.[146]
In some cultures, insects, especially deep-fried cicadas, are considered to be delicacies, whereas in other places they form part of the normal diet. Insects have a high protein content for their mass, and some authors suggest their potential as a major source of protein in human nutrition.[34]:10–13 In most first-world countries, however, entomophagy (the eating of insects) is taboo.[147]
Since it is impossible to entirely eliminate pest insects from the human food chain, insects are inadvertently present in many foods, especially grains. Food safety laws in many countries do not prohibit insect parts in food, but rather limit their quantity. According to cultural materialist anthropologist Marvin Harris, the eating of insects is taboo in cultures that have other protein sources such as fish or livestock.
Due to the abundance of insects and worldwide concern about food shortages, the Food and Agriculture Organization of the United Nations considers that the world may, in the future, have to regard insects as a staple food. Insects are noted for their nutrients, having a high content of protein, minerals and fats, and are eaten by one-third of the global population.[148]
Several insect species such as the black soldier fly or the housefly in their maggot forms, as well as beetle larvae such as mealworms can be processed and used as feed for farmed animals such as chicken, fish and pigs.[149]
Insect larvae (e.g. black soldier fly larvae) can provide protein, grease, and chitin. The grease is usable in the pharmaceutical industry (cosmetics,[150] surfactants for shower gel), thereby replacing other vegetable oils such as palm oil.[151]
Also, insect cooking oil, insect butter and fatty alcohols can be made from such insects as the superworm (Zophobas morio).[152][153]
Many species of insects are sold and kept as pets.
Scarab beetles held religious and cultural symbolism in ancient Egypt, Greece and some shamanistic Old World cultures. The ancient Chinese regarded cicadas as symbols of rebirth or immortality. In Mesopotamian literature, the epic poem of Gilgamesh has allusions to Odonata that signify the impossibility of immortality. Among the Aborigines of Australia of the Arrernte language groups, honey ants and witchetty grubs served as personal clan totems. In the case of the San bushmen of the Kalahari, it is the praying mantis that holds much cultural significance, including creation and zen-like patience in waiting.[34]:9
en/2741.html.txt
ADDED
@@ -0,0 +1,306 @@
Insects or Insecta (from Latin insectum) are hexapod invertebrates and the largest group within the arthropod phylum. Definitions and circumscriptions vary; usually, insects comprise a class within the Arthropoda. As used here, the term Insecta is synonymous with Ectognatha. Insects have a chitinous exoskeleton, a three-part body (head, thorax and abdomen), three pairs of jointed legs, compound eyes and one pair of antennae. Insects are the most diverse group of animals; they include more than a million described species and represent more than half of all known living organisms.[2][3] The total number of extant species is estimated at between six and ten million;[2][4][5] potentially over 90% of the animal life forms on Earth are insects.[5][6] Insects may be found in nearly all environments, although only a small number of species reside in the oceans, which are dominated by another arthropod group, crustaceans.
Nearly all insects hatch from eggs. Insect growth is constrained by the inelastic exoskeleton and development involves a series of molts. The immature stages often differ from the adults in structure, habit and habitat, and can include a passive pupal stage in those groups that undergo four-stage metamorphosis. Insects that undergo three-stage metamorphosis lack a pupal stage and adults develop through a series of nymphal stages.[7] The higher level relationship of the insects is unclear. Fossilized insects of enormous size have been found from the Paleozoic Era, including giant dragonflies with wingspans of 55 to 70 cm (22 to 28 in). The most diverse insect groups appear to have coevolved with flowering plants.
Adult insects typically move about by walking, flying, or sometimes swimming. As it allows for rapid yet stable movement, many insects adopt a tripedal gait in which they walk with their legs touching the ground in alternating triangles, composed of the front and rear on one side with the middle on the other side. Insects are the only invertebrates to have evolved flight, and all flying insects derive from one common ancestor. Many insects spend at least part of their lives under water, with larval adaptations that include gills, and some adult insects are aquatic and have adaptations for swimming. Some species, such as water striders, are capable of walking on the surface of water. Insects are mostly solitary, but some, such as certain bees, ants and termites, are social and live in large, well-organized colonies. Some insects, such as earwigs, show maternal care, guarding their eggs and young. Insects can communicate with each other in a variety of ways. Male moths can sense the pheromones of female moths over great distances. Other species communicate with sounds: crickets stridulate, or rub their wings together, to attract a mate and repel other males. Lampyrid beetles communicate with light.
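As an illustration of the alternating tripod gait described above, the following short Python sketch enumerates which legs are on the ground in each phase; the leg labels and step logic are illustrative assumptions, not taken from the source.

# Illustrative sketch of the alternating tripod gait: in each phase the insect
# stands on the front and rear legs of one side plus the middle leg of the other.
TRIPOD_A = {"left front", "right middle", "left rear"}
TRIPOD_B = {"right front", "left middle", "right rear"}

def stance_legs(step):
    """Return the set of legs touching the ground during a given step."""
    return TRIPOD_A if step % 2 == 0 else TRIPOD_B

for step in range(4):
    print(step, sorted(stance_legs(step)))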
Humans regard certain insects as pests, and attempt to control them using insecticides, and a host of other techniques. Some insects damage crops by feeding on sap, leaves, fruits, or wood. Some species are parasitic, and may vector diseases. Some insects perform complex ecological roles; blow-flies, for example, help consume carrion but also spread diseases. Insect pollinators are essential to the life cycle of many flowering plant species on which most organisms, including humans, are at least partly dependent; without them, the terrestrial portion of the biosphere would be devastated.[8] Many insects are considered ecologically beneficial as predators and a few provide direct economic benefit. Silkworms produce silk and honey bees produce honey and both have been domesticated by humans. Insects are consumed as food in 80% of the world's nations, by people in roughly 3000 ethnic groups.[9][10] Human activities also have effects on insect biodiversity.
The word "insect" comes from the Latin word insectum, meaning "with a notched or divided body", or literally "cut into", from the neuter singular perfect passive participle of insectare, "to cut into, to cut up", from in- "into" and secare "to cut";[11] because insects appear "cut into" three sections. A calque of Greek ἔντομον [éntomon], "cut into sections", Pliny the Elder introduced the Latin designation as a loan-translation of the Greek word ἔντομος (éntomos) or "insect" (as in entomology), which was Aristotle's term for this class of life, also in reference to their "notched" bodies. "Insect" first appears documented in English in 1601 in Holland's translation of Pliny. Translations of Aristotle's term also form the usual word for "insect" in Welsh (trychfil, from trychu "to cut" and mil, "animal"), Serbo-Croatian (zareznik, from rezati, "to cut"), Russian (насекомое nasekomoje, from seč'/-sekat', "to cut"), etc.[11][12]
The precise definition of the taxon Insecta and the equivalent English name "insect" varies; three alternative definitions are shown in the table.
In the broadest circumscription, Insecta sensu lato consists of all hexapods.[13][14] Traditionally, insects defined in this way were divided into "Apterygota" (the first five groups in the table)—the wingless insects—and Pterygota—the winged and secondarily wingless insects.[15] However, modern phylogenetic studies have shown that "Apterygota" is not monophyletic,[16] and so does not form a good taxon. A narrower circumscription restricts insects to those hexapods with external mouthparts, and comprises only the last three groups in the table. In this sense, Insecta sensu stricto is equivalent to Ectognatha.[13][16] In the narrowest circumscription, insects are restricted to hexapods that are either winged or descended from winged ancestors. Insecta sensu strictissimo is then equivalent to Pterygota.[17] For the purposes of this article, the middle definition is used; insects consist of two wingless taxa, Archaeognatha (jumping bristletails) and Zygentoma (silverfish), plus the winged or secondarily wingless Pterygota.
Hexapoda (Insecta, Collembola, Diplura, Protura)
Crustacea (crabs, shrimp, isopods, etc.)
Pauropoda
Diplopoda (millipedes)
Chilopoda (centipedes)
Symphyla
Arachnida (spiders, scorpions, mites, ticks, etc.)
Eurypterida (sea scorpions: extinct)
Xiphosura (horseshoe crabs)
Pycnogonida (sea spiders)
†Trilobites (extinct)

A phylogenetic tree of the arthropods and related groups[18]
The evolutionary relationship of insects to other animal groups remains unclear.
Although traditionally grouped with millipedes and centipedes—possibly on the basis of convergent adaptations to terrestrialisation[19]—evidence has emerged favoring closer evolutionary ties with crustaceans. In the Pancrustacea theory, insects, together with Entognatha, Remipedia, and Cephalocarida, make up a natural clade labeled Miracrustacea.[20]
Insects form a single clade, closely related to crustaceans and myriapods.[21]
Other terrestrial arthropods, such as centipedes, millipedes, scorpions, spiders, woodlice, mites, and ticks are sometimes confused with insects since their body plans can appear similar, sharing (as do all arthropods) a jointed exoskeleton. However, upon closer examination, their features differ significantly; most noticeably, they do not have the six-legged characteristic of adult insects.[22]
The higher-level phylogeny of the arthropods continues to be a matter of debate and research. In 2008, researchers at Tufts University uncovered what they believe is the world's oldest known full-body impression of a primitive flying insect, a 300-million-year-old specimen from the Carboniferous period.[23] The oldest definitive insect fossil is the Devonian Rhyniognatha hirsti, from the 396-million-year-old Rhynie chert. It may have superficially resembled a modern-day silverfish insect. This species already possessed dicondylic mandibles (two articulations in the mandible), a feature associated with winged insects, suggesting that wings may already have evolved at this time. Thus, the first insects probably appeared earlier, in the Silurian period.[1][24]
Four super radiations of insects have occurred: beetles (from about 300 million years ago), flies (from about 250 million years ago), moths and wasps (both from about 150 million years ago).[25] These four groups account for the majority of described species. The flies and moths along with the fleas evolved from the Mecoptera.
The origins of insect flight remain obscure, since the earliest winged insects currently known appear to have been capable fliers. Some extinct insects had an additional pair of winglets attaching to the first segment of the thorax, for a total of three pairs. As of 2009, no evidence suggests the insects were a particularly successful group of animals before they evolved to have wings.[26]
Late Carboniferous and Early Permian insect orders include both extant groups, their stem groups,[27] and a number of Paleozoic groups, now extinct. During this era, some giant dragonfly-like forms reached wingspans of 55 to 70 cm (22 to 28 in), making them far larger than any living insect. This gigantism may have been due to higher atmospheric oxygen levels that allowed increased respiratory efficiency relative to today. The lack of flying vertebrates could have been another factor. Most extinct orders of insects developed during the Permian period that began around 270 million years ago. Many of the early groups became extinct during the Permian-Triassic extinction event, the largest mass extinction in the history of the Earth, around 252 million years ago.[28]
The remarkably successful Hymenoptera appeared as long as 146 million years ago in the Cretaceous period, but achieved their wide diversity more recently in the Cenozoic era, which began 66 million years ago. A number of highly successful insect groups evolved in conjunction with flowering plants, a powerful illustration of coevolution.[29]
Many modern insect genera developed during the Cenozoic. Insects from this period on are often found preserved in amber, often in perfect condition. The body plan, or morphology, of such specimens is thus easily compared with modern species. The study of fossilized insects is called paleoentomology.
Archaeognatha (hump-backed/jumping bristletails)
Zygentoma (silverfish, firebrats, fishmoths)
†Carbotriplurida
†Bojophlebiidae
Odonatoptera (dragonflies)
Panephemeroptera (mayflies)
Zoraptera (angel insects)
Dermaptera (earwigs)
Plecoptera (stoneflies)
Orthoptera (grasshoppers, crickets, katydids)
Mantodea (praying mantises)
Blattodea (cockroaches and termites)
Grylloblattodea (ice-crawlers)
Mantophasmatodea (gladiators)
Phasmatodea (stick insects)
Embioptera (web spinners)
Psocodea (booklice, barklice and sucking lice)
Hemiptera (true bugs)
Thysanoptera (thrips)
Hymenoptera (sawflies, wasps, bees, ants)
Strepsiptera
Coleoptera (beetles)
Rhaphidioptera
Neuroptera (lacewings)
Megaloptera
Lepidoptera (butterflies and moths)
Trichoptera (caddisflies)
Diptera (true flies)
Nannomecoptera
Mecoptera (scorpionflies)
Neomecoptera (winter scorpionflies)
Siphonaptera (fleas)

A cladogram based on the works of Sroka, Staniczek & Bechly 2014,[30] Prokop et al. 2017[31] & Wipfler et al. 2019.[32]
Cladogram of living insect groups,[33] with numbers of species in each group.[5] The Apterygota, Palaeoptera, and Exopterygota are possibly paraphyletic groups.
Traditional morphology-based or appearance-based systematics have usually given the Hexapoda the rank of superclass,[34]:180 and identified four groups within it: insects (Ectognatha), springtails (Collembola), Protura, and Diplura, the latter three being grouped together as the Entognatha on the basis of internalized mouth parts. Supraordinal relationships have undergone numerous changes with the advent of methods based on evolutionary history and genetic data. A recent theory is that the Hexapoda are polyphyletic (where the last common ancestor was not a member of the group), with the entognath classes having separate evolutionary histories from the Insecta.[35] Many of the traditional appearance-based taxa have been shown to be paraphyletic, so rather than using ranks like subclass, superorder, and infraorder, it has proved better to use monophyletic groupings (in which the last common ancestor is a member of the group). The following represents the best-supported monophyletic groupings for the Insecta.
Insects can be divided into two groups historically treated as subclasses: wingless insects, known as Apterygota, and winged insects, known as Pterygota. The Apterygota consist of the primitively wingless order of the silverfish (Zygentoma). Archaeognatha make up the Monocondylia based on the shape of their mandibles, while Zygentoma and Pterygota are grouped together as Dicondylia. The Zygentoma themselves possibly are not monophyletic, with the family Lepidotrichidae being a sister group to the Dicondylia (Pterygota and the remaining Zygentoma).[36][37]
Paleoptera and Neoptera are the winged orders of insects differentiated by the presence of hardened body parts called sclerites, and in the Neoptera, muscles that allow their wings to fold flatly over the abdomen. Neoptera can further be divided into incomplete metamorphosis-based (Polyneoptera and Paraneoptera) and complete metamorphosis-based groups. It has proved difficult to clarify the relationships between the orders in Polyneoptera because of constant new findings calling for revision of the taxa. For example, the Paraneoptera have turned out to be more closely related to the Endopterygota than to the rest of the Exopterygota. The recent molecular finding that the traditional louse orders Mallophaga and Anoplura are derived from within Psocoptera has led to the new taxon Psocodea.[38] Phasmatodea and Embiidina have been suggested to form the Eukinolabia.[39] Mantodea, Blattodea, and Isoptera are thought to form a monophyletic group termed Dictyoptera.[40]
The Exopterygota likely are paraphyletic in regard to the Endopterygota. Matters that have incurred controversy include Strepsiptera and Diptera grouped together as Halteria based on a reduction of one of the wing pairs—a position not well-supported in the entomological community.[41] The Neuropterida are often lumped or split on the whims of the taxonomist. Fleas are now thought to be closely related to boreid mecopterans.[42] Many questions remain in the basal relationships among endopterygote orders, particularly the Hymenoptera.
The study of the classification or taxonomy of any insect is called systematic entomology. If one works with a more specific order or even a family, the term may also be made specific to that order or family, for example systematic dipterology.
Insects are prey for a variety of organisms, including terrestrial vertebrates. The earliest vertebrates on land existed 400 million years ago and were large amphibious piscivores. Through gradual evolutionary change, insectivory was the next diet type to evolve.[43]
Insects were among the earliest terrestrial herbivores and acted as major selection agents on plants.[29] Plants evolved chemical defenses against this herbivory and the insects, in turn, evolved mechanisms to deal with plant toxins. Many insects make use of these toxins to protect themselves from their predators. Such insects often advertise their toxicity using warning colors.[44] This successful evolutionary pattern has also been used by mimics. Over time, this has led to complex groups of coevolved species. Conversely, some interactions between plants and insects, like pollination, are beneficial to both organisms. Coevolution has led to the development of very specific mutualisms in such systems.
Estimates on the total number of insect species, or those within specific orders, often vary considerably. Globally, averages of these estimates suggest there are around 1.5 million beetle species and 5.5 million insect species, with about 1 million insect species currently found and described.[45]
Between 950,000–1,000,000 of all described species are insects, so over 50% of all described eukaryotes (1.8 million) are insects (see illustration). With only 950,000 known non-insects, if the actual number of insects is 5.5 million, they may represent over 80% of the total. As only about 20,000 new species of all organisms are described each year, most insect species may remain undescribed, unless the rate of species descriptions greatly increases. Of the 24 orders of insects, four dominate in terms of numbers of described species; at least 670,000 identified species belong to Coleoptera, Diptera, Hymenoptera or Lepidoptera.
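The percentages quoted above follow directly from the figures given in this paragraph; a quick arithmetic check (a minimal sketch using those numbers):

described_insects = 1_000_000        # described insect species (upper figure above)
described_eukaryotes = 1_800_000     # all described eukaryote species
known_non_insects = 950_000          # described non-insect species
estimated_insects = 5_500_000        # mid-range estimate of actual insect species

print(described_insects / described_eukaryotes)                      # ~0.56, i.e. over 50% of described eukaryotes
print(estimated_insects / (estimated_insects + known_non_insects))   # ~0.85, i.e. over 80% of the estimated total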
As of 2017, at least 66 insect species extinctions had been recorded in the previous 500 years, which generally occurred on oceanic islands.[47] Declines in insect abundance have been attributed to artificial lighting,[48] land use changes such as urbanization or agricultural use,[49][50] pesticide use,[51] and invasive species.[52] Studies summarized in a 2019 review suggested a large proportion of insect species are threatened with extinction in the 21st century.[53] Though ecologist Manu Sanders notes the 2019 review was biased by mostly excluding data showing increases or stability in insect population, with the studies limited to specific geographic areas and specific groups of species.[54] A larger meta-study published in 2020, analyzing data from 166 long-term surveys, suggested that populations of terrestrial insects are decreasing by about 9% per decade.[55][56] Claims of pending mass insect extinctions or "insect apocalypse" based on a subset of these studies have been popularized in news reports, but often extrapolate beyond the study data or hyperbolize study findings.[57] Other areas have shown increases in some insect species, although trends in most regions are currently unknown. It is difficult to assess long-term trends in insect abundance or diversity because historical measurements are generally not known for many species. Robust data to assess at-risk areas or species is especially lacking for arctic and tropical regions and a majority of the southern hemisphere.[57]
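To see what a roughly 9% decline per decade compounds to over longer periods, the following is a simple arithmetic illustration of the figure above, not a projection drawn from the cited study:

rate_per_decade = 0.09  # ~9% decline per decade reported for terrestrial insects

for decades in (1, 3, 5):
    remaining = (1 - rate_per_decade) ** decades
    print(f"after {decades} decade(s): about {remaining:.0%} of the original abundance remains")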
Insects have segmented bodies supported by exoskeletons, the hard outer covering made mostly of chitin. The segments of the body are organized into three distinctive but interconnected units, or tagmata: a head, a thorax and an abdomen.[58] The head supports a pair of sensory antennae, a pair of compound eyes, zero to three simple eyes (or ocelli) and three sets of variously modified appendages that form the mouthparts. The thorax is made up of three segments: the prothorax, mesothorax and the metathorax. Each thoracic segment supports one pair of legs. The meso- and metathoracic segments may each have a pair of wings, depending on the insect. The abdomen consists of eleven segments, though in a few species of insects, these segments may be fused together or reduced in size. The abdomen also contains most of the digestive, respiratory, excretory and reproductive internal structures.[34]:22–48 Considerable variation and many adaptations in the body parts of insects occur, especially wings, legs, antenna and mouthparts.
The head is enclosed in a hard, heavily sclerotized, unsegmented, exoskeletal head capsule, or epicranium, which contains most of the sensing organs, including the antennae, ocellus or eyes, and the mouthparts. Of all the insect orders, Orthoptera displays the most features found in other insects, including the sutures and sclerites.[59] Here, the vertex, or the apex (dorsal region), is situated between the compound eyes for insects with a hypognathous and opisthognathous head. In prognathous insects, the vertex is not found between the compound eyes, but rather, where the ocelli are normally. This is because the primary axis of the head is rotated 90° to become parallel to the primary axis of the body. In some species, this region is modified and assumes a different name.[59]:13
The thorax is a tagma composed of three sections, the prothorax, mesothorax and the metathorax. The anterior segment, closest to the head, is the prothorax, with the major features being the first pair of legs and the pronotum. The middle segment is the mesothorax, with the major features being the second pair of legs and the anterior wings. The third and most posterior segment, abutting the abdomen, is the metathorax, which features the third pair of legs and the posterior wings. Each segment is delineated by an intersegmental suture. Each segment has four basic regions. The dorsal surface is called the tergum (or notum) to distinguish it from the abdominal terga.[34] The two lateral regions are called the pleura (singular: pleuron) and the ventral aspect is called the sternum. In turn, the notum of the prothorax is called the pronotum, the notum for the mesothorax is called the mesonotum and the notum for the metathorax is called the metanotum. Continuing with this logic, the mesopleura and metapleura, as well as the mesosternum and metasternum, are used.[59]
The abdomen is the largest tagma of the insect, which typically consists of 11–12 segments and is less strongly sclerotized than the head or thorax. Each segment of the abdomen is represented by a sclerotized tergum and sternum. Terga are separated from each other and from the adjacent sterna or pleura by membranes. Spiracles are located in the pleural area. Variation of this ground plan includes the fusion of terga or terga and sterna to form continuous dorsal or ventral shields or a conical tube. Some insects bear a sclerite in the pleural area called a laterotergite. Ventral sclerites are sometimes called laterosternites. During the embryonic stage of many insects and the postembryonic stage of primitive insects, 11 abdominal segments are present. In modern insects there is a tendency toward reduction in the number of the abdominal segments, but the primitive number of 11 is maintained during embryogenesis. Variation in abdominal segment number is considerable. If the Apterygota are considered to be indicative of the ground plan for pterygotes, confusion reigns: adult Protura have 12 segments, Collembola have 6. The orthopteran family Acrididae has 11 segments, and a fossil specimen of Zoraptera has a 10-segmented abdomen.[59]
The insect outer skeleton, the cuticle, is made up of two layers: the epicuticle, which is a thin and waxy water resistant outer layer and contains no chitin, and a lower layer called the procuticle. The procuticle is chitinous and much thicker than the epicuticle and has two layers: an outer layer known as the exocuticle and an inner layer known as the endocuticle. The tough and flexible endocuticle is built from numerous layers of fibrous chitin and proteins, criss-crossing each other in a sandwich pattern, while the exocuticle is rigid and hardened.[34]:22–24 The exocuticle is greatly reduced in many insects during their larval stages, e.g., caterpillars. It is also reduced in soft-bodied adult insects.
Insects are the only invertebrates to have developed active flight capability, and this has played an important role in their success.[34]:186 Their flight muscles are able to contract multiple times for each single nerve impulse, allowing the wings to beat faster than would ordinarily be possible.
Having their muscles attached to the inside of their exoskeletons is efficient and allows for more points of muscle attachment.
The nervous system of an insect can be divided into a brain and a ventral nerve cord. The head capsule is made up of six fused segments, each with either a pair of ganglia, or a cluster of nerve cells outside of the brain. The first three pairs of ganglia are fused into the brain, while the three following pairs are fused into a structure of three pairs of ganglia under the insect's esophagus, called the subesophageal ganglion.[34]:57
The thoracic segments have one ganglion on each side, which are connected into a pair, one pair per segment. This arrangement is also seen in the abdomen but only in the first eight segments. Many species of insects have reduced numbers of ganglia due to fusion or reduction.[60] Some cockroaches have just six ganglia in the abdomen, whereas the wasp Vespa crabro has only two in the thorax and three in the abdomen. Some insects, like the house fly Musca domestica, have all the body ganglia fused into a single large thoracic ganglion.
At least a few insects have nociceptors, cells that detect and transmit signals responsible for the sensation of pain.[61][failed verification] This was discovered in 2003 by studying the variation in reactions of larvae of the common fruit fly Drosophila to the touch of a heated probe and an unheated one. The larvae reacted to the touch of the heated probe with a stereotypical rolling behavior that was not exhibited when the larvae were touched by the unheated probe.[62] Although nociception has been demonstrated in insects, there is no consensus that insects feel pain consciously.[63]
Insects are capable of learning.[64]
An insect uses its digestive system to extract nutrients and other substances from the food it consumes.[65] Most of this food is ingested in the form of macromolecules and other complex substances like proteins, polysaccharides, fats and nucleic acids. These macromolecules must be broken down by catabolic reactions into smaller molecules like amino acids and simple sugars before being used by cells of the body for energy, growth, or reproduction. This break-down process is known as digestion.
There is extensive variation among different orders, life stages, and even castes in the digestive system of insects.[66] This is the result of extreme adaptations to various lifestyles. The present description focuses on the generalized composition of the digestive system of an adult orthopteroid insect, which is considered basal to interpreting the particularities of other groups.
The main structure of an insect's digestive system is a long enclosed tube called the alimentary canal, which runs lengthwise through the body. The alimentary canal directs food unidirectionally from the mouth to the anus. It has three sections, each of which performs a different process of digestion. In addition to the alimentary canal, insects also have paired salivary glands and salivary reservoirs. These structures usually reside in the thorax, adjacent to the foregut.[34]:70–77 The salivary glands (element 30 in numbered diagram) in an insect's mouth produce saliva. The salivary ducts lead from the glands to the reservoirs and then forward through the head to an opening called the salivarium, located behind the hypopharynx. By moving its mouthparts (element 32 in numbered diagram) the insect can mix its food with saliva. The mixture of saliva and food then travels through the salivary tubes into the mouth, where it begins to break down.[67][68] Some insects, like flies, have extra-oral digestion. Insects using extra-oral digestion expel digestive enzymes onto their food to break it down. This strategy allows insects to extract a significant proportion of the available nutrients from the food source.[69]:31 The gut is where almost all of insects' digestion takes place. It can be divided into the foregut, midgut and hindgut.
The first section of the alimentary canal is the foregut (element 27 in numbered diagram), or stomodaeum. The foregut is lined with a cuticular lining made of chitin and proteins as protection from tough food. The foregut includes the buccal cavity (mouth), pharynx, esophagus and crop and proventriculus (any part may be highly modified), which both store food and signify when to continue passing onward to the midgut.[34]:70
Digestion starts in the buccal cavity (mouth) as partially chewed food is broken down by saliva from the salivary glands. As the salivary glands produce fluid and carbohydrate-digesting enzymes (mostly amylases), strong muscles in the pharynx pump fluid into the buccal cavity, lubricating the food, as the salivarium does, and aiding blood feeders as well as xylem and phloem feeders.
From there, the pharynx passes food to the esophagus, which could be just a simple tube passing it on to the crop and proventriculus, and then onward to the midgut, as in most insects. Alternately, the foregut may expand into a very enlarged crop and proventriculus, or the crop could just be a diverticulum, or fluid-filled structure, as in some Diptera species.[69]:30–31
Once food leaves the crop, it passes to the midgut (element 13 in numbered diagram), also known as the mesenteron, where the majority of digestion takes place. Microscopic projections from the midgut wall, called microvilli, increase the surface area of the wall and allow more nutrients to be absorbed; they tend to be close to the origin of the midgut. In some insects, the role of the microvilli and where they are located may vary. For example, specialized microvilli producing digestive enzymes may more likely be near the end of the midgut, and absorption near the origin or beginning of the midgut.[69]:32
In the hindgut (element 16 in numbered diagram), or proctodaeum, undigested food particles are joined by uric acid to form fecal pellets. The rectum absorbs 90% of the water in these fecal pellets, and the dry pellet is then eliminated through the anus (element 17), completing the process of digestion. Evaginations at the anterior end of the hindgut form the Malpighian tubules, which form the main excretory system of insects.
Insects may have one to hundreds of Malpighian tubules (element 20). These tubules remove nitrogenous wastes from the hemolymph of the insect and regulate osmotic balance. Wastes and solutes are emptied directly into the alimentary canal, at the junction between the midgut and hindgut.[34]:71–72, 78–80
The reproductive system of female insects consists of a pair of ovaries, accessory glands, one or more spermathecae, and ducts connecting these parts. The ovaries are made up of a number of egg tubes, called ovarioles, which vary in size and number by species. The number of eggs that the insect is able to make varies with the number of ovarioles, and the rate at which eggs develop is also influenced by ovariole design. Female insects are able to make eggs, receive and store sperm, manipulate sperm from different males, and lay eggs. Accessory glands or glandular parts of the oviducts produce a variety of substances for sperm maintenance, transport and fertilization, as well as for protection of eggs. They can produce glue and protective substances for coating eggs or tough coverings for a batch of eggs called oothecae. Spermathecae are tubes or sacs in which sperm can be stored between the time of mating and the time an egg is fertilized.[59]:880
For males, the reproductive system is the testis, suspended in the body cavity by tracheae and the fat body. Most male insects have a pair of testes, inside of which are sperm tubes or follicles that are enclosed within a membranous sac. The follicles connect to the vas deferens by the vas efferens, and the two tubular vasa deferentia connect to a median ejaculatory duct that leads to the outside. A portion of the vas deferens is often enlarged to form the seminal vesicle, which stores the sperm before they are discharged into the female. The seminal vesicles have glandular linings that secrete nutrients for nourishment and maintenance of the sperm. The ejaculatory duct is derived from an invagination of the epidermal cells during development and, as a result, has a cuticular lining. The terminal portion of the ejaculatory duct may be sclerotized to form the intromittent organ, the aedeagus. The remainder of the male reproductive system is derived from embryonic mesoderm, except for the germ cells, or spermatogonia, which descend from the primordial pole cells very early during embryogenesis.[59]:885
Insect respiration is accomplished without lungs. Instead, the insect respiratory system uses a system of internal tubes and sacs through which gases either diffuse or are actively pumped, delivering oxygen directly to tissues that need it via their trachea (element 8 in numbered diagram). In most insects, air is taken in through openings on the sides of the abdomen and thorax called spiracles.
The respiratory system is an important factor that limits the size of insects. As insects get larger, this type of oxygen transport is less efficient and thus the heaviest insect currently weighs less than 100 g. However, with increased atmospheric oxygen levels, as were present in the late Paleozoic, larger insects were possible, such as dragonflies with wingspans of more than two feet.[70]
There are many different patterns of gas exchange demonstrated by different groups of insects. Gas exchange patterns in insects can range from continuous and diffusive ventilation, to discontinuous gas exchange.[34]:65–68 During continuous gas exchange, oxygen is taken in and carbon dioxide is released in a continuous cycle. In discontinuous gas exchange, however, the insect takes in oxygen while it is active and small amounts of carbon dioxide are released when the insect is at rest.[71] Diffusive ventilation is simply a form of continuous gas exchange that occurs by diffusion rather than physically taking in the oxygen. Some species of insect that are submerged also have adaptations to aid in respiration. As larvae, many insects have gills that can extract oxygen dissolved in water, while others need to rise to the water surface to replenish air supplies, which may be held or trapped in special structures.[72][73]
Because oxygen is delivered directly to tissues via tracheoles, the circulatory system is not used to carry oxygen, and is therefore greatly reduced. The insect circulatory system is open; it has no veins or arteries, and instead consists of little more than a single, perforated dorsal tube that pulses peristaltically. This dorsal blood vessel (element 14) is divided into two sections: the heart and aorta. The dorsal blood vessel circulates the hemolymph, arthropods' fluid analog of blood, from the rear of the body cavity forward.[34]:61–65[74] Hemolymph is composed of plasma in which hemocytes are suspended. Nutrients, hormones, wastes, and other substances are transported throughout the insect body in the hemolymph. Hemocytes include many types of cells that are important for immune responses, wound healing, and other functions. Hemolymph pressure may be increased by muscle contractions or by swallowing air into the digestive system to aid in moulting.[75] Hemolymph is also a major part of the open circulatory system of other arthropods, such as spiders and crustaceans.[76][77]
The majority of insects hatch from eggs. Fertilization and development take place inside the egg, enclosed by a shell (chorion) that consists of maternal tissue. In contrast to eggs of other arthropods, most insect eggs are drought resistant. This is because inside the chorion two additional membranes develop from embryonic tissue, the amnion and the serosa. This serosa secretes a cuticle rich in chitin that protects the embryo against desiccation. In Schizophora, however, the serosa does not develop, but these flies lay their eggs in damp places, such as rotting matter.[78] Some species of insects, like the cockroach Blaptica dubia, as well as juvenile aphids and tsetse flies, are ovoviviparous. The eggs of ovoviviparous animals develop entirely inside the female, and then hatch immediately upon being laid.[7] Some other species, such as those in the genus of cockroaches known as Diploptera, are viviparous, and thus gestate inside the mother and are born alive.[34]:129, 131, 134–135 Some insects, like parasitic wasps, show polyembryony, where a single fertilized egg divides into many and in some cases thousands of separate embryos.[34]:136–137 Insects may be univoltine, bivoltine or multivoltine, i.e. they may have one, two or many broods (generations) in a year.[79]
Other developmental and reproductive variations include haplodiploidy, polymorphism, paedomorphosis or peramorphosis, sexual dimorphism, parthenogenesis and more rarely hermaphroditism.[34]:143 In haplodiploidy, which is a type of sex-determination system, the offspring's sex is determined by the number of sets of chromosomes an individual receives. This system is typical in bees and wasps.[80] Polymorphism is where a species may have different morphs or forms, as in the oblong winged katydid, which has four different varieties: green, pink and yellow or tan. Some insects may retain phenotypes that are normally only seen in juveniles; this is called paedomorphosis. In peramorphosis, an opposite sort of phenomenon, insects take on previously unseen traits after they have matured into adults. Many insects display sexual dimorphism, in which males and females have notably different appearances, such as the moth Orgyia recens as an exemplar of sexual dimorphism in insects.
Some insects use parthenogenesis, a process in which the female can reproduce and give birth without having the eggs fertilized by a male. Many aphids undergo a form of parthenogenesis, called cyclical parthenogenesis, in which they alternate between one or many generations of asexual and sexual reproduction.[81][82] In summer, aphids are generally female and parthenogenetic; in the autumn, males may be produced for sexual reproduction. Bees, wasps and ants also reproduce by parthenogenesis, which in these groups produces the males. Overall, however, most individuals are female and are produced by fertilization. The males are haploid and the females are diploid.[7] More rarely, some insects display hermaphroditism, in which a given individual has both male and female reproductive organs.
Insect life-histories show adaptations to withstand cold and dry conditions. Some temperate region insects are capable of activity during winter, while some others migrate to a warmer climate or go into a state of torpor.[83] Still other insects have evolved mechanisms of diapause that allow eggs or pupae to survive these conditions.[84]
Metamorphosis in insects is the biological process of development all insects must undergo. There are two forms of metamorphosis: incomplete metamorphosis and complete metamorphosis.
Hemimetabolous insects, those with incomplete metamorphosis, change gradually by undergoing a series of molts. An insect molts when it outgrows its exoskeleton, which does not stretch and would otherwise restrict the insect's growth. The molting process begins as the insect's epidermis secretes a new epicuticle inside the old one. After this new epicuticle is secreted, the epidermis releases a mixture of enzymes that digests the endocuticle and thus detaches the old cuticle. When this stage is complete, the insect makes its body swell by taking in a large quantity of water or air, which makes the old cuticle split along predefined weaknesses where the old exocuticle was thinnest.[34]:142[85]
Immature insects that go through incomplete metamorphosis are called nymphs or in the case of dragonflies and damselflies, also naiads. Nymphs are similar in form to the adult except for the presence of wings, which are not developed until adulthood. With each molt, nymphs grow larger and become more similar in appearance to adult insects.
Holometabolism, or complete metamorphosis, is where the insect changes in four stages, an egg or embryo, a larva, a pupa and the adult or imago. In these species, an egg hatches to produce a larva, which is generally worm-like in form. This worm-like form can be one of several varieties: eruciform (caterpillar-like), scarabaeiform (grub-like), campodeiform (elongated, flattened and active), elateriform (wireworm-like) or vermiform (maggot-like). The larva grows and eventually becomes a pupa, a stage marked by reduced movement and often sealed within a cocoon. There are three types of pupae: obtect, exarate or coarctate. Obtect pupae are compact, with the legs and other appendages enclosed. Exarate pupae have their legs and other appendages free and extended. Coarctate pupae develop inside the larval skin.[34]:151 Insects undergo considerable change in form during the pupal stage, and emerge as adults. Butterflies are a well-known example of insects that undergo complete metamorphosis, although most insects use this life cycle. Some insects have elaborated this system further into hypermetamorphosis.
Complete metamorphosis is a trait of the most diverse insect group, the Endopterygota.[34]:143 Endopterygota includes 11 orders, the largest being Diptera (flies), Lepidoptera (butterflies and moths), Hymenoptera (bees, wasps and ants) and Coleoptera (beetles). This form of development is exclusive to insects and not seen in any other arthropods.
Many insects possess very sensitive and specialized organs of perception. Some insects such as bees can perceive ultraviolet wavelengths, or detect polarized light, while the antennae of male moths can detect the pheromones of female moths over distances of many kilometers.[86] The yellow paper wasp (Polistes versicolor) is known for its wagging movements as a form of communication within the colony; it can waggle with a frequency of 10.6±2.1 Hz (n=190). These wagging movements can signal the arrival of new material into the nest and aggression between workers can be used to stimulate others to increase foraging expeditions.[87] There is a pronounced tendency for there to be a trade-off between visual acuity and chemical or tactile acuity, such that most insects with well-developed eyes have reduced or simple antennae, and vice versa. There are a variety of different mechanisms by which insects perceive sound; while the patterns are not universal, insects can generally hear sound if they can produce it. Different insect species can have varying hearing, though most insects can hear only a narrow range of frequencies related to the frequency of the sounds they can produce. Mosquitoes have been found to hear up to 2 kHz, and some grasshoppers can hear up to 50 kHz.[88] Certain predatory and parasitic insects can detect the characteristic sounds made by their prey or hosts, respectively. For instance, some nocturnal moths can perceive the ultrasonic emissions of bats, which helps them avoid predation.[34]:87–94 Insects that feed on blood have special sensory structures that can detect infrared emissions, and use them to home in on their hosts.
Some insects display a rudimentary sense of numbers,[89] such as the solitary wasps that prey upon a single species. The mother wasp lays her eggs in individual cells and provides each egg with a number of live caterpillars on which the young feed when hatched. Some species of wasp always provide five, others twelve, and others as high as twenty-four caterpillars per cell. The number of caterpillars is different among species, but always the same for each sex of larva. The male solitary wasp in the genus Eumenes is smaller than the female, so the mother of one species supplies him with only five caterpillars; the larger female receives ten caterpillars in her cell.
A few insects, such as members of the families Poduridae and Onychiuridae (Collembola), Mycetophilidae (Diptera) and the beetle families Lampyridae, Phengodidae, Elateridae and Staphylinidae are bioluminescent. The most familiar group are the fireflies, beetles of the family Lampyridae. Some species are able to control this light generation to produce flashes. The function varies with some species using them to attract mates, while others use them to lure prey. Cave dwelling larvae of Arachnocampa (Mycetophilidae, fungus gnats) glow to lure small flying insects into sticky strands of silk.[90]
Some fireflies of the genus Photuris mimic the flashing of female Photinus species to attract males of that species, which are then captured and devoured.[91] The colors of emitted light vary from dull blue (Orfelia fultoni, Mycetophilidae) to the familiar greens and the rare reds (Phrixothrix tiemanni, Phengodidae).[92]
Most insects, except some species of cave crickets, are able to perceive light and dark. Many species have acute vision capable of detecting minute movements. The eyes may include simple eyes or ocelli as well as compound eyes of varying sizes. Many species are able to detect light in the infrared, ultraviolet and the visible light wavelengths. Color vision has been demonstrated in many species and phylogenetic analysis suggests that UV-green-blue trichromacy existed from at least the Devonian period between 416 and 359 million years ago.[93]
Insects were the earliest organisms to produce and sense sounds. Insects make sounds mostly by mechanical action of appendages. In grasshoppers and crickets, this is achieved by stridulation. Cicadas make the loudest sounds among the insects by producing and amplifying sounds with special modifications to their body to form tymbals and associated musculature. The African cicada Brevisana brevis has been measured at 106.7 decibels at a distance of 50 cm (20 in).[94] Some insects, such as the Helicoverpa zea moths, hawk moths and Hedylid butterflies, can hear ultrasound and take evasive action when they sense that they have been detected by bats.[95][96] Some moths produce ultrasonic clicks that were once thought to have a role in jamming bat echolocation. The ultrasonic clicks were subsequently found to be produced mostly by unpalatable moths to warn bats, just as warning colorations are used against predators that hunt by sight.[97] Some otherwise palatable moths have evolved to mimic these calls.[98] More recently, the claim that some moths can jam bat sonar has been revisited. Ultrasonic recording and high-speed infrared videography of bat-moth interactions suggest the palatable tiger moth really does defend against attacking big brown bats using ultrasonic clicks that jam bat sonar.[99]
Very low sounds are also produced in various species of Coleoptera, Hymenoptera, Lepidoptera, Mantodea and Neuroptera. These low sounds are simply the sounds made by the insect's movement. Through microscopic stridulatory structures located on the insect's muscles and joints, the normal sounds of the insect moving are amplified and can be used to warn or communicate with other insects. Most sound-making insects also have tympanal organs that can perceive airborne sounds. Some species in Hemiptera, such as the corixids (water boatmen), are known to communicate via underwater sounds.[100] Most insects are also able to sense vibrations transmitted through surfaces.
Communication using surface-borne vibrational signals is more widespread among insects because of size constraints in producing air-borne sounds.[101] Insects cannot effectively produce low-frequency sounds, and high-frequency sounds tend to disperse more in a dense environment (such as foliage), so insects living in such environments communicate primarily using substrate-borne vibrations.[102] The mechanisms of production of vibrational signals are just as diverse as those for producing sound in insects.
Some species use vibrations for communicating within members of the same species, such as to attract mates as in the songs of the shield bug Nezara viridula.[103] Vibrations can also be used to communicate between entirely different species; lycaenid (gossamer-winged butterfly) caterpillars, which are myrmecophilous (living in a mutualistic association with ants) communicate with ants in this way.[104] The Madagascar hissing cockroach has the ability to press air through its spiracles to make a hissing noise as a sign of aggression;[105] the death's-head hawkmoth makes a squeaking noise by forcing air out of their pharynx when agitated, which may also reduce aggressive worker honey bee behavior when the two are in close proximity.[106]
Chemical communications in animals rely on a variety of aspects including taste and smell. Chemoreception is the physiological response of a sense organ (i.e. taste or smell) to a chemical stimulus where the chemicals act as signals to regulate the state or activity of a cell. A semiochemical is a message-carrying chemical that is meant to attract, repel, and convey information. Types of semiochemicals include pheromones and kairomones. One example is the butterfly Phengaris arion which uses chemical signals as a form of mimicry to aid in predation.[107]
In addition to the use of sound for communication, a wide range of insects have evolved chemical means for communication. These chemicals, termed semiochemicals, are often derived from plant metabolites and include those meant to attract, repel and provide other kinds of information. Pheromones, a type of semiochemical, are used for attracting mates of the opposite sex, for aggregating conspecific individuals of both sexes, for deterring other individuals from approaching, to mark a trail, and to trigger aggression in nearby individuals. Allomones benefit their producer by the effect they have upon the receiver. Kairomones benefit their receiver instead of their producer. Synomones benefit the producer and the receiver. While some chemicals are targeted at individuals of the same species, others are used for communication across species. The use of scents is especially well known to have developed in social insects.[34]:96–105
Social insects, such as termites, ants and many bees and wasps, are the most familiar species of eusocial animals.[108] They live together in large well-organized colonies that may be so tightly integrated and genetically similar that the colonies of some species are sometimes considered superorganisms. It is sometimes argued that the various species of honey bee are the only invertebrates (and indeed one of the few non-human groups) to have evolved a system of abstract symbolic communication where a behavior is used to represent and convey specific information about something in the environment. In this communication system, called dance language, the angle at which a bee dances represents a direction relative to the sun, and the length of the dance represents the distance to be flown.[34]:309–311 Though perhaps not as advanced as honey bees, bumblebees also potentially have some social communication behaviors. Bombus terrestris, for example, exhibit a faster learning curve for visiting unfamiliar, yet rewarding flowers, when they can see a conspecific foraging on the same species.[109]
Only insects that live in nests or colonies demonstrate any true capacity for fine-scale spatial orientation or homing. This can allow an insect to return unerringly to a single hole a few millimeters in diameter among thousands of apparently identical holes clustered together, after a trip of up to several kilometers' distance. In a phenomenon known as philopatry, insects that hibernate have shown the ability to recall a specific location up to a year after last viewing the area of interest.[110] A few insects seasonally migrate large distances between different geographic regions (e.g., the overwintering areas of the monarch butterfly).[34]:14
The eusocial insects build nests, guard eggs, and provide food for offspring full-time (see Eusociality).
Most insects, however, lead short lives as adults, and rarely interact with one another except to mate or compete for mates. A small number exhibit some form of parental care, where they will at least guard their eggs, and sometimes continue guarding their offspring until adulthood, and possibly even feeding them. Another simple form of parental care is to construct a nest (a burrow or an actual construction, either of which may be simple or complex), store provisions in it, and lay an egg upon those provisions. The adult does not contact the growing offspring, but it nonetheless does provide food. This sort of care is typical for most species of bees and various types of wasps.[111]
Insects are the only group of invertebrates to have developed flight. The evolution of insect wings has been a subject of debate. Some entomologists suggest that the wings developed from paranotal lobes, extensions of the insect's exoskeleton called the nota; this is known as the paranotal theory. Other theories are based on a pleural origin. These theories include suggestions that wings originated from modified gills, spiracular flaps or from an appendage of the epicoxa. The epicoxal theory suggests the insect wings are modified epicoxal exites, a modified appendage at the base of the legs or coxa.[112] In the Carboniferous age, some of the Meganeura dragonflies had wingspans as wide as 50 cm (20 in). The appearance of gigantic insects has been found to be consistent with high atmospheric oxygen. The respiratory system of insects constrains their size; however, the high oxygen in the atmosphere allowed larger sizes.[113] The largest flying insects today are much smaller, with the largest wingspan belonging to the white witch moth (Thysania agrippina), at approximately 28 cm (11 in).[114]
Insect flight has been a topic of great interest in aerodynamics, due partly to the inability of steady-state theories to explain the lift generated by the tiny wings of insects. Because insect wings are in constant motion, flapping and vibrating and producing churning air and eddies, steady-state models do not apply well, and the misconception that physics says "bumblebees can't fly" persisted throughout most of the twentieth century.
Unlike birds, many small insects are swept along by the prevailing winds[115] although many of the larger insects are known to make migrations. Aphids are known to be transported long distances by low-level jet streams.[116] As such, fine line patterns associated with converging winds within weather radar imagery, like the WSR-88D radar network, often represent large groups of insects.[117]
Many adult insects use six legs for walking and have adopted a tripedal gait. The tripedal gait allows for rapid walking while always having a stable stance and has been studied extensively in cockroaches and ants. The legs are used in alternate triangles touching the ground. For the first step, the middle right leg and the front and rear left legs are in contact with the ground and move the insect forward, while the front and rear right legs and the middle left leg are lifted and moved forward to a new position. When they touch the ground to form a new stable triangle, the other legs can be lifted and brought forward in turn, and so on.[118] The purest form of the tripedal gait is seen in insects moving at high speeds. However, this type of locomotion is not rigid and insects can adopt a variety of gaits. For example, when moving slowly, turning, avoiding obstacles, climbing or walking on slippery surfaces, four (tetrapod) or more feet (wave-gait[119]) may be touching the ground. Insects can also adapt their gait to cope with the loss of one or more limbs.
Cockroaches are among the fastest insect runners and, at full speed, adopt a bipedal run to reach a high velocity in proportion to their body size. As cockroaches move very quickly, they need to be video recorded at several hundred frames per second to reveal their gait. More sedate locomotion is seen in the stick insects or walking sticks (Phasmatodea). A few insects have evolved to walk on the surface of the water, especially members of the Gerridae family, commonly known as water striders. A few species of ocean-skaters in the genus Halobates even live on the surface of open oceans, a habitat that has few insect species.[120]
Insect walking is of particular interest as an alternative form of locomotion in robots. The study of insects and bipeds has a significant impact on possible robotic methods of transport. This may allow new robots to be designed that can traverse terrain that robots with wheels may be unable to handle.[118]
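The alternating-tripod pattern described above is simple enough to serve as the starting point for hexapod robot controllers. The following minimal Python sketch is an illustration only, with made-up leg names rather than any particular robot's interface; it simply steps through the gait, swapping the two leg triangles between stance and swing on each step.

# Minimal sketch of the alternating tripod gait: two triangles of legs
# swap roles (stance vs. swing) on every step. Leg names are illustrative.
TRIPOD_A = ("left-front", "right-middle", "left-rear")
TRIPOD_B = ("right-front", "left-middle", "right-rear")

def tripod_gait(steps):
    stance, swing = TRIPOD_A, TRIPOD_B
    for step in range(steps):
        print(f"step {step}: on ground, pushing body forward: {stance}")
        print(f"          lifted, swinging to a new position: {swing}")
        stance, swing = swing, stance  # the two triangles alternate roles

tripod_gait(4)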
A large number of insects live either part or the whole of their lives underwater. In many of the more primitive orders of insect, the immature stages are spent in an aquatic environment. Some groups of insects, like certain water beetles, have aquatic adults as well.[72]
Many of these species have adaptations to help in under-water locomotion. Water beetles and water bugs have legs adapted into paddle-like structures. Dragonfly naiads use jet propulsion, forcibly expelling water out of their rectal chamber.[121] Some species like the water striders are capable of walking on the surface of water. They can do this because their claws are not at the tips of the legs as in most insects, but recessed in a special groove further up the leg; this prevents the claws from piercing the water's surface film.[72] Other insects such as the Rove beetle Stenus are known to emit pygidial gland secretions that reduce surface tension making it possible for them to move on the surface of water by Marangoni propulsion (also known by the German term Entspannungsschwimmen).[122][123]
Insect ecology is the scientific study of how insects, individually or as a community, interact with the surrounding environment or ecosystem.[124]:3 Insects play some of the most important roles in their ecosystems, including soil turning and aeration, dung burial, pest control, pollination and wildlife nutrition. An example is the beetles, which are scavengers that feed on dead animals and fallen trees and thereby recycle biological materials into forms found useful by other organisms.[125] These insects, and others, are responsible for much of the process by which topsoil is created.[34]:3, 218–228
Insects are mostly soft-bodied, fragile and almost defenseless compared to other, larger lifeforms. The immature stages are small, move slowly or are immobile, and so all stages are exposed to predation and parasitism. Insects therefore have a variety of defense strategies to avoid being attacked by predators or parasitoids. These include camouflage, mimicry, toxicity and active defense.[127]
Camouflage is an important defense strategy, which involves the use of coloration or shape to blend into the surrounding environment.[128] This sort of protective coloration is common and widespread among beetle families, especially those that feed on wood or vegetation, such as many of the leaf beetles (family Chrysomelidae) or weevils. In some of these species, sculpturing or various colored scales or hairs cause the beetle to resemble bird dung or other inedible objects. Many of those that live in sandy environments blend in with the coloration of the substrate.[127] Most phasmids are known for effectively replicating the forms of sticks and leaves, and the bodies of some species (such as O. macklotti and Palophus centaurus) are covered in mossy or lichenous outgrowths that supplement their disguise. Very rarely, a species may have the ability to change color as their surroundings shift (Bostra scabrinota). In a further behavioral adaptation to supplement crypsis, a number of species have been noted to perform a rocking motion where the body is swayed from side to side that is thought to reflect the movement of leaves or twigs swaying in the breeze. Another method by which stick insects avoid predation and resemble twigs is by feigning death (catalepsy), where the insect enters a motionless state that can be maintained for a long period. The nocturnal feeding habits of adults also aids Phasmatodea in remaining concealed from predators.[129]
Another defense that often uses color or shape to deceive potential enemies is mimicry. A number of longhorn beetles (family Cerambycidae) bear a striking resemblance to wasps, which helps them avoid predation even though the beetles are in fact harmless.[127] Batesian and Müllerian mimicry complexes are commonly found in Lepidoptera. Genetic polymorphism and natural selection give rise to otherwise edible species (the mimic) gaining a survival advantage by resembling inedible species (the model). Such a mimicry complex is referred to as Batesian. One of the most famous examples, where the viceroy butterfly was long believed to be a Batesian mimic of the inedible monarch, was later disproven, as the viceroy is more toxic than the monarch, and this resemblance is now considered to be a case of Müllerian mimicry.[126] In Müllerian mimicry, inedible species, usually within a taxonomic order, find it advantageous to resemble each other so as to reduce the sampling rate by predators who need to learn about the insects' inedibility. Taxa from the toxic genus Heliconius form one of the most well known Müllerian complexes.[130]
Chemical defense is another important defense found among species of Coleoptera and Lepidoptera, usually being advertised by bright colors, such as the monarch butterfly. They obtain their toxicity by sequestering the chemicals from the plants they eat into their own tissues. Some Lepidoptera manufacture their own toxins. Predators that eat poisonous butterflies and moths may become sick and vomit violently, learning not to eat those types of species; this is actually the basis of Müllerian mimicry. A predator who has previously eaten a poisonous lepidopteran may avoid other species with similar markings in the future, thus saving many other species as well.[131] Some ground beetles of the family Carabidae can spray chemicals from their abdomen with great accuracy, to repel predators.[127]
Pollination is the process by which pollen is transferred in the reproduction of plants, thereby enabling fertilisation and sexual reproduction. Most flowering plants require an animal to do the transportation. While other animals are included as pollinators, the majority of pollination is done by insects.[132] Because insects usually receive a benefit for the pollination in the form of energy-rich nectar, the relationship is a prime example of mutualism. The various flower traits (and combinations thereof) that differentially attract one type of pollinator or another are known as pollination syndromes. These arose through complex plant-animal adaptations. Pollinators find flowers through bright colorations, including ultraviolet, and attractant pheromones. The study of pollination by insects is known as anthecology.
Many insects are parasites of other insects, such as the parasitoid wasps. These insects are known as entomophagous parasites. They can be beneficial because they devastate pests that can destroy crops and other resources. Many insects, such as the mosquito, have a parasitic relationship with humans. These insects are known to spread diseases such as malaria and yellow fever, and because of this, mosquitoes indirectly cause more human deaths than any other animal.
Many insects are considered pests by humans. Insects commonly regarded as pests include those that are parasitic (e.g. lice, bed bugs), transmit diseases (mosquitoes, flies), damage structures (termites), or destroy agricultural goods (locusts, weevils). Many entomologists are involved in various forms of pest control, as in research for companies to produce insecticides, but increasingly rely on methods of biological pest control, or biocontrol. Biocontrol uses one organism to reduce the population density of another organism—the pest—and is considered a key element of integrated pest management.[133][134]
Despite the large amount of effort focused on controlling insects, human attempts to kill pests with insecticides can backfire. If used carelessly, the poison can kill all kinds of organisms in the area, including insects' natural predators, such as birds, mice and other insectivores. The effects of DDT's use exemplify how some insecticides can threaten wildlife beyond intended populations of pest insects.[135][136]
Although pest insects attract the most attention, many insects are beneficial to the environment and to humans. Some insects, like wasps, bees, butterflies and ants, pollinate flowering plants. Pollination is a mutualistic relationship between plants and insects. As insects gather nectar from different plants of the same species, they also spread pollen from plants on which they have previously fed. This greatly increases plants' ability to cross-pollinate, which maintains and possibly even improves their evolutionary fitness. This ultimately affects humans since ensuring healthy crops is critical to agriculture. As well as pollination ants help with seed distribution of plants. This helps to spread the plants, which increases plant diversity. This leads to an overall better environment.[137] A serious environmental problem is the decline of populations of pollinator insects, and a number of species of insects are now cultured primarily for pollination management in order to have sufficient pollinators in the field, orchard or greenhouse at bloom time.[138]:240–243 Another solution, as shown in Delaware, has been to raise native plants to help support native pollinators like L. vierecki.[139] Insects also produce useful substances such as honey, wax, lacquer and silk. Honey bees have been cultured by humans for thousands of years for honey, although contracting for crop pollination is becoming more significant for beekeepers. The silkworm has greatly affected human history, as silk-driven trade established relationships between China and the rest of the world.
Insectivorous insects, or insects that feed on other insects, are beneficial to humans if they eat insects that could cause damage to agriculture and human structures. For example, aphids feed on crops and cause problems for farmers, but ladybugs feed on aphids, and can be used as a means to significantly reduce pest aphid populations. While birds are perhaps more visible predators of insects, insects themselves account for the vast majority of insect consumption. Ants also help control animal populations by consuming small vertebrates.[140] Without predators to keep them in check, insects can undergo almost unstoppable population explosions.[34]:328–348[34]:400[141][142]
Insects are also used in medicine; for example, fly larvae (maggots) were formerly used to treat wounds to prevent or stop gangrene, as they would only consume dead flesh. This treatment is finding modern usage in some hospitals. Recently insects have also gained attention as potential sources of drugs and other medicinal substances.[143] Adult insects such as crickets, and insect larvae of various kinds, are also commonly used as fishing bait.[144]
Insects play important roles in biological research. For example, because of its small size, short generation time and high fecundity, the common fruit fly Drosophila melanogaster is a model organism for studies in the genetics of higher eukaryotes. D. melanogaster has been an essential part of studies into principles like genetic linkage, interactions between genes, chromosomal genetics, development, behavior and evolution. Because genetic systems are well conserved among eukaryotes, understanding basic cellular processes like DNA replication or transcription in fruit flies can help to understand those processes in other eukaryotes, including humans.[145] The genome of D. melanogaster was sequenced in 2000, reflecting the organism's important role in biological research. It was found that 70% of the fly genome is similar to the human genome, supporting the theory of evolution.[146]
In some cultures, insects, especially deep-fried cicadas, are considered to be delicacies, whereas in other places they form part of the normal diet. Insects have a high protein content for their mass, and some authors suggest their potential as a major source of protein in human nutrition.[34]:10–13 In most first-world countries, however, entomophagy (the eating of insects) is taboo.[147]
Since it is impossible to entirely eliminate pest insects from the human food chain, insects are inadvertently present in many foods, especially grains. Food safety laws in many countries do not prohibit insect parts in food, but rather limit their quantity. According to cultural materialist anthropologist Marvin Harris, the eating of insects is taboo in cultures that have other protein sources such as fish or livestock.
Due to the abundance of insects and worldwide concern over food shortages, the Food and Agriculture Organization of the United Nations considers that the world may, in the future, have to regard insects as a food staple. Insects are noted for their nutrients, having a high content of protein, minerals and fats, and are eaten by one-third of the global population.[148]
Several insect species such as the black soldier fly or the housefly in their maggot forms, as well as beetle larvae such as mealworms can be processed and used as feed for farmed animals such as chicken, fish and pigs.[149]
Insect larvae (e.g. black soldier fly larvae) can provide protein, grease, and chitin. The grease is usable in the pharmaceutical industry (cosmetics,[150] surfactants for shower gel), thereby replacing other vegetable oils such as palm oil.[151]
Also, insect cooking oil, insect butter and fatty alcohols can be made from such insects as the superworm (Zophobas morio).[152][153]
Many species of insects are sold and kept as pets.
Scarab beetles held religious and cultural symbolism in Old Egypt, Greece and some shamanistic Old World cultures. The ancient Chinese regarded cicadas as symbols of rebirth or immortality. In Mesopotamian literature, the epic poem of Gilgamesh has allusions to Odonata that signify the impossibility of immortality. Among the Aborigines of Australia of the Arrernte language groups, honey ants and witchety grubs served as personal clan totems. In the case of the 'San' bush-men of the Kalahari, it is the praying mantis that holds much cultural significance including creation and zen-like patience in waiting.[34]:9
en/2742.html.txt
ADDED
@@ -0,0 +1,115 @@
Plucked
String instruments, stringed instruments, or chordophones are musical instruments that produce sound from vibrating strings when the performer plays or sounds the strings in some manner.
Musicians play some string instruments by plucking the strings with their fingers or a plectrum—and others by hitting the strings with a light wooden hammer or by rubbing the strings with a bow. In some keyboard instruments, such as the harpsichord, the musician presses a key that plucks the string.
With bowed instruments, the player pulls a rosined horsehair bow across the strings, causing them to vibrate. With a hurdy-gurdy, the musician cranks a wheel whose rosined edge touches the strings.
Bowed instruments include the string section instruments of the Classical music orchestra (violin, viola, cello and double bass) and a number of other instruments (e.g., viols and gambas used in early music from the Baroque music era and fiddles used in many types of folk music). All of the bowed string instruments can also be plucked with the fingers, a technique called "pizzicato". A wide variety of techniques are used to sound notes on the electric guitar, including plucking with the fingernails or a plectrum, strumming and even "tapping" on the fingerboard and using feedback from a loud, distorted guitar amplifier to produce a sustained sound. Some types of string instrument are mainly plucked, such as the harp and the electric bass. In the Hornbostel-Sachs scheme of musical instrument classification, used in organology, string instruments are called chordophones. Other examples include the sitar, rebab, banjo, mandolin, ukulele, and bouzouki.
In most string instruments, the vibrations are transmitted to the body of the instrument, which often incorporates some sort of hollow or enclosed area. The body of the instrument also vibrates, along with the air inside it. The vibration of the body of the instrument and the enclosed hollow or chamber make the vibration of the string more audible to the performer and audience. The body of most string instruments is hollow. Some, however—such as electric guitar and other instruments that rely on electronic amplification—may have a solid wood body.
Dating to c. 13,000 BC, a cave painting in the Trois Frères cave in France depicts what some believe is a musical bow, a hunting bow used as a single-stringed musical instrument.[1][2] From the musical bow, families of stringed instruments developed; since each string played a single note, adding strings added new notes, creating bow harps, harps and lyres.[3] In turn, this led to being able to play dyads and chords. Another innovation occurred when the bow harp was straightened out and a bridge used to lift the strings off the stick-neck, creating the lute.[4]
This progression from musical bow to harp bow is a theory and has been contested. In 1965 Franz Jahnel wrote his criticism stating that the early ancestors of plucked instruments are not currently known.[5] He felt that the harp bow was a far cry from the sophistication of the civilizations of western Asia in 4000 BC that took the primitive technology and created "technically and artistically well-made harps, lyres, citharas, and lutes."[5]
Archaeological digs have identified some of the earliest stringed instruments in Ancient Mesopotamian sites, like the lyres of Ur, which include artifacts over three thousand years old. The development of lyre instruments required the technology to create a tuning mechanism to tighten and loosen the string tension. Lyres with wooden bodies and strings used for plucking or playing with a bow represent key instruments that point towards later harps and violin-type instruments; moreover, Indian instruments from 500 BC have been discovered with anything from 7 to 21 strings.
Musicologists have put forth examples of that 4th millennium BC technology, looking at engraved images that have survived. The earliest image showing a lute-like instrument came from Mesopotamia prior to 3000 BC.[7] A cylinder seal from c. 3100 BC or earlier (now in the possession of the British Museum) shows what is thought to be a woman playing a stick lute.[7][8] From the surviving images, theorists have categorized the Mesopotamian lutes, showing that they developed into a long variety and a short one.[9] The line of long lutes may have developed into the tamburs and pandura.[10] The line of short lutes was further developed to the east of Mesopotamia, in Bactria, Gandhara, and Northwest India, and shown in sculpture from the 2nd century BC through the 4th or 5th centuries AD.[11][12][13]
During the medieval era, instrument development varied in different regions of the world. Middle Eastern rebecs represented breakthroughs in terms of shape and strings, with a half-pear shape and three strings. Early versions of the violin and fiddle, by comparison, emerged in Europe through instruments such as the gittern, a four-stringed precursor to the guitar, and basic lutes. These instruments typically used catgut (animal intestine) and other materials, including silk, for their strings.
String instrument design was refined during the Renaissance and into the Baroque period (1600–1750) of musical history. Violins and guitars became more consistent in design and were roughly similar to acoustic guitars of the 2000s. The violins of the Renaissance featured intricate woodwork and stringing, while more elaborate bass instruments such as the bandora were produced alongside quill-plucked citterns and Spanish body guitars.
In the 19th century, string instruments were made more widely available through mass production, with wood string instruments a key part of orchestras – cellos, violas, and upright basses, for example, were now standard instruments for chamber ensembles and smaller orchestras. At the same time, the 19th-century guitar became more typically associated with six string models, rather than traditional five string versions.
Major changes to string instruments in the 20th century primarily involved innovations in electronic instrument amplification and electronic music – electric violins were available by the 1920s, and were an important part of emerging jazz music trends in the United States. The acoustic guitar was widely used in blues and jazz, but as an acoustic instrument, it was not loud enough to be a solo instrument, so these genres mostly used it as an accompaniment rhythm section instrument. In big bands of the 1920s, the acoustic guitar played backing chords, but it was not loud enough to play solos like the saxophone and trumpet. The development of guitar amplifiers, which contained a power amplifier and a loudspeaker in a wooden cabinet, let jazz guitarists play solos and be heard over a big band. The development of the electric guitar provided guitarists with an instrument that was built to connect to guitar amplifiers. Electric guitars have magnetic pickups, volume control knobs and an output jack.
In the 1960s, larger, more powerful guitar amplifiers were developed, called "stacks". These powerful amplifiers enabled guitarists to perform in rock bands that played in large venues such as stadiums and outdoor music festivals (e.g., Woodstock Music Festival). Along with the development of guitar amplifiers, a large range of electronic effects units, many in small stompbox pedals, were introduced in the 1960s and 1970s, such as fuzz pedals, flangers and phasers, enabling performers to create unique new sounds during the psychedelic rock era. Breakthroughs in electric guitar and bass technologies and playing styles enabled major breakthroughs in pop and rock music in the 1960s and 1970s. The distinctive sound of the amplified electric guitar was the centerpiece of new genres of music such as blues rock and jazz-rock fusion. The sonic power of the loudly amplified, highly distorted electric guitar was a key element of early heavy metal music, with the distorted guitar being used in lead guitar roles, and with power chords as a rhythm guitar.
The ongoing use of electronic amplification and effects units in string instruments, ranging from traditional instruments like the violin to the new electric guitar, added variety to contemporary classical music performances, and enabled experimentation in the dynamic and timbre (tone colour) range of orchestras, bands, and solo performances.[14]
String instruments can be divided into three groups
It is also possible to divide the instruments into categories focused on how the instrument is played.
All string instruments produce sound from one or more vibrating strings, transferred to the air by the body of the instrument (or by a pickup in the case of electronically amplified instruments). They are usually categorised by the technique used to make the strings vibrate (or by the primary technique, in the case of instruments where more than one may apply.) The three most common techniques are plucking, bowing, and striking. An important difference between bowing and plucking is that in the former the phenomenon is periodic so that the overtones are kept in a strictly harmonic relationship to the fundamental.[15]
Plucking is a method of playing on instruments such as the veena, banjo, ukulele, guitar, harp, lute, mandolin, oud, and sitar, using either a finger, thumb, or quills (now plastic plectra) to pluck the strings.
Instruments normally played by bowing (see below) may also be plucked, a technique referred to by the Italian term pizzicato.
Bowing (Italian: arco) is a method used in some string instruments, including the violin, viola, cello, and the double bass (of the violin family), and the old viol family. The bow consists of a stick with a "ribbon" of parallel horse tail hairs stretched between its ends. The hair is coated with rosin so it can grip the string; moving the hair across a string causes a stick-slip phenomenon, making the string vibrate, and prompting the instrument to emit sound. Darker grades of rosin grip well in cool, dry climates, but may be too sticky in warmer, more humid weather. Violin and viola players generally use harder, lighter-colored rosin than players of lower-pitched instruments, who tend to favor darker, softer rosin.[16]
The ravanahatha is one of the oldest string instruments. Ancestors of the modern bowed string instruments are the rebab of the Islamic Empires, the Persian kamanche and the Byzantine lira. Other bowed instruments are the rebec, hardingfele, nyckelharpa, kokyū, erhu, igil, sarangi and K'ni. The hurdy-gurdy is bowed by a wheel. Rarely, the guitar has been played with a bow (rather than plucked) for unique effects.
The third common method of sound production in stringed instruments is to strike the string. The piano and hammered dulcimer use this method of sound production. Even though the piano strikes the strings, the use of felt hammers means that the sound that is produced can nevertheless be mellow and rounded, in contrast to the sharp attack produced when a very hard hammer strikes the strings.
Violin family string instrument players are occasionally instructed to strike the string with the stick of the bow, a technique called col legno. This yields a percussive sound along with the pitch of the note. A well-known use of col legno for orchestral strings is Gustav Holst's "Mars" movement from The Planets suite.
The aeolian harp employs a very unusual method of sound production: the strings are excited by the movement of the air.
Some instruments that have strings have an attached keyboard that the player presses keys on to trigger a mechanism that sounds the strings, instead of directly manipulating the strings. These include the piano, the clavichord, and the harpsichord. With these keyboard instruments, strings are occasionally plucked or bowed by hand. Modern composers such as Henry Cowell wrote music that requires that the player reach inside the piano and pluck the strings directly, "bow" them with bow hair wrapped around the strings, or play them by rolling the bell of a brass instrument such as a trombone on the array of strings. However, these are relatively rarely used special techniques.
Other keyed string instruments, small enough for a strolling musician to play, include the plucked autoharp, the bowed nyckelharpa, and the hurdy-gurdy, which is played by cranking a rosined wheel.
Steel-stringed instruments (such as the guitar, bass, violin, etc.) can be played using a magnetic field. An E-Bow is a small hand-held battery-powered device that magnetically excites the strings of an electric string instrument to provide a sustained, singing tone reminiscent of a held bowed violin note.
Third bridge is a plucking method where the player frets a string and strikes the side opposite the bridge. The technique is mainly used on electric instruments because these have a pickup that amplifies only the local string vibration. It is possible on acoustic instruments as well, but less effective. For instance, a player might press on the seventh fret on a guitar and pluck it at the head side to make a tone resonate at the opposed side. On electric instruments, this technique generates multitone sounds reminiscent of a clock or bell.
Electric string instruments, such as the electric guitar, can also be played without touching the strings by using audio feedback. When an electric guitar is plugged into a loud, powerful guitar amplifier with a loudspeaker and a high level of distortion is intentionally used, the guitar produces sustained high-pitched sounds. By changing the proximity of the guitar to the speaker, the guitarist can produce sounds that cannot be produced with standard plucking and picking techniques. This technique was popularized by Jimi Hendrix and others in the 1960s. It was widely used in psychedelic rock and heavy metal music.
There are three ways to change the pitch of a vibrating string. String instruments are tuned by varying the strings' tension because adjusting length or mass per unit length is impractical. Instruments with a fingerboard are then played by adjusting the length of the vibrating portion of the strings. The following observations all apply to a string that is infinitely flexible (a theoretical assumption, because in practical applications, strings are not infinitely flexible) strung between two fixed supports. Real strings have finite curvature at the bridge and nut, and the bridge, because of its motion, is not exactly a node of vibration. Hence the following statements about proportionality are approximations.
Pitch can be adjusted by varying the length of the string.[15] A longer string results in a lower pitch, while a shorter string results in a higher pitch. The frequency is inversely proportional to the length:
A string twice as long produces a tone of half the frequency (one octave lower).
Pitch can be adjusted by varying the tension of the string. A string with less tension (looser) results in a lower pitch, while a string with greater tension (tighter) results in a higher pitch. A homemade washtub bass made out of a length of rope, a broomstick and a washtub can produce different pitches by increasing the tension on the rope (producing a higher pitch) or reducing the tension (producing a lower pitch). The frequency is proportional to the square root of the tension: doubling the frequency of a given string requires quadrupling its tension.
The pitch of a string can also be varied by changing the linear density (mass per unit length) of the string. In practical applications, such as with double bass strings or bass piano strings, extra weight is added to strings by winding them with metal. A string with a heavier metal winding produces a lower pitch than a string of equal length without a metal winding. This can be seen on a 2016-era set of gut strings for double bass. The higher-pitched G string is often made of synthetic material, or sometimes animal intestine, with no metal wrapping. To enable the low E string to produce a much lower pitch with a string of the same length, it is wrapped with many wrappings of thin metal wire. This adds to its mass without making it too stiff. The frequency is inversely proportional to the square root of the linear density:
Given two strings of equal length and tension, the string with higher mass per unit length produces the lower pitch.
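For an ideal flexible string, the three proportionalities above are summarized by the standard relation f = (1/(2L)) × √(T/μ), where f is the fundamental frequency, L the vibrating length, T the tension and μ the mass per unit length. Halving the length doubles the frequency (one octave up), quadrupling the tension doubles the frequency, and quadrupling the mass per unit length halves it.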
The length of the string from nut to bridge on bowed or plucked instruments ultimately determines the distance between different notes on the instrument. For example, a double bass with its low range needs a scale length of around 42 inches (110 cm), whilst a violin scale is only about 13 inches (33 cm). On the shorter scale of the violin, the left hand may easily reach a range of slightly more than two octaves without shifting position, while on the bass' longer scale, a single octave or a ninth is reachable in lower positions.
In bowed instruments, the bow is normally placed perpendicularly to the string, at a point halfway between the end of the fingerboard and the bridge. However, different bow placements can be selected to change timbre. Application of the bow close to the bridge (known as sul ponticello) produces an intense, sometimes harsh sound, which acoustically emphasizes the upper harmonics. Bowing above the fingerboard (sul tasto) produces a purer tone with less overtone strength, emphasizing the fundamental, also known as flautando, since it sounds less reedy and more flute-like.
Bowed instruments pose a challenge to instrument builders, as compared with instruments that are only plucked (e.g., guitar), because on bowed instruments, the musician must be able to play one string at a time if they wish. As such, a bowed instrument must have a curved bridge that makes the "outer" strings lower in height than the "inner" strings. With such a curved bridge, the player can select one string at a time to play. On guitars and lutes, the bridge can be flat, because the strings are played by plucking them with the fingers, fingernails or a pick; by moving the fingers or pick to different positions, the player can play different strings. On bowed instruments, the need to play strings individually with the bow also limits the number of strings to about six or seven strings; with more strings, it would be impossible to select individual strings to bow. (Note: bowed strings can also play two bowed notes on two different strings at the same time, a technique called a double stop.) Indeed, on the orchestral string section instruments, four strings are the norm, with the exception of five strings used on some double basses. In contrast, with stringed keyboard instruments, 88 courses are used on a piano, and even though these strings are arranged on a flat bridge, the mechanism can play any of the notes individually.
Similar timbral distinctions are also possible with plucked string instruments by selecting an appropriate plucking point, although the difference is perhaps more subtle.
In keyboard instruments, the contact point along the string (whether this be hammer, tangent, or plectrum) is a choice made by the instrument designer. Builders use a combination of experience and acoustic theory to establish the right set of contact points.
In harpsichords, often there are two sets of strings of equal length. These "choirs" usually differ in their plucking points. One choir has a "normal" plucking point, producing a canonical harpsichord sound; the other has a plucking point close to the bridge, producing a reedier "nasal" sound rich in upper harmonics.
A single string at a certain tension and length only produces one note. To produce multiple notes, string instruments use one of two methods. One is to add enough strings to cover the required range of different notes (e.g., as with the piano, which has sets of 88 strings to enable the performer to play 88 different notes). The other is to provide a way to stop the strings along their length to shorten the part that vibrates, which is the method used in guitar and violin family instruments to produce different notes from the same string. The piano and harp represent the first method, where each note on the instrument has its own string or course of multiple strings tuned to the same note. (Many notes on a piano are strung with a "choir" of three strings tuned alike, to increase the volume.) A guitar represents the second method—the player's fingers push the string against the fingerboard so that the string is pressed firmly against a metal fret. Pressing the string against a fret while plucking or strumming it shortens the vibrating part and thus produces a different note.
Some zithers combine stoppable (melody) strings with a greater number of "open" harmony or chord strings. On instruments with stoppable strings, such as the violin or guitar, the player can shorten the vibrating length of the string, using their fingers directly (or more rarely through some mechanical device, as in the nyckelharpa and the hurdy-gurdy). Such instruments usually have a fingerboard attached to the neck of the instrument, that provides a hard flat surface the player can stop the strings against. On some string instruments, the fingerboard has frets, raised ridges perpendicular to the strings, that stop the string at precise intervals, in which case the fingerboard is also called a fretboard.
Moving frets during performance is usually impractical. The bridges of a koto, on the other hand, may be moved by the player occasionally in the course of a single piece of music. Many modern Western harps include levers, either directly moved by fingers (on Celtic harps) or controlled by foot pedals (on orchestral harps), to raise the pitch of individual strings by a fixed amount. The Middle Eastern zither, the qanun, is equipped with small levers called mandal that let each course of multiple strings be incrementally retuned "on the fly" while the instrument is being played. These levers raise or lower the pitch of the string course by a microtone, less than a half step.
Some instruments are employed with sympathetic strings—which are additional strings not meant to be plucked. These strings resonate with the played notes, creating additional tones. Sympathetic strings vibrate naturally when various intervals, such as the unisons or the octaves of the notes of the sympathetic strings are plucked, bowed or struck. This system is used on the sarangi, the grand piano, the hardanger fiddle and the rubab.
A vibrating string strung on a very thick log, as a hypothetical example, would make only a very quiet sound, so string instruments are usually constructed in such a way that the vibrating string is coupled to a hollow resonating chamber, a soundboard, or both. On the violin, for example, the four strings pass over a thin wooden bridge resting on a hollow box (the body of the violin). The normal force applied to the body from the strings is supported in part by a small cylinder of wood called the soundpost. The violin body also has two "f-holes" carved on the top. The strings' vibrations are distributed via the bridge and soundpost to all surfaces of the instrument, and are thus made louder by matching of the acoustic impedance. The correct technical explanation is that they allow a better match to the acoustic impedance of the air.[citation needed]
It is sometimes said that the sounding board or soundbox "amplifies" the sound of the strings. In reality, no power amplification occurs, because all of the energy to produce sound comes from the vibrating string. The mechanism is that the sounding board of the instrument provides a larger surface area to create sound waves than that of the string and therefore acts as a matching element between the acoustic impedance of the string and that of the surrounding air. A larger vibrating surface can sometimes produce better matching, especially at lower frequencies.
All lute-type instruments traditionally have a bridge, which holds the string at the proper action height from the fret/finger board at one end of the strings. On acoustic instruments, the bridge performs an equally important function of transmitting string energy into the "sound box" of the instrument, thereby increasing the sound volume. The specific design and materials used in the construction of the bridge of an instrument have a dramatic impact upon both the sound and responsiveness of the instrument.
Achieving a tonal characteristic that is effective and pleasing to the player's and listener's ear is something of an art and craft, as well as a science, and the makers of string instruments often seek very high quality woods to this end, particularly spruce (chosen for its lightness, strength and flexibility) and maple (a very hard wood). Spruce is used for the sounding boards of instruments from the violin to the piano. Instruments such as the banjo use a drum, covered in natural or synthetic skin as their soundboard.
Acoustic instruments can also be made out of artificial materials, such as carbon fiber and fiberglass (particularly the larger, lower-pitched instruments, such as cellos and basses).
In the early 20th century, the Stroh violin used a diaphragm-type resonator and a metal horn to project the string sound, much like early mechanical gramophones. Its use declined beginning about 1920, as electronic amplification through power amplifiers and loudspeakers was developed and came into use. String instrument players can electronically amplify their instruments by connecting them to a PA system or a guitar amplifier.
Most string instruments can be fitted with piezoelectric or magnetic pickups to convert the string's vibrations into an electrical signal that is amplified and then converted back into sound by loudspeakers. Some players attach a pickup to their traditional string instrument to "electrify" it. Another option is to use a solid-bodied instrument, which reduces unwanted feedback howls or squeals.
Amplified string instruments can be much louder than their acoustic counterparts, so musicians can play them in relatively loud rock, blues, and jazz ensembles. Amplified instruments can also have their amplified tone modified by using electronic effects such as distortion, reverb, or wah-wah.
Bass-register string instruments such as the double bass and the electric bass are amplified with bass instrument amplifiers that are designed to reproduce low-frequency sounds. To modify the tone of amplified bass instruments, a range of electronic bass effects are available, such as distortion and chorus.
The string instruments usually used in the orchestra,[18] often called the "symphonic strings" or string section, are the violin, viola, cello and double bass.[19]
When orchestral instrumentation specifies "strings," it often means this combination of string parts. Orchestral works rarely omit any of these string parts, but often include additional string instruments, especially the concert harp and piano. In the Baroque orchestra from the 1600s–1750 (or with modern groups playing early music) harpsichord is almost always used to play the basso continuo part (the written-out bass line and improvised chords), and often a theorbo or lute or a pipe organ. In some classical music, such as the string quartet, the double bass is not typically used; the cello plays the bass role in this literature.
|
en/2743.html.txt
ADDED
@@ -0,0 +1,115 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
Plucked
|
2 |
+
|
3 |
+
String instruments, stringed instruments, or chordophones are musical instruments that produce sound from vibrating strings when the performer plays or sounds the strings in some manner.
|
4 |
+
|
5 |
+
Musicians play some string instruments by plucking the strings with their fingers or a plectrum—and others by hitting the strings with a light wooden hammer or by rubbing the strings with a bow. In some keyboard instruments, such as the harpsichord, the musician presses a key that plucks the string.
|
6 |
+
|
7 |
+
With bowed instruments, the player pulls a rosined horsehair bow across the strings, causing them to vibrate. With a hurdy-gurdy, the musician cranks a wheel whose rosined edge touches the strings.
|
8 |
+
|
9 |
+
Bowed instruments include the string section instruments of the Classical music orchestra (violin, viola, cello and double bass) and a number of other instruments (e.g., viols and gambas used in early music from the Baroque music era and fiddles used in many types of folk music). All of the bowed string instruments can also be plucked with the fingers, a technique called "pizzicato". A wide variety of techniques are used to sound notes on the electric guitar, including plucking with the fingernails or a plectrum, strumming and even "tapping" on the fingerboard and using feedback from a loud, distorted guitar amplifier to produce a sustained sound. Some types of string instrument are mainly plucked, such as the harp and the electric bass. In the Hornbostel-Sachs scheme of musical instrument classification, used in organology, string instruments are called chordophones. Other examples include the sitar, rebab, banjo, mandolin, ukulele, and bouzouki.
|
10 |
+
|
11 |
+
In most string instruments, the vibrations are transmitted to the body of the instrument, which often incorporates some sort of hollow or enclosed area. The body of the instrument also vibrates, along with the air inside it. The vibration of the body of the instrument and the enclosed hollow or chamber make the vibration of the string more audible to the performer and audience. The body of most string instruments is hollow. Some, however—such as electric guitar and other instruments that rely on electronic amplification—may have a solid wood body.
|
12 |
+
|
13 |
+
Dating to around c. 13,000–BC, a cave painting in the Trois Frères cave in France depicts what some believe is a musical bow, a hunting bow used as a single-stringed musical instrument.[1][2] From the musical bow, families of stringed instruments developed; since each string played a single note, adding strings added new notes, creating bow harps, harps and lyres.[3] In turn, this led to being able to play dyads and chords. Another innovation occurred when the bow harp was straightened out and a bridge used to lift the strings off the stick-neck, creating the lute.[4]
|
14 |
+
|
15 |
+
This picture of musical bow to harp bow is theory and has been contested. In 1965 Franz Jahnel wrote his criticism stating that the early ancestors of plucked instruments are not currently known.[5] He felt that the harp bow was a long cry from the sophistication of the civilizations of western Asia in 4000 BC that took the primitive technology and created "technically and artistically well-made harps, lyres, citharas, and lutes."[5]
|
16 |
+
|
17 |
+
Archaeological digs have identified some of the earliest stringed instruments in Ancient Mesopotamian sites, like the lyres of Ur, which include artifacts over three thousand years old. The development of lyre instruments required the technology to create a tuning mechanism to tighten and loosen the string tension. Lyres with wooden bodies and strings used for plucking or playing with a bow represent key instruments that point towards later harps and violin-type instruments; moreover, Indian instruments from 500 BC have been discovered with anything from 7 to 21 strings.
|
18 |
+
|
19 |
+
Musicologists have put forth examples of that 4th-century BC technology, looking at engraved images that have survived. The earliest image showing a lute-like instrument came from Mesopotamia prior to 3000 BC.[7] A cylinder seal from c. 3100 BC or earlier (now in the possession of the British Museum) shows what is thought to be a woman playing a stick lute.[7][8] From the surviving images, theororists have categorized the Mesopotamian lutes, showing that they developed into a long variety and a short.[9] The line of long lutes may have developed into the tamburs and pandura.[10] The line of short lutes was further developed to the east of Mesopotamia, in Bactria, Gandhara, and Northwest India, and shown in sculpture from the 2nd century BC through the 4th or 5th centuries AD.[11][12][13]
|
20 |
+
|
21 |
+
During the medieval era, instrument development varied in different regions of the world. Middle Eastern rebecs represented breakthroughs in terms of shape and strings, with a half a pear shape using three strings. Early versions of the violin and fiddle, by comparison, emerged in Europe through instruments such as the gittern, a four-stringed precursor to the guitar, and basic lutes. These instruments typically used catgut (animal intestine) and other materials, including silk, for their strings.
|
22 |
+
|
23 |
+
String instrument design refined during the Renaissance and into the Baroque period (1600–1750) of musical history. Violins and guitars became more consistent in design and were roughly similar to acoustic guitars of the 2000s. The violins of the Renaissance featured intricate woodwork and stringing, while more elaborate bass instruments such as the bandora were produced alongside quill-plucked citterns, and Spanish body guitars.
|
24 |
+
|
25 |
+
In the 19th century, string instruments were made more widely available through mass production, with wood string instruments a key part of orchestras – cellos, violas, and upright basses, for example, were now standard instruments for chamber ensembles and smaller orchestras. At the same time, the 19th-century guitar became more typically associated with six string models, rather than traditional five string versions.
|
26 |
+
|
27 |
+
Major changes to string instruments in the 20th century primarily involved innovations in electronic instrument amplification and electronic music – electric violins were available by the 1920s, and were an important part of emerging jazz music trends in the United States. The acoustic guitar was widely used in blues and jazz, but as an acoustic instrument, it was not loud enough to be a solo instrument, so these genres mostly used it as an accompaniment rhythm section instrument. In big bands of the 1920s, the acoustic guitar played backing chords, but it was not loud enough to play solos like the saxophone and trumpet. The development of guitar amplifiers, which contained a power amplifier and a loudspeaker in a wooden cabinet, let jazz guitarists play solos and be heard over a big band. The development of the electric guitar provided guitarists with an instrument that was built to connect to guitar amplifiers. Electric guitars have magnetic pickups, volume control knobs and an output jack.
In the 1960s, larger, more powerful guitar amplifiers were developed, called "stacks". These powerful amplifiers enabled guitarists to perform in rock bands that played in large venues such as stadiums and outdoor music festivals (e.g., Woodstock Music Festival). Along with the development of guitar amplifiers, a large range of electronic effects units, many in small stompbox pedals, were introduced in the 1960s and 1970s, such as fuzz pedals, flangers and phasers, enabling performers to create unique new sounds during the psychedelic rock era. Breakthroughs in electric guitar and bass technologies and playing styles enabled major breakthroughs in pop and rock music in the 1960s and 1970s. The distinctive sound of the amplified electric guitar was the centerpiece of new genres of music such as blues rock and jazz-rock fusion. The sonic power of the loudly amplified, highly distorted electric guitar was a key element of early heavy metal music, with the distorted guitar being used in lead guitar roles, and with power chords played as rhythm guitar.
The ongoing use of electronic amplification and effects units in string instruments, ranging from traditional instruments like the violin to the new electric guitar, added variety to contemporary classical music performances, and enabled experimentation in the dynamic and timbre (tone colour) range of orchestras, bands, and solo performances.[14]
String instruments can be divided into three groups.
It is also possible to divide the instruments into categories focused on how the instrument is played.
All string instruments produce sound from one or more vibrating strings, transferred to the air by the body of the instrument (or by a pickup in the case of electronically amplified instruments). They are usually categorised by the technique used to make the strings vibrate (or by the primary technique, in the case of instruments where more than one may apply.) The three most common techniques are plucking, bowing, and striking. An important difference between bowing and plucking is that in the former the phenomenon is periodic so that the overtones are kept in a strictly harmonic relationship to the fundamental.[15]
Plucking is a method of playing on instruments such as the veena, banjo, ukulele, guitar, harp, lute, mandolin, oud, and sitar, using either a finger, thumb, or quills (now plastic plectra) to pluck the strings.
Instruments normally played by bowing (see below) may also be plucked, a technique referred to by the Italian term pizzicato.
Bowing (Italian: arco) is a method used in some string instruments, including the violin, viola, cello, and the double bass (of the violin family), and the old viol family. The bow consists of a stick with a "ribbon" of parallel horse tail hairs stretched between its ends. The hair is coated with rosin so it can grip the string; moving the hair across a string causes a stick-slip phenomenon, making the string vibrate, and prompting the instrument to emit sound. Darker grades of rosin grip well in cool, dry climates, but may be too sticky in warmer, more humid weather. Violin and viola players generally use harder, lighter-colored rosin than players of lower-pitched instruments, who tend to favor darker, softer rosin.[16]
The ravanahatha is one of the oldest string instruments. Ancestors of the modern bowed string instruments are the rebab of the Islamic Empires, the Persian kamanche and the Byzantine lira. Other bowed instruments are the rebec, hardingfele, nyckelharpa, kokyū, erhu, igil, sarangi and K'ni. The hurdy-gurdy is bowed by a wheel. Rarely, the guitar has been played with a bow (rather than plucked) for unique effects.
The third common method of sound production in stringed instruments is to strike the string. The piano and hammered dulcimer use this method of sound production. Even though the piano strikes the strings, the use of felt hammers means that the sound that is produced can nevertheless be mellow and rounded, in contrast to the sharp attack produced when a very hard hammer strikes the strings.
Violin family string instrument players are occasionally instructed to strike the string with the stick of the bow, a technique called col legno. This yields a percussive sound along with the pitch of the note. A well-known use of col legno for orchestral strings is Gustav Holst's "Mars" movement from The Planets suite.
The aeolian harp employs a very unusual method of sound production: the strings are excited by the movement of the air.
Some instruments that have strings have an attached keyboard that the player presses keys on to trigger a mechanism that sounds the strings, instead of directly manipulating the strings. These include the piano, the clavichord, and the harpsichord. With these keyboard instruments, strings are occasionally plucked or bowed by hand. Modern composers such as Henry Cowell wrote music that requires that the player reach inside the piano and pluck the strings directly, "bow" them with bow hair wrapped around the strings, or play them by rolling the bell of a brass instrument such as a trombone on the array of strings. However, these are relatively rarely used special techniques.
Other keyed string instruments, small enough for a strolling musician to play, include the plucked autoharp, the bowed nyckelharpa, and the hurdy-gurdy, which is played by cranking a rosined wheel.
Steel-stringed instruments (such as the guitar, bass, violin, etc.) can be played using a magnetic field. An E-Bow is a small hand-held battery-powered device that magnetically excites the strings of an electric string instrument to provide a sustained, singing tone reminiscent of a held bowed violin note.
Third bridge is a plucking method where the player frets a string and strikes the side opposite the bridge. The technique is mainly used on electric instruments because these have a pickup that amplifies only the local string vibration. It is possible on acoustic instruments as well, but less effective. For instance, a player might press on the seventh fret on a guitar and pluck it on the head side to make a tone resonate on the opposite side. On electric instruments, this technique generates multitone sounds reminiscent of a clock or bell.
Electric string instruments, such as the electric guitar, can also be played without touching the strings by using audio feedback. When an electric guitar is plugged into a loud, powerful guitar amplifier with a loudspeaker and a high level of distortion is intentionally used, the guitar produces sustained high-pitched sounds. By changing the proximity of the guitar to the speaker, the guitarist can produce sounds that cannot be produced with standard plucking and picking techniques. This technique was popularized by Jimi Hendrix and others in the 1960s. It was widely used in psychedelic rock and heavy metal music.
There are three ways to change the pitch of a vibrating string. String instruments are tuned by varying the strings' tension because adjusting length or mass per unit length is impractical. Instruments with a fingerboard are then played by adjusting the length of the vibrating portion of the strings. The following observations all apply to a string that is infinitely flexible (a theoretical assumption, because in practical applications, strings are not infinitely flexible) strung between two fixed supports. Real strings have finite curvature at the bridge and nut, and the bridge, because of its motion, is not exactly a node of vibration. Hence the following statements about proportionality are approximations.
Pitch can be adjusted by varying the length of the string.[15] A longer string results in a lower pitch, while a shorter string results in a higher pitch. The frequency is inversely proportional to the length: f ∝ 1/L.
A string twice as long produces a tone of half the frequency (one octave lower).
Pitch can be adjusted by varying the tension of the string. A string with less tension (looser) results in a lower pitch, while a string with greater tension (tighter) results in a higher pitch. A homemade washtub bass made out of a length of rope, a broomstick and a washtub can produce different pitches by increasing the tension on the rope (producing a higher pitch) or reducing the tension (producing a lower pitch). The frequency is proportional to the square root of the tension: f ∝ √T.
The pitch of a string can also be varied by changing the linear density (mass per unit length) of the string. In practical applications, such as with double bass strings or bass piano strings, extra weight is added to strings by winding them with metal. A string with a heavier metal winding produces a lower pitch than a string of equal length without a metal winding. This can be seen on a 2016-era set of gut strings for double bass. The higher-pitched G string is often made of synthetic material, or sometimes animal intestine, with no metal wrapping. To enable the low E string to produce a much lower pitch with a string of the same length, it is wrapped with many wrappings of thin metal wire. This adds to its mass without making it too stiff. The frequency is inversely proportional to the square root of the linear density: f ∝ 1/√μ, where μ is the mass per unit length.
Given two strings of equal length and tension, the string with higher mass per unit length produces the lower pitch.
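Taken together, the three relationships above are the classical Mersenne relations for an ideal string: f = (1/2L)·√(T/μ). The short Python sketch below is only an illustration of those proportionalities; the function name and the numerical values (loosely in the range of a violin A string) are illustrative assumptions, not measurements of any particular instrument.

from math import sqrt

def string_frequency(length_m, tension_n, linear_density_kg_m):
    # Fundamental of an ideal string (Mersenne): f = (1 / (2 * L)) * sqrt(T / mu)
    return sqrt(tension_n / linear_density_kg_m) / (2 * length_m)

# Illustrative, assumed values: 0.33 m length, 60 N tension, 0.6 g/m linear density.
length, tension, mu = 0.33, 60.0, 6.0e-4

print(round(string_frequency(length, tension, mu), 1))        # reference pitch
print(round(string_frequency(2 * length, tension, mu), 1))    # double the length: one octave lower
print(round(string_frequency(length, 4 * tension, mu), 1))    # quadruple the tension: one octave higher
print(round(string_frequency(length, tension, 4 * mu), 1))    # quadruple the linear density: one octave lower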
The length of the string from nut to bridge on bowed or plucked instruments ultimately determines the distance between different notes on the instrument. For example, a double bass with its low range needs a scale length of around 42 inches (110 cm), whilst a violin scale is only about 13 inches (33 cm). On the shorter scale of the violin, the left hand may easily reach a range of slightly more than two octaves without shifting position, while on the bass' longer scale, a single octave or a ninth is reachable in lower positions.
In bowed instruments, the bow is normally placed perpendicularly to the string, at a point halfway between the end of the fingerboard and the bridge. However, different bow placements can be selected to change timbre. Application of the bow close to the bridge (known as sul ponticello) produces an intense, sometimes harsh sound, which acoustically emphasizes the upper harmonics. Bowing above the fingerboard (sul tasto) produces a purer tone with less overtone strength, emphasizing the fundamental, also known as flautando, since it sounds less reedy and more flute-like.
Bowed instruments pose a challenge to instrument builders, as compared with instruments that are only plucked (e.g., guitar), because on bowed instruments, the musician must be able to play one string at a time if they wish. As such, a bowed instrument must have a curved bridge that makes the "outer" strings lower in height than the "inner" strings. With such a curved bridge, the player can select one string at a time to play. On guitars and lutes, the bridge can be flat, because the strings are played by plucking them with the fingers, fingernails or a pick; by moving the fingers or pick to different positions, the player can play different strings. On bowed instruments, the need to play strings individually with the bow also limits the number of strings to about six or seven strings; with more strings, it would be impossible to select individual strings to bow. (Note: bowed strings can also play two bowed notes on two different strings at the same time, a technique called a double stop.) Indeed, on the orchestral string section instruments, four strings are the norm, with the exception of five strings used on some double basses. In contrast, with stringed keyboard instruments, 88 courses are used on a piano, and even though these strings are arranged on a flat bridge, the mechanism can play any of the notes individually.
Similar timbral distinctions are also possible with plucked string instruments by selecting an appropriate plucking point, although the difference is perhaps more subtle.
In keyboard instruments, the contact point along the string (whether this be hammer, tangent, or plectrum) is a choice made by the instrument designer. Builders use a combination of experience and acoustic theory to establish the right set of contact points.
In harpsichords, often there are two sets of strings of equal length. These "choirs" usually differ in their plucking points. One choir has a "normal" plucking point, producing a canonical harpsichord sound; the other has a plucking point close to the bridge, producing a reedier "nasal" sound rich in upper harmonics.
A single string at a certain tension and length only produces one note. To produce multiple notes, string instruments use one of two methods. One is to add enough strings to cover the required range of different notes (e.g., as with the piano, which has sets of 88 strings to enable the performer to play 88 different notes). The other is to provide a way to stop the strings along their length to shorten the part that vibrates, which is the method used in guitar and violin family instruments to produce different notes from the same string. The piano and harp represent the first method, where each note on the instrument has its own string or course of multiple strings tuned to the same note. (Many notes on a piano are strung with a "choir" of three strings tuned alike, to increase the volume.) A guitar represents the second method—the player's fingers push the string against the fingerboard so that the string is pressed firmly against a metal fret. Pressing the string against a fret while plucking or strumming it shortens the vibrating part and thus produces a different note.
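On fretted instruments tuned in equal temperament, each fret shortens the vibrating length by a factor of the twelfth root of two per semitone, so the distance from the nut to fret n is L·(1 − 2^(−n/12)). The sketch below illustrates that rule; the 650 mm guitar-like scale length is an assumed figure for illustration, not a reference to any particular instrument.

def fret_distance_from_nut(scale_length, fret_number):
    # Equal temperament: each semitone shortens the vibrating length by a factor of 2 ** (1 / 12).
    return scale_length * (1 - 2 ** (-fret_number / 12))

scale = 650.0  # assumed scale length in millimetres
for fret in (1, 5, 7, 12):
    sounding_length = scale - fret_distance_from_nut(scale, fret)
    print(fret, round(fret_distance_from_nut(scale, fret), 1), round(sounding_length, 1))
# At the 12th fret exactly half the scale length is left vibrating, sounding one octave above the open string.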
Some zithers combine stoppable (melody) strings with a greater number of "open" harmony or chord strings. On instruments with stoppable strings, such as the violin or guitar, the player can shorten the vibrating length of the string, using their fingers directly (or more rarely through some mechanical device, as in the nyckelharpa and the hurdy-gurdy). Such instruments usually have a fingerboard attached to the neck of the instrument, that provides a hard flat surface the player can stop the strings against. On some string instruments, the fingerboard has frets, raised ridges perpendicular to the strings, that stop the string at precise intervals, in which case the fingerboard is also called a fretboard.
Moving frets during performance is usually impractical. The bridges of a koto, on the other hand, may be moved by the player occasionally in the course of a single piece of music. Many modern Western harps include levers, either directly moved by fingers (on Celtic harps) or controlled by foot pedals (on orchestral harps), to raise the pitch of individual strings by a fixed amount. The Middle Eastern zither, the qanun, is equipped with small levers called mandal that let each course of multiple strings be incrementally retuned "on the fly" while the instrument is being played. These levers raise or lower the pitch of the string course by a microtone, less than a half step.
Some instruments are employed with sympathetic strings—which are additional strings not meant to be plucked. These strings resonate with the played notes, creating additional tones. Sympathetic strings vibrate naturally when various intervals, such as the unisons or the octaves of the notes of the sympathetic strings are plucked, bowed or struck. This system is used on the sarangi, the grand piano, the hardanger fiddle and the rubab.
A vibrating string strung on a very thick log, as a hypothetical example, would make only a very quiet sound, so string instruments are usually constructed in such a way that the vibrating string is coupled to a hollow resonating chamber, a soundboard, or both. On the violin, for example, the four strings pass over a thin wooden bridge resting on a hollow box (the body of the violin). The normal force applied to the body from the strings is supported in part by a small cylinder of wood called the soundpost. The violin body also has two "f-holes" carved on the top. The strings' vibrations are distributed via the bridge and soundpost to all surfaces of the instrument, and are thus made louder; more precisely, the larger vibrating surfaces and the f-holes allow a better match to the acoustic impedance of the air.[citation needed]
It is sometimes said that the sounding board or soundbox "amplifies" the sound of the strings. In reality, no power amplification occurs, because all of the energy to produce sound comes from the vibrating string. The mechanism is that the sounding board of the instrument provides a larger surface area to create sound waves than that of the string, and therefore acts as a matching element between the acoustic impedance of the string and that of the surrounding air. A larger vibrating surface can sometimes produce better matching, especially at lower frequencies.
All lute-type instruments traditionally have a bridge, which holds the string at the proper action height from the fret/finger board at one end of the strings. On acoustic instruments, the bridge performs an equally important function of transmitting string energy into the "sound box" of the instrument, thereby increasing the sound volume. The specific design of the bridge, and the materials used in its construction, have a dramatic impact upon both the sound and responsiveness of the instrument.
Achieving a tonal characteristic that is effective and pleasing to the player's and listener's ear is something of an art and craft, as well as a science, and the makers of string instruments often seek very high quality woods to this end, particularly spruce (chosen for its lightness, strength and flexibility) and maple (a very hard wood). Spruce is used for the sounding boards of instruments from the violin to the piano. Instruments such as the banjo use a drum, covered in natural or synthetic skin as their soundboard.
Acoustic instruments can also be made out of artificial materials, such as carbon fiber and fiberglass (particularly the larger, lower-pitched instruments, such as cellos and basses).
In the early 20th century, the Stroh violin used a diaphragm-type resonator and a metal horn to project the string sound, much like early mechanical gramophones. Its use declined beginning about 1920, as electronic amplification through power amplifiers and loudspeakers was developed and came into use. String instrument players can electronically amplify their instruments by connecting them to a PA system or a guitar amplifier.
Most string instruments can be fitted with piezoelectric or magnetic pickups to convert the string's vibrations into an electrical signal that is amplified and then converted back into sound by loudspeakers. Some players attach a pickup to their traditional string instrument to "electrify" it. Another option is to use a solid-bodied instrument, which reduces unwanted feedback howls or squeals.
Amplified string instruments can be much louder than their acoustic counterparts, so musicians can play them in relatively loud rock, blues, and jazz ensembles. Amplified instruments can also have their amplified tone modified by using electronic effects such as distortion, reverb, or wah-wah.
Bass-register string instruments such as the double bass and the electric bass are amplified with bass instrument amplifiers that are designed to reproduce low-frequency sounds. To modify the tone of amplified bass instruments, a range of electronic bass effects are available, such as distortion and chorus.
The string instruments usually used in the orchestra,[18] and often called the "symphonic strings" or string section, are:[19] the violins (first and second), violas, cellos, and double basses.
When orchestral instrumentation specifies "strings," it often means this combination of string parts. Orchestral works rarely omit any of these string parts, but often include additional string instruments, especially the concert harp and piano. In the Baroque orchestra of roughly 1600–1750 (or with modern groups playing early music), a harpsichord is almost always used to play the basso continuo part (the written-out bass line and improvised chords), often together with a theorbo, lute, or pipe organ. In some classical music, such as the string quartet, the double bass is not typically used; the cello plays the bass role in this literature.
en/2744.html.txt
A percussion instrument is a musical instrument that is sounded by being struck or scraped by a beater (including attached or enclosed beaters or rattles), struck, scraped or rubbed by hand, or struck against another similar instrument. Excluding zoomusicological instruments and the human voice, the percussion family is believed to include the oldest musical instruments.[1]
The percussion section of an orchestra most commonly contains instruments such as the timpani, snare drum, bass drum, cymbals, triangle and tambourine. However, the section can also contain non-percussive instruments, such as whistles and sirens, or a blown conch shell. Percussive techniques can even be applied to the human body itself, as in body percussion. On the other hand, keyboard instruments, such as the celesta, are not normally part of the percussion section, but keyboard percussion instruments such as the glockenspiel and xylophone (which do not have piano keyboards) are included.
Percussion instruments are most commonly divided into two classes: pitched percussion instruments, which produce notes with an identifiable pitch, and unpitched percussion instruments, which produce notes or sounds with an indefinite pitch.[2][failed verification][3][failed verification]
Percussion instruments may play not only rhythm, but also melody and harmony.
Percussion is commonly referred to as "the backbone" or "the heartbeat" of a musical ensemble, often working in close collaboration with bass instruments, when present. In jazz and other popular music ensembles, the pianist, bassist, drummer and sometimes the guitarist are referred to as the rhythm section. Most classical pieces written for full orchestra since the time of Haydn and Mozart are orchestrated to place emphasis on the strings, woodwinds, and brass. However, often at least one pair of timpani is included, though they rarely play continuously. Rather, they serve to provide additional accents when needed. In the 18th and 19th centuries, other percussion instruments (like the triangle or cymbals) were used, again generally sparingly. The use of percussion instruments became more frequent in 20th-century classical music.
In almost every style of music, percussion plays a pivotal role.[4] In military marching bands and pipes and drums, it is the beat of the bass drum that keeps the soldiers in step and at a regular speed, and it is the snare that provides that crisp, decisive air to the tune of a regiment. In classic jazz, one almost immediately thinks of the distinctive rhythm of the hi-hats or the ride cymbal when the word "swing" is spoken. In more recent popular-music culture, it is almost impossible to name three or four rock, hip-hop, rap, funk or even soul charts or songs that do not have some sort of percussive beat keeping the tune in time.
Because of the diversity of percussive instruments, it is not uncommon to find large musical ensembles composed entirely of percussion. Rhythm, melody, and harmony are all represented in these ensembles.
Music for pitched percussion instruments can be notated on a staff with the same treble and bass clefs used by many non-percussive instruments. Music for percussive instruments without a definite pitch can be notated with a specialist rhythm or percussion clef. The guitar also has a special "tab" staff. More often, a bass clef is substituted for the rhythm clef.
Percussion instruments are classified by various criteria sometimes depending on their construction, ethnic origin, function within musical theory and orchestration, or their relative prevalence in common knowledge.
The word percussion derives from the Latin verb percussio, "to beat, strike" in the musical sense, and the noun percussus, "a beating". As a noun in contemporary English, Wiktionary describes it as the collision of two bodies to produce a sound. The term is not unique to music, but has application in medicine and weaponry, as in percussion cap. However, all known uses of percussion appear to share a similar lineage beginning with the original Latin percussus. In a musical context, then, the term percussion instruments may originally have been coined to describe a family of musical instruments including drums, rattles, metal plates, or blocks that musicians beat or struck to produce sound.
The Hornbostel–Sachs system has no high-level section for percussion. Most percussion instruments as the term is normally understood are classified as idiophones and membranophones. However, the term percussion is instead used at lower levels of the Hornbostel–Sachs hierarchy, including to identify instruments struck with a non-sonorous object (hand, stick, striker) or struck against a non-sonorous object (the human body, the ground). This is opposed to concussion, which refers to instruments with two or more complementary sonorous parts that strike against each other, and to other meanings of the term. For example:
111.1 Concussion idiophones or clappers, played in pairs and beaten against each other, such as zills and clapsticks.
111.2 Percussion idiophones, includes many percussion instruments played with the hand or by a percussion mallet, such as the hang, gongs and the xylophone, but not drums and only some cymbals.
21 Struck drums, includes most types of drum, such as the timpani, snare drum, and tom-tom.
412.12 Percussion reeds, a class of wind instrument unrelated to percussion in the more common sense
There are many instruments that have some claim to being percussion, but are classified otherwise:
Percussion instruments are sometimes classified as pitched or unpitched. While valid, this classification is widely seen as inadequate. Rather, it may be more informative to describe percussion instruments in regards to one or more of the following four paradigms:
Many texts, including Teaching Percussion by Gary Cook of the University of Arizona, begin by studying the physical characteristics of instruments and the methods by which they can produce sound. This is perhaps the most scientifically pleasing assignment of nomenclature whereas the other paradigms are more dependent on historical or social circumstances. Based on observation and experimentation, one can determine how an instrument produces sound and then assign the instrument to one of the following four categories:
"Idiophones produce sounds through the vibration of their entire body."[6] Examples of idiophones:
Most objects commonly known as drums are membranophones. Membranophones produce sound when the membrane or head is struck with a hand, mallet, stick, beater, or improvised tool.[6]
Examples of membranophones:
Most instruments known as chordophones are defined as string instruments, wherein their sound is derived from the vibration of a string, but some such as these examples also fall under percussion instruments.
Most instruments known as aerophones are defined as wind instruments such as a saxophone whereby sound is produced by a stream of air being blown through the object. Although most aerophones are played by specialist players who are trained for that specific instrument, in a traditional ensemble setting, aerophones are played by a percussionist, generally due to the instrument's unconventional nature. Examples of aerophones played by percussionists
When classifying instruments by function it is useful to note if a percussion instrument makes a definite pitch or indefinite pitch.
For example, some percussion instruments such as the marimba and timpani produce an obvious fundamental pitch and can therefore play melody and serve harmonic functions in music. Other instruments such as crash cymbals and snare drums produce sounds with such complex overtones and a wide range of prominent frequencies that no pitch is discernible.
Percussion instruments in this group are sometimes referred to as pitched or tuned.
Examples of percussion instruments with definite pitch:
Instruments in this group are sometimes referred to as non-pitched, unpitched, or untuned. Traditionally these instruments are thought of as making a sound that contains such complex frequencies that no discernible pitch can be heard.
In fact many traditionally unpitched instruments, such as triangles and even cymbals, have also been produced as tuned sets.[3]
Examples of percussion instruments with indefinite pitch:
It is difficult to define what is common knowledge but there are instruments percussionists and composers use in contemporary music that most people wouldn't consider musical instruments. It is worthwhile to try to distinguish between instruments based on their acceptance or consideration by a general audience.
For example, most people would not consider an anvil, a brake drum (on a vehicle with drum brakes, the circular hub the brake shoes press against), or a fifty-five gallon oil barrel musical instruments yet composers and percussionists use these objects.
Percussion instruments generally fall into the following categories:
One pre-20th-century example of found percussion is the use of cannon, usually loaded with blank charges, in Tchaikovsky's 1812 Overture. John Cage, Harry Partch, Edgard Varèse, and Peter Schickele, all noted composers, created entire pieces of music using unconventional instruments. Beginning in the early 20th century, perhaps with Ionisation by Edgard Varèse, which used air-raid sirens among other things, composers began to require that percussionists invent or find objects to produce desired sounds and textures. Another example is the use of a hammer and saw in Penderecki's De Natura Sonoris No. 2. By the late 20th century, such instruments were common in modern percussion ensemble music and popular productions, such as the off-Broadway show Stomp. The rock band Aerosmith used a number of unconventional instruments in their song "Sweet Emotion", including shotguns, brooms, and a sugar bag. The metal band Slipknot is well known for playing unusual percussion items, having two percussionists in the band. Along with deep-sounding drums, their sound includes hitting baseball bats and other objects on beer kegs to create a distinctive sound.
It is not uncommon to discuss percussion instruments in relation to their cultural origin. This led to a division between instruments considered common or modern, and folk instruments with significant history or purpose within a geographic region or culture.
This category includes instruments that are widely available and popular throughout the world:
The percussionist uses various objects to strike a percussion instrument to produce sound.
The general term for a musician who plays percussion instruments is "percussionist" but the terms listed below often describe specialties:
en/2745.html.txt
A musical instrument is a device created or adapted to make musical sounds. In principle, any object that produces sound can be considered a musical instrument—it is through purpose that the object becomes a musical instrument. The history of musical instruments dates to the beginnings of human culture. Early musical instruments may have been used for ritual, such as a horn to signal success on the hunt, or a drum in a religious ceremony. Cultures eventually developed composition and performance of melodies for entertainment. Musical instruments evolved in step with changing applications and technologies.
The date and origin of the first device considered a musical instrument is disputed. The oldest object that some scholars refer to as a musical instrument, a simple flute, dates back as far as 67,000 years. Some consensus dates early flutes to about 37,000 years ago. However, most historians believe that determining a specific time of musical instrument invention is impossible, as many early musical instruments were made from animal skins, bone, wood and other non-durable materials.
Musical instruments developed independently in many populated regions of the world. However, contact among civilizations caused rapid spread and adaptation of most instruments in places far from their origin. By the Middle Ages, instruments from Mesopotamia were in maritime Southeast Asia, and Europeans played instruments originating from North Africa. Development in the Americas occurred at a slower pace, but cultures of North, Central, and South America shared musical instruments.
By 1400, musical instrument development slowed in many areas and was dominated by the Occident. During the Classical and Romantic periods of music, lasting from roughly 1750 to 1900, many new musical instruments were developed. While the evolution of traditional musical instruments slowed beginning in the 20th century, the proliferation of electricity led to the invention of new electric instruments, such as electric guitars and synthesizers.
Musical instrument classification is a discipline in its own right, and many systems of classification have been used over the years. Instruments can be classified by their effective range, their material composition, their size, role, etc. However, the most common academic method, Hornbostel–Sachs, uses the means by which they produce sound. The academic study of musical instruments is called organology.
A musical instrument is used to make musical sounds. Once humans moved from making sounds with their bodies — for example, by clapping—to using objects to create music from sounds, musical instruments were born.[1] Primitive instruments were probably designed to emulate natural sounds, and their purpose was ritual rather than entertainment.[2] The concept of melody and the artistic pursuit of musical composition were probably unknown to early players of musical instruments. A person sounding a bone flute to signal the start of a hunt does so without thought of the modern notion of "making music".[2]
Musical instruments are constructed in a broad array of styles and shapes, using many different materials. Early musical instruments were made from "found objects" such as shells and plant parts.[2] As instruments evolved, so did the selection and quality of materials. Virtually every material in nature has been used by at least one culture to make musical instruments.[2] One plays a musical instrument by interacting with it in some way — for example, by plucking the strings on a string instrument, striking the surface of a drum, or blowing into an animal horn.[2]
Researchers have discovered archaeological evidence of musical instruments in many parts of the world. Some artifacts have been dated to 67,000 years old, though critics often dispute the findings. Consensus is solidifying around artifacts dated to around 37,000 years old and later. Only artifacts made from durable materials, or constructed using durable methods, have tended to survive. As such, the specimens found cannot be irrefutably placed as the earliest musical instruments.[3]
In July 1995, Slovenian archaeologist Ivan Turk discovered a bone carving in the northwest region of Slovenia. The carving, named the Divje Babe Flute, features four holes that Canadian musicologist Bob Fink determined could have been used to play four notes of a diatonic scale. Researchers estimate the flute's age at between 43,400 and 67,000 years old, making it the oldest known musical instrument and the only musical instrument associated with the Neanderthal culture.[4] However, some archaeologists and ethnomusicologists dispute the flute's status as a musical instrument.[5] German archaeologists have found mammoth bone and swan bone flutes dating back to 30,000 to 37,000 years old in the Swabian Alps. The flutes were made in the Upper Paleolithic age, and are more commonly accepted as being the oldest known musical instruments.[6]
Archaeological evidence of musical instruments was discovered in excavations at the Royal Cemetery in the Sumerian city of Ur. These instruments, one of the first ensembles of instruments yet discovered, include nine lyres (the Lyres of Ur), two harps, a silver double flute, a sistrum and cymbals. A set of reed-sounded silver pipes discovered in Ur was the likely predecessor of modern bagpipes.[7] The cylindrical pipes feature three side-holes that allowed players to produce whole tone scales.[8] These excavations, carried out by Leonard Woolley in the 1920s, uncovered non-degradable fragments of instruments and the voids left by the degraded segments that, together, have been used to reconstruct them.[9] The graves these instruments were buried in have been carbon dated to between 2600 and 2500 BC, providing evidence that these instruments were used in Sumeria by this time.[10]
Archaeologists in the Jiahu site of central Henan province of China have found flutes made of bones that date back 7,000 to 9,000 years,[11] representing some of the "earliest complete, playable, tightly-dated, multinote musical instruments" ever found.[11][12]
Scholars agree that there are no completely reliable methods of determining the exact chronology of musical instruments across cultures. Comparing and organizing instruments based on their complexity is misleading, since advancements in musical instruments have sometimes reduced complexity. For example, construction of early slit drums involved felling and hollowing out large trees; later slit drums were made by opening bamboo stalks, a much simpler task.[13]
German musicologist Curt Sachs, one of the most prominent musicologists[14] and musical ethnologists[15] in modern times, argues that it is misleading to arrange the development of musical instruments by workmanship, since cultures advance at different rates and have access to different raw materials. For example, contemporary anthropologists comparing musical instruments from two cultures that existed at the same time but differed in organization, culture, and handicraft cannot determine which instruments are more "primitive".[16] Ordering instruments by geography is also not totally reliable, as it cannot always be determined when and how cultures contacted one another and shared knowledge. Sachs proposed that a geographical chronology until approximately 1400 is preferable, however, due to its limited subjectivity.[17] Beyond 1400, one can follow the overall development of musical instruments by time period.[17]
The science of marking the order of musical instrument development relies on archaeological artifacts, artistic depictions, and literary references. Since data in one research path can be inconclusive, all three paths provide a better historical picture.[3]
Until the 19th century AD, European-written music histories began with mythological accounts mingled with scripture of how musical instruments were invented. Such accounts included Jubal, descendant of Cain and "father of all such as handle the harp and the organ" (Genesis 4:21); Pan, inventor of the pan pipes; and Mercury, who is said to have made a dried tortoise shell into the first lyre. Modern histories have replaced such mythology with anthropological speculation, occasionally informed by archeological evidence. Scholars agree that there was no definitive "invention" of the musical instrument, since the definition of the term "musical instrument" is completely subjective to both the scholar and the would-be inventor. For example, a Homo habilis slapping his body could be the makings of a musical instrument regardless of the being's intent.[18]
Among the first devices external to the human body that are considered instruments are rattles, stampers, and various drums.[19] These instruments evolved due to the human motor impulse to add sound to emotional movements such as dancing.[20] Eventually, some cultures assigned ritual functions to their musical instruments, using them for hunting and various ceremonies.[21] Those cultures developed more complex percussion instruments and other instruments such as ribbon reeds, flutes, and trumpets. Some of these labels carry far different connotations from those used in modern day; early flutes and trumpets are so-labeled for their basic operation and function rather than resemblance to modern instruments.[22] Among early cultures for whom drums developed ritual, even sacred importance are the Chukchi people of the Russian Far East, the indigenous people of Melanesia, and many cultures of Africa. In fact, drums were pervasive throughout every African culture.[23] One East African tribe, the Wahinda, believed it was so holy that seeing a drum would be fatal to any person other than the sultan.[24]
Humans eventually developed the concept of using musical instruments to produce melody, which was previously common only in singing. Similar to the process of reduplication in language, instrument players first developed repetition and then arrangement. An early form of melody was produced by pounding two stamping tubes of slightly different sizes—one tube would produce a "clear" sound and the other would answer with a "darker" sound. Such instrument pairs also included bullroarers, slit drums, shell trumpets, and skin drums. Cultures who used these instrument pairs associated them with gender; the "father" was the bigger or more energetic instrument, while the "mother" was the smaller or duller instrument. Musical instruments existed in this form for thousands of years before patterns of three or more tones would evolve in the form of the earliest xylophone.[25] Xylophones originated in the mainland and archipelago of Southeast Asia, eventually spreading to Africa, the Americas, and Europe.[26] Along with xylophones, which ranged from simple sets of three "leg bars" to carefully tuned sets of parallel bars, various cultures developed instruments such as the ground harp, ground zither, musical bow, and jaw harp.[27]
Images of musical instruments begin to appear in Mesopotamian artifacts in 2800 BC or earlier. Beginning around 2000 BC, Sumerian and Babylonian cultures began delineating two distinct classes of musical instruments due to division of labor and the evolving class system. Popular instruments, simple and playable by anyone, evolved differently from professional instruments whose development focused on effectiveness and skill.[28] Despite this development, very few musical instruments have been recovered in Mesopotamia. Scholars must rely on artifacts and cuneiform texts written in Sumerian or Akkadian to reconstruct the early history of musical instruments in Mesopotamia. Even the process of assigning names to these instruments is challenging since there is no clear distinction among various instruments and the words used to describe them.[29]
Although Sumerian and Babylonian artists mainly depicted ceremonial instruments, historians have distinguished six idiophones used in early Mesopotamia: concussion clubs, clappers, sistra, bells, cymbals, and rattles.[30] Sistra are depicted prominently in a great relief of Amenhotep III,[31] and are of particular interest because similar designs have been found in far-reaching places such as Tbilisi, Georgia and among the Native American Yaqui tribe.[32] The people of Mesopotamia preferred stringed instruments, as evidenced by their proliferation in Mesopotamian figurines, plaques, and seals. Innumerable varieties of harps are depicted, as well as lyres and lutes, the forerunner of modern stringed instruments such as the violin.[33]
Musical instruments used by the Egyptian culture before 2700 BC bore striking similarity to those of Mesopotamia, leading historians to conclude that the civilizations must have been in contact with one another. Sachs notes that Egypt did not possess any instruments that the Sumerian culture did not also possess.[34] However, by 2700 BC the cultural contacts seem to have dissipated; the lyre, a prominent ceremonial instrument in Sumer, did not appear in Egypt for another 800 years.[34] Clappers and concussion sticks appear on Egyptian vases as early as 3000 BC. The civilization also made use of sistra, vertical flutes, double clarinets, arched and angular harps, and various drums.[35]
Little history is available in the period between 2700 BC and 1500 BC, as Egypt (and indeed, Babylon) entered a long violent period of war and destruction. This period saw the Kassites destroy the Babylonian empire in Mesopotamia and the Hyksos destroy the Middle Kingdom of Egypt. When the Pharaohs of Egypt conquered Southwest Asia in around 1500 BC, the cultural ties to Mesopotamia were renewed and Egypt's musical instruments also reflected heavy influence from Asiatic cultures.[34] Under their new cultural influences, the people of the New Kingdom began using oboes, trumpets, lyres, lutes, castanets, and cymbals.[36]
Unlike Mesopotamia and Egypt, professional musicians did not exist in Israel between 2000 and 1000 BC. While the history of musical instruments in Mesopotamia and Egypt relies on artistic representations, the culture in Israel produced few such representations. Scholars must therefore rely on information gleaned from the Bible and the Talmud.[37] The Hebrew texts mention two prominent instruments associated with Jubal: the ugab (pipes) and kinnor (lyre).[38] Other instruments of the period included the tof (frame drum), pa'amon (small bells or jingles), shofar, and the trumpet-like hasosra.[39]
The introduction of a monarchy in Israel during the 11th century BC produced the first professional musicians and with them a drastic increase in the number and variety of musical instruments.[40] However, identifying and classifying the instruments remains a challenge due to the lack of artistic interpretations. For example, stringed instruments of uncertain design called nevals and asors existed, but neither archaeology nor etymology can clearly define them.[41] In her book A Survey of Musical Instruments, American musicologist Sibyl Marcuse proposes that the nevel must be similar to vertical harp due to its relation to nabla, the Phoenician term for "harp".[42]
In Greece, Rome, and Etruria, the use and development of musical instruments stood in stark contrast to those cultures' achievements in architecture and sculpture. The instruments of the time were simple and virtually all of them were imported from other cultures.[43] Lyres were the principal instrument, as musicians used them to honor the gods.[44] Greeks played a variety of wind instruments they classified as aulos (reeds) or syrinx (flutes); Greek writing from that time reflects a serious study of reed production and playing technique.[8] Romans played reed instruments named tibia, featuring side-holes that could be opened or closed, allowing for greater flexibility in playing modes.[45] Other instruments in common use in the region included vertical harps derived from those of the Orient, lutes of Egyptian design, various pipes and organs, and clappers, which were played primarily by women.[46]
Evidence of musical instruments in use by early civilizations of India is almost completely lacking, making it impossible to reliably attribute instruments to the Munda and Dravidian language-speaking cultures that first settled the area. Rather, the history of musical instruments in the area begins with the Indus Valley Civilization that emerged around 3000 BC. Various rattles and whistles found among excavated artifacts are the only physical evidence of musical instruments.[47] A clay statuette indicates the use of drums, and examination of the Indus script has also revealed representations of vertical arched harps identical in design to those depicted in Sumerian artifacts. This discovery is among many indications that the Indus Valley and Sumerian cultures maintained cultural contact. Subsequent developments in musical instruments in India occurred with the Rigveda, or hymns. These songs used various drums, shell trumpets, harps, and flutes.[48] Other prominent instruments in use during the early centuries AD were the snake charmer's double clarinet, bagpipes, barrel drums, cross flutes, and short lutes. In all, India had no unique musical instruments until the Middle Ages.[49]
Musical instruments such as zithers appeared in Chinese writings around 12th century BC and earlier.[50] Early Chinese philosophers such as Confucius (551–479 BC), Mencius (372–289 BC), and Laozi shaped the development of musical instruments in China, adopting an attitude toward music similar to that of the Greeks. The Chinese believed that music was an essential part of character and community, and developed a unique system of classifying their musical instruments according to their material makeup.[51]
Idiophones were extremely important in Chinese music, hence the majority of early instruments were idiophones. Poetry of the Shang dynasty mentions bells, chimes, drums, and globular flutes carved from bone, the latter of which has been excavated and preserved by archaeologists.[52] The Zhou dynasty saw percussion instruments such as clappers, troughs, wooden fish, and yǔ (wooden tiger). Wind instruments such as flute, pan-pipes, pitch-pipes, and mouth organs also appeared in this time period.[53] The xiao (an end-blown flute) and various other instruments that spread through many cultures, came into use in China during and after the Han dynasty.[54]
Although civilizations in Central America attained a relatively high level of sophistication by the eleventh century AD, they lagged behind other civilizations in the development of musical instruments. For example, they had no stringed instruments; all of their instruments were idiophones, drums, and wind instruments such as flutes and trumpets. Of these, only the flute was capable of producing a melody.[55] In contrast, pre-Columbian South American civilizations in areas such as modern-day Peru, Colombia, Ecuador, Bolivia, and Chile were less advanced culturally but more advanced musically. South American cultures of the time used pan-pipes as well as varieties of flutes, idiophones, drums, and shell or wood trumpets.[56]
During the period of time loosely referred to as the Middle Ages, China developed a tradition of integrating musical influence from other regions. The first record of this type of influence is in 384 AD, when China established an orchestra in its imperial court after a conquest in Turkestan. Influences from Middle East, Persia, India, Mongolia, and other countries followed. In fact, Chinese tradition attributes many musical instruments from this period to those regions and countries.[57] Cymbals gained popularity, along with more advanced trumpets, clarinets, pianos, oboes, flutes, drums, and lutes.[58] Some of the first bowed zithers appeared in China in the 9th or 10th century, influenced by Mongolian culture.[59]
India experienced similar development to China in the Middle Ages; however, stringed instruments developed differently as they accommodated different styles of music. While stringed instruments of China were designed to produce precise tones capable of matching the tones of chimes, stringed instruments of India were considerably more flexible. This flexibility suited the slides and tremolos of Hindu music. Rhythm was of paramount importance in Indian music of the time, as evidenced by the frequent depiction of drums in reliefs dating to the Middle Ages. The emphasis on rhythm is an aspect native to Indian music.[60] Historians divide the development of musical instruments in medieval India between pre-Islamic and Islamic periods due to the different influence each period provided.[61]
In pre-Islamic times, idiophones such as handbells, cymbals, and peculiar instruments resembling gongs came into wide use in Hindu music. The gong-like instrument was a bronze disk that was struck with a hammer instead of a mallet. Tubular drums, stick zithers (veena), short fiddles, double and triple flutes, coiled trumpets, and curved India horns emerged in this time period.[62] Islamic influences brought new types of drums, perfectly circular or octagonal as opposed to the irregular pre-Islamic drums.[63] Persian influence brought oboes and sitars, although Persian sitars had three strings and the Indian version had from four to seven.[64] Islamic culture also introduced double-clarinet instruments such as the alboka (from Arabic al-buq, "horn"), which nowadays survives only in the Basque Country. It must be played using the technique of circular breathing.
Southeast Asian musical innovations include those during a period of Indian influence that ended around 920 AD.[65] Balinese and Javanese music made use of xylophones and metallophones, bronze versions of the former.[66] The most prominent and important musical instrument of Southeast Asia was the gong. While the gong likely originated in the geographical area between Tibet and Burma, it was part of every category of human activity in maritime Southeast Asia including Java.[67]
The areas of Mesopotamia and the Arabian Peninsula experienced rapid growth and sharing of musical instruments once they were united by Islamic culture in the seventh century.[68] Frame drums and cylindrical drums of various depths were immensely important in all genres of music.[69] Conical oboes were involved in the music that accompanied wedding and circumcision ceremonies. Persian miniatures provide information on the development of kettle drums in Mesopotamia that spread as far as Java.[70] Various lutes, zithers, dulcimers, and harps spread as far as Madagascar to the south and modern-day Sulawesi to the east.[71]
Despite the influences of Greece and Rome, most musical instruments in Europe during the Middle Ages came from Asia. The lyre is the only musical instrument that may have been invented in Europe until this period.[72] Stringed instruments were prominent in Middle Age Europe. The central and northern regions used mainly lutes, stringed instruments with necks, while the southern region used lyres, which featured a two-armed body and a crossbar.[72] Various harps served Central and Northern Europe as far north as Ireland, where the harp eventually became a national symbol.[73] Lyres propagated through the same areas, as far east as Estonia.[74]
European music between 800 and 1100 became more sophisticated, more frequently requiring instruments capable of polyphony. The 9th-century Persian geographer Ibn Khordadbeh mentioned in his lexicographical discussion of music instruments that, in the Byzantine Empire, typical instruments included the urghun (organ), shilyani (probably a type of harp or lyre), salandj (probably a bagpipe) and the lyra.[75] The Byzantine lyra, a bowed string instrument, is an ancestor of most European bowed instruments, including the violin.[76]
The monochord served as a precise measure of the notes of a musical scale, allowing more accurate musical arrangements.[77] Mechanical hurdy-gurdies allowed single musicians to play more complicated arrangements than a fiddle would; both were prominent folk instruments in the Middle Ages.[78][79] Southern Europeans played short and long lutes whose pegs extended to the sides, unlike the rear-facing pegs of Central and Northern European instruments.[80] Idiophones such as bells and clappers served various practical purposes, such as warning of the approach of a leper.[81]
The ninth century revealed the first bagpipes, which spread throughout Europe and had many uses from folk instruments to military instruments.[82] The construction of pneumatic organs evolved in Europe starting in fifth-century Spain, spreading to England in about 700.[83] The resulting instruments varied in size and use from portable organs worn around the neck to large pipe organs.[84] Literary accounts of organs being played in English Benedictine abbeys toward the end of the tenth century are the first references to organs being connected to churches.[85] Reed players of the Middle Ages were limited to oboes; no evidence of clarinets exists during this period.[86]
Musical instrument development was dominated by the Occident from 1400 on; indeed, the most profound changes occurred during the Renaissance period.[18] Instruments took on other purposes than accompanying singing or dance, and performers used them as solo instruments. Keyboards and lutes developed as polyphonic instruments, and composers arranged increasingly complex pieces using more advanced tablature. Composers also began designing pieces of music for specific instruments.[18] In the latter half of the sixteenth century, orchestration came into common practice as a method of writing music for a variety of instruments. Composers now specified orchestration where individual performers once applied their own discretion.[87] The polyphonic style dominated popular music, and the instrument makers responded accordingly.[88]
Beginning in about 1400, the rate of development of musical instruments increased in earnest as compositions demanded more dynamic sounds. People also began writing books about creating, playing, and cataloging musical instruments; the first such book was Sebastian Virdung's 1511 treatise Musica getutscht und ausgezogen ('Music Germanized and Abstracted').[87] Virdung's work is noted as being particularly thorough for including descriptions of "irregular" instruments such as hunters' horns and cow bells, though Virdung is critical of the same. Other books followed, including Arnolt Schlick's Spiegel der Orgelmacher und Organisten ('Mirror of Organ Makers and Organ Players') the following year, a treatise on organ building and organ playing.[89] Of the instructional books and references published in the Renaissance era, one is noted for its detailed description and depiction of all wind and stringed instruments, including their relative sizes. This book, the Syntagma musicum by Michael Praetorius, is now considered an authoritative reference of sixteenth-century musical instruments.[90]
In the sixteenth century, musical instrument builders gave most instruments – such as the violin – the "classical shapes" they retain today. An emphasis on aesthetic beauty also developed; listeners were as pleased with the physical appearance of an instrument as they were with its sound. Therefore, builders paid special attention to materials and workmanship, and instruments became collectibles in homes and museums.[91] It was during this period that makers began constructing instruments of the same type in various sizes to meet the demand of consorts, or ensembles playing works written for these groups of instruments.[92]
Instrument builders developed other features that endure today. For example, while organs with multiple keyboards and pedals already existed, the first organs with solo stops emerged in the early fifteenth century. These stops were meant to produce a mixture of timbres, a development needed for the complexity of music of the time.[93] Trumpets evolved into their modern form to improve portability, and players used mutes to properly blend into chamber music.[94]
Beginning in the seventeenth century, composers began writing works to a higher emotional degree. They felt that polyphony better suited the emotional style they were aiming for and began writing musical parts for instruments that would complement the singing human voice.[88] As a result, many instruments that were incapable of larger ranges and dynamics, and therefore were seen as unemotional, fell out of favor. One such instrument was the shawm.[95] Bowed instruments such as the violin, viola, baryton, and various lutes dominated popular music.[96] Beginning in around 1750, however, the lute disappeared from musical compositions in favor of the rising popularity of the guitar.[97] As the prevalence of string orchestras rose, wind instruments such as the flute, oboe, and bassoon were readmitted to counteract the monotony of hearing only strings.[98]
In the mid-seventeenth century, what was known as a hunter's horn underwent transformation into an "art instrument" consisting of a lengthened tube, a narrower bore, a wider bell, and much wider range. The details of this transformation are unclear, but the modern horn or, more colloquially, French horn, had emerged by 1725.[99] The slide trumpet appeared, a variation that includes a long-throated mouthpiece that slid in and out, allowing the player infinite adjustments in pitch. This variation on the trumpet was unpopular due to the difficulty involved in playing it.[100] Organs underwent tonal changes in the Baroque period, as manufacturers such as Abraham Jordan of London made the stops more expressive and added devices such as expressive pedals. Sachs viewed this trend as a "degeneration" of the general organ sound.[101]
During the Classical and Romantic periods of music, lasting from roughly 1750 to 1900, a great number of musical instruments capable of producing new timbres and higher volume were developed and introduced into popular music. The design changes that broadened the quality of timbres allowed instruments to produce a wider variety of expression. Large orchestras rose in popularity and, in parallel, composers determined to produce entire orchestral scores that made use of the expressive abilities of modern instruments. Since instruments were involved in collaborations of a much larger scale, their designs had to evolve to accommodate the demands of the orchestra.[102]
Some instruments also had to become louder to fill larger halls and be heard over sizable orchestras. Flutes and bowed instruments underwent many modifications and design changes—most of them unsuccessful—in efforts to increase volume. Other instruments were changed just so they could play their parts in the scores. Trumpets traditionally had a "defective" range—they were incapable of producing certain notes with precision.[103] New instruments such as the clarinet, saxophone, and tuba became fixtures in orchestras. Instruments such as the clarinet also grew into entire "families" of instruments capable of different ranges: small clarinets, normal clarinets, bass clarinets, and so on.[102]
Accompanying the changes to timbre and volume was a shift in the typical pitch used to tune instruments. Instruments meant to play together, as in an orchestra, must be tuned to the same standard lest they produce audibly different sounds while playing the same notes. Beginning in 1762, the average concert pitch began rising from a low of 377 vibrations per second to a high of 457 in 1880 Vienna.[104] Different regions, countries, and even instrument manufacturers preferred different standards, making orchestral collaboration a challenge. Despite the efforts of two organized international summits attended by noted composers like Hector Berlioz, no standard could be agreed upon.[105]
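To give a sense of how audible such differences are, the minimal Python sketch below (illustrative only; the 377 and 457 vibrations-per-second extremes are simply the figures quoted above) computes the gap between two tuning standards in cents, where 100 cents equal one equal-tempered semitone.

```python
import math

def cents_between(f_low_hz: float, f_high_hz: float) -> float:
    """Interval between two frequencies in cents (100 cents = one semitone)."""
    return 1200 * math.log2(f_high_hz / f_low_hz)

# Extremes quoted above for the rising concert pitch between 1762 and 1880:
print(f"{cents_between(377, 457):.0f} cents")  # ~333 cents, i.e. more than three semitones
```

A gap of several semitones explains why instruments built to different regional standards could not simply play together in tune.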
The evolution of traditional musical instruments slowed beginning in the 20th century.[106] Instruments such as the violin, flute, french horn, and harp are largely the same as those manufactured throughout the eighteenth and nineteenth centuries. Gradual iterations do emerge; for example, the "New Violin Family" began in 1964 to provide differently sized violins to expand the range of available sounds.[107] The slowdown in development was a practical response to the concurrent slowdown in orchestra and venue size.[108] Despite this trend in traditional instruments, the development of new musical instruments exploded in the twentieth century, and the variety of instruments developed overshadows any prior period.[106]
The proliferation of electricity in the 20th century led to the creation of an entirely new category of musical instruments: electronic instruments, or electrophones.[109] The vast majority of electrophones produced in the first half of the 20th century were what Sachs called "electromechanical instruments"; they have mechanical parts that produce sound vibrations, and these vibrations are picked up and amplified by electrical components. Examples of electromechanical instruments include Hammond organs and electric guitars.[109] Sachs also defined a subcategory of "radioelectric instruments" such as the theremin, which produces music through the player's hand movements around two antennas.[110]
The latter half of the 20th century saw the evolution of synthesizers, which produce sound using analog or digital circuits and microchips. In the late 1960s, Bob Moog and other inventors developed the first commercial synthesizers, such as the Moog synthesizer.[111] Whereas once they had filled rooms, synthesizers now can be embedded in any electronic device,[111] and are ubiquitous in modern music.[112] Samplers, introduced around 1980, allow users to sample and reuse existing sounds, and were important to the development of hip hop.[113] 1982 saw the introduction of MIDI, a standardized means of synchronizing electronic instruments that remains an industry standard.[114] The modern proliferation of computers and microchips has created an industry of electronic musical instruments.[115]
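As a concrete illustration of what MIDI standardized, the hedged sketch below builds the raw three-byte "note on" and "note off" messages that one instrument or sequencer sends another. The helper names are my own, but the byte layout (status byte, note number, velocity) is the core of the MIDI 1.0 channel-voice messages.

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Raw MIDI 1.0 'note on' message: status byte, note number, velocity."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

def note_off(channel: int, note: int) -> bytes:
    """Matching 'note off' message (velocity 0 by convention here)."""
    return bytes([0x80 | channel, note, 0])

# Middle C (note 60) on channel 1 at moderate velocity:
print(note_on(0, 60, 96).hex())   # '903c60'
print(note_off(0, 60).hex())      # '803c00'
```

Because every manufacturer agreed on this byte-level format, a keyboard from one maker can drive a synthesizer or sampler from another.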
There are many different methods of classifying musical instruments. Various methods examine aspects such as the physical properties of the instrument (material, color, shape, etc.), the use for the instrument, the means by which music is produced with the instrument, the range of the instrument, and the instrument's place in an orchestra or other ensemble. Most methods are specific to a geographic area or cultural group and were developed to serve the unique classification requirements of the group.[116] The problem with these specialized classification schemes is that they tend to break down once they are applied outside of their original area. For example, a system based on instrument use would fail if a culture invented a new use for the same instrument. Scholars recognize Hornbostel–Sachs as the only system that applies to any culture and, more importantly, provides the only possible classification for each instrument.[117][118] The most common types of instrument classifications are strings, brass, woodwind, and percussion.
An ancient Hindu system named the Natya Shastra, written by the sage Bharata Muni and dating from between 200 BC and 200 AD, divides instruments into four main classification groups: instruments where the sound is produced by vibrating strings; percussion instruments with skin heads; instruments where the sound is produced by vibrating columns of air; and "solid", or non-skin, percussion instruments.[117] This system was adapted to some degree in 12th-century Europe by Johannes de Muris, who used the terms tensibilia (stringed instruments), inflatibilia (wind instruments), and percussibilia (all percussion instruments).[119] In 1880, Victor-Charles Mahillon adapted the Natya Shastra and assigned Greek labels to the four classifications: chordophones (stringed instruments), membranophones (skin-head percussion instruments), aerophones (wind instruments), and autophones (non-skin percussion instruments).[117]
Erich von Hornbostel and Curt Sachs adopted Mahillon's scheme and published an extensive new scheme for classification in Zeitschrift für Ethnologie in 1914. Hornbostel and Sachs used most of Mahillon's system, but replaced the term autophone with idiophone.[117]
The original Hornbostel–Sachs system classified instruments into four main groups:
Sachs later added a fifth category, electrophones, such as theremins, which produce sound by electronic means.[109] Within each category are many subgroups. The system has been criticised and revised over the years, but remains widely used by ethnomusicologists and organologists.[119][124]
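Purely as an illustration of how the top-level Hornbostel–Sachs groups partition instruments named elsewhere in this article, here is a minimal Python sketch; the assignments are examples, not an exhaustive or authoritative catalogue.

```python
# Illustrative top-level Hornbostel-Sachs groups with a few example instruments.
HORNBOSTEL_SACHS = {
    "idiophones":     ["cymbals", "xylophone", "bell"],
    "membranophones": ["snare drum", "timpani"],
    "chordophones":   ["violin", "lute", "zither"],
    "aerophones":     ["flute", "oboe", "organ"],
    "electrophones":  ["theremin", "Hammond organ"],  # Sachs's later fifth category
}

def top_level_group(instrument: str) -> str | None:
    """Return the top-level group an instrument is filed under, if listed."""
    for group, examples in HORNBOSTEL_SACHS.items():
        if instrument in examples:
            return group
    return None

print(top_level_group("theremin"))  # electrophones
```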
André Schaeffner, a curator at the Musée de l'Homme, disagreed with the Hornbostel–Sachs system and developed his own system in 1932. Schaeffner believed that the pure physics of a musical instrument, rather than its specific construction or playing method, should always determine its classification. (Hornbostel–Sachs, for example, divides aerophones on the basis of sound production, but membranophones on the basis of the shape of the instrument.) His system divided instruments into two categories: instruments with solid, vibrating bodies and instruments containing vibrating air.[125]
Musical instruments are also often classified by their musical range in comparison with other instruments in the same family. This exercise is useful when placing instruments in context of an orchestra or other ensemble.
These terms are named after singing voice classifications:
Some instruments fall into more than one category. For example, the cello may be considered tenor, baritone or bass, depending on how its music fits into the ensemble. The trombone and French horn may be alto, tenor, baritone, or bass depending on the range in which they are played. Many instruments have their range as part of their name: soprano saxophone, tenor saxophone, baritone horn, alto flute, bass guitar, etc. Additional adjectives describe instruments above the soprano range or below the bass, for example the sopranino saxophone and contrabass clarinet. When used in the name of an instrument, these terms are relative, describing the instrument's range in comparison to other instruments of its family and not in comparison to the human voice range or instruments of other families. For example, a bass flute's range is from C3 to F♯6, while a bass clarinet plays about one octave lower.
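The ranges above can be made concrete with a short conversion from scientific pitch notation to frequency. The sketch below assumes the common equal-tempered tuning with A4 = 440 Hz; the function name and regex are mine, but the MIDI-number formula is standard.

```python
import re

_SEMITONE = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def note_to_frequency(name: str, a4_hz: float = 440.0) -> float:
    """Convert scientific pitch notation (e.g. 'C3', 'F#6') to hertz, assuming 12-tone equal temperament."""
    m = re.fullmatch(r"([A-G])([#b]?)(-?\d+)", name)
    if m is None:
        raise ValueError(f"not a note name: {name!r}")
    letter, accidental, octave = m.groups()
    semitone = _SEMITONE[letter] + {"#": 1, "b": -1, "": 0}[accidental]
    midi_number = 12 * (int(octave) + 1) + semitone   # C4 -> 60, A4 -> 69
    return a4_hz * 2 ** ((midi_number - 69) / 12)

print(round(note_to_frequency("C3"), 1))   # 130.8 Hz, bottom of the bass flute range cited above
print(round(note_to_frequency("F#6"), 1))  # 1480.0 Hz, top of that range
```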
The materials used in making musical instruments vary greatly by culture and application. Many of the materials have special significance owing to their source or rarity. Some cultures worked substances from the human body into their instruments. In ancient Mexico, for example, the material drums were made from might contain actual human body parts obtained from sacrificial offerings. In New Guinea, drum makers would mix human blood into the adhesive used to attach the membrane.[126] Mulberry trees are held in high regard in China owing to their mythological significance—instrument makers would hence use them to make zithers. The Yakuts believe that making drums from trees struck by lightning gives them a special connection to nature.[127]
Musical instrument construction is a specialized trade that requires years of training, practice, and sometimes an apprenticeship. Most makers of musical instruments specialize in one genre of instruments; for example, a luthier makes only stringed instruments. Some make only one type of instrument such as a piano. Whatever the instrument constructed, the instrument maker must consider materials, construction technique, and decoration, creating a balanced instrument that is both functional and aesthetically pleasing.[128] Some builders are focused on a more artistic approach and develop experimental musical instruments, often meant for individual playing styles developed by the builder themself.
Regardless of how the sound is produced, many musical instruments have a keyboard as the user interface. Keyboard instruments are any instruments that are played with a musical keyboard, which is a row of small keys that can be pressed. Every key generates one or more sounds; most keyboard instruments have extra means (pedals for a piano, stops and a pedal keyboard for an organ) to manipulate these sounds. They may produce sound by wind being fanned (organ) or pumped (accordion),[130][131] vibrating strings either hammered (piano) or plucked (harpsichord),[132][133] by electronic means (synthesizer),[134] or in some other way. Sometimes, instruments that do not usually have a keyboard, such as the glockenspiel, are fitted with one.[135] Though they have no moving parts and are struck by mallets held in the player's hands, they have the same physical arrangement of keys and produce soundwaves in a similar manner. The theremin, an electrophone, is played without physical contact by the player. The theremin senses the proximity of the player's hands, which triggers changes in its sound. More recently, a MIDI controller keyboard used with a digital audio workstation may have a musical keyboard and a bank of sliders, knobs, and buttons that change many sound parameters of a synthesizer.
en/2746.html.txt
ADDED
@@ -0,0 +1,75 @@
A percussion instrument is a musical instrument that is sounded by being struck or scraped by a beater including attached or enclosed beaters or rattles struck, scraped or rubbed by hand or struck against another similar instrument. Excluding zoomusicological instruments and the human voice, the percussion family is believed to include the oldest musical instruments.[1]
The percussion section of an orchestra most commonly contains instruments such as the timpani, snare drum, bass drum, cymbals, triangle and tambourine. However, the section can also contain non-percussive instruments, such as whistles and sirens, or a blown conch shell. Percussive techniques can even be applied to the human body itself, as in body percussion. On the other hand, keyboard instruments, such as the celesta, are not normally part of the percussion section, but keyboard percussion instruments such as the glockenspiel and xylophone (which do not have piano keyboards) are included.
Percussion instruments are most commonly divided into two classes: Pitched percussion instruments, which produce notes with an identifiable pitch, and unpitched percussion instruments, which produce notes or sounds in an indefinite pitch.[2][failed verification][3][failed verification]
Percussion instruments may play not only rhythm, but also melody and harmony.
Percussion is commonly referred to as "the backbone" or "the heartbeat" of a musical ensemble, often working in close collaboration with bass instruments, when present. In jazz and other popular music ensembles, the pianist, bassist, drummer and sometimes the guitarist are referred to as the rhythm section. Most classical pieces written for full orchestra since the time of Haydn and Mozart are orchestrated to place emphasis on the strings, woodwinds, and brass. However, often at least one pair of timpani is included, though they rarely play continuously. Rather, they serve to provide additional accents when needed. In the 18th and 19th centuries, other percussion instruments (like the triangle or cymbals) were used, again generally sparingly. The use of percussion instruments became more frequent in 20th-century classical music.
In almost every style of music, percussion plays a pivotal role.[4] In military marching bands and pipes and drums, it is the beat of the bass drum that keeps the soldiers in step and at a regular speed, and it is the snare that provides that crisp, decisive air to the tune of a regiment. In classic jazz, one almost immediately thinks of the distinctive rhythm of the hi-hats or the ride cymbal when the word swing is spoken. In more recent popular-music culture, it is almost impossible to name three or four rock, hip-hop, rap, funk or even soul charts or songs that do not have some sort of percussive beat keeping the tune in time.
Because of the diversity of percussive instruments, it is not uncommon to find large musical ensembles composed entirely of percussion. Rhythm, melody, and harmony are all represented in these ensembles.
Music for pitched percussion instruments can be notated on a staff with the same treble and bass clefs used by many non-percussive instruments. Music for percussive instruments without a definite pitch can be notated with a specialist rhythm or percussion-clef. The guitar also has a special "tab" staff. More often a bass clef is substituted for rhythm clef.
Percussion instruments are classified by various criteria sometimes depending on their construction, ethnic origin, function within musical theory and orchestration, or their relative prevalence in common knowledge.
The word percussion derives from the Latin verb percussio, to beat or strike in the musical sense, and the noun percussus, a beating. As a noun in contemporary English, Wiktionary describes it as the collision of two bodies to produce a sound. The term is not unique to music, but has application in medicine and weaponry, as in percussion cap. However, all known uses of percussion appear to share a similar lineage beginning with the original Latin percussus. In a musical context, then, the term percussion instruments may originally have been coined to describe a family of musical instruments including drums, rattles, metal plates, or blocks that musicians beat or struck to produce sound.
The Hornbostel–Sachs system has no high-level section for percussion. Most percussion instruments as the term is normally understood are classified as idiophones and membranophones. However, the term percussion is instead used at lower levels of the Hornbostel–Sachs hierarchy, including to identify instruments struck with a non-sonorous object (hand, stick, striker) or struck against a non-sonorous object (the human body, the ground). This is opposed to concussion, which refers to instruments with two or more complementary sonorous parts that strike against each other, among other meanings. For example:
111.1 Concussion idiophones or clappers, played in pairs and beaten against each other, such as zills and clapsticks.
111.2 Percussion idiophones, includes many percussion instruments played with the hand or by a percussion mallet, such as the hang, gongs and the xylophone, but not drums and only some cymbals.
21 Struck drums, includes most types of drum, such as the timpani, snare drum, and tom-tom.
412.12 Percussion reeds, a class of wind instrument unrelated to percussion in the more common sense
There are many instruments that have some claim to being percussion, but are classified otherwise:
Percussion instruments are sometimes classified as pitched or unpitched. While valid, this classification is widely seen as inadequate. Rather, it may be more informative to describe percussion instruments in regards to one or more of the following four paradigms:
Many texts, including Teaching Percussion by Gary Cook of the University of Arizona, begin by studying the physical characteristics of instruments and the methods by which they can produce sound. This is perhaps the most scientifically pleasing assignment of nomenclature whereas the other paradigms are more dependent on historical or social circumstances. Based on observation and experimentation, one can determine how an instrument produces sound and then assign the instrument to one of the following four categories:
"Idiophones produce sounds through the vibration of their entire body."[6] Examples of idiophones:
Most objects commonly known as drums are membranophones. Membranophones produce sound when the membrane or head is struck with a hand, mallet, stick, beater, or improvised tool.[6]
Examples of membranophones:
Most instruments known as chordophones are defined as string instruments, wherein their sound is derived from the vibration of a string, but some such as these examples also fall under percussion instruments.
Most instruments known as aerophones are defined as wind instruments such as a saxophone whereby sound is produced by a stream of air being blown through the object. Although most aerophones are played by specialist players who are trained for that specific instrument, in a traditional ensemble setting, aerophones are played by a percussionist, generally due to the instrument's unconventional nature. Examples of aerophones played by percussionists
When classifying instruments by function it is useful to note if a percussion instrument makes a definite pitch or indefinite pitch.
For example, some percussion instruments such as the marimba and timpani produce an obvious fundamental pitch and can therefore play melody and serve harmonic functions in music. Other instruments such as crash cymbals and snare drums produce sounds with such complex overtones and a wide range of prominent frequencies that no pitch is discernible.
Percussion instruments in this group are sometimes referred to as pitched or tuned.
Examples of percussion instruments with definite pitch:
Instruments in this group are sometimes referred to as non-pitched, unpitched, or untuned. Traditionally these instruments are thought of as making a sound that contains such complex frequencies that no discernible pitch can be heard.
In fact many traditionally unpitched instruments, such as triangles and even cymbals, have also been produced as tuned sets.[3]
Examples of percussion instruments with indefinite pitch:
It is difficult to define what is common knowledge but there are instruments percussionists and composers use in contemporary music that most people wouldn't consider musical instruments. It is worthwhile to try to distinguish between instruments based on their acceptance or consideration by a general audience.
For example, most people would not consider an anvil, a brake drum (on a vehicle with drum brakes, the circular hub the brake shoes press against), or a fifty-five gallon oil barrel to be musical instruments, yet composers and percussionists use these objects.
Percussion instruments generally fall into the following categories:
One pre-20th century example of found percussion is the use of cannon, usually loaded with blank charges, in Tchaikovsky's 1812 Overture. John Cage, Harry Partch, Edgard Varèse, and Peter Schickele, all noted composers, created entire pieces of music using unconventional instruments. Beginning in the early 20th century, perhaps with Ionisation by Edgard Varèse, which used air-raid sirens among other things, composers began to require that percussionists invent or find objects to produce desired sounds and textures. Another example is the use of a hammer and saw in Penderecki's De Natura Sonoris No. 2. By the late 20th century, such instruments were common in modern percussion ensemble music and popular productions, such as the off-Broadway show Stomp. Rock band Aerosmith used a number of unconventional instruments in their song Sweet Emotion, including shotguns, brooms, and a sugar bag. The metal band Slipknot is well known for playing unusual percussion items, having two percussionists in the band. Along with deep-sounding drums, their sound includes hitting baseball bats and other objects on beer kegs to create a distinctive sound.
It is not uncommon to discuss percussion instruments in relation to their cultural origin. This led to a division between instruments considered common or modern, and folk instruments with significant history or purpose within a geographic region or culture.
This category includes instruments that are widely available and popular throughout the world:
The percussionist uses various objects to strike a percussion instrument to produce sound.
The general term for a musician who plays percussion instruments is "percussionist" but the terms listed below often describe specialties:
en/2747.html.txt
ADDED
@@ -0,0 +1,52 @@
An island or isle is any piece of sub-continental land that is surrounded by water.[1] Very small islands such as emergent land features on atolls can be called islets, skerries, cays or keys. An island in a river or a lake island may be called an eyot or ait, and a small island off the coast may be called a holm. Sedimentary islands in the Ganges delta are called chars. A grouping of geographically or geologically related islands, such as the Philippines, is referred to as an archipelago.
An island may be described as such, despite the presence of an artificial land bridge; examples are Singapore and its causeway, and the various Dutch delta islands, such as IJsselmonde. Some places may even retain "island" in their names for historical reasons after being connected to a larger landmass by a land bridge or landfill, such as Coney Island and Coronado Island, though these are, strictly speaking, tied islands. Conversely, when a piece of land is separated from the mainland by a man-made canal, for example the Peloponnese by the Corinth Canal, more or less the entirety of Fennoscandia by the White Sea Canal, or Marble Hill in northern Manhattan during the time between the building of the United States Ship Canal and the filling-in of the Harlem River which surrounded the area, it is generally not considered an island.
There are two main types of islands in the sea: continental and oceanic. There are also artificial islands, which are man-made.
The word island derives from Middle English iland, from Old English igland (from ig or ieg, similarly meaning 'island' when used independently, and -land carrying its contemporary meaning; cf. Dutch eiland ("island"), German Eiland ("small island")). However, the spelling of the word was modified in the 15th century because of a false etymology caused by an incorrect association with the etymologically unrelated Old French loanword isle, which itself comes from the Latin word insula.[3][4] Old English ieg is actually a cognate of Swedish ö and German Aue, and related to Latin aqua (water).[5]
Greenland is the world's largest island, with an area of over 2.1 million km2, while Australia, the world's smallest continent, has an area of 7.6 million km2, but there is no standard of size that distinguishes islands from continents,[6] or from islets.[7]
There is a difference between islands and continents in terms of geology.[8]
Continents are the largest landmass of a particular continental plate; this holds true for Australia, which sits on its own continental lithosphere and tectonic plate (the Australian plate).
By contrast, islands are either extensions of the oceanic crust (e.g. volcanic islands), or belong to a continental plate containing a larger landmass; the latter is the case of Greenland, which sits on the North American plate.
Continental islands are bodies of land that lie on the continental shelf of a continent.[9] Examples are Borneo, Java, Sumatra, Sakhalin, Taiwan and Hainan off Asia; New Guinea, Tasmania, and Kangaroo Island off Australia; Great Britain, Ireland, and Sicily off Europe; Greenland, Newfoundland, Long Island, and Sable Island off North America; and Barbados, the Falkland Islands, and Trinidad off South America.
A special type of continental island is the microcontinental island, which is created when a continent is rifted. Examples are Madagascar and Socotra off Africa, New Caledonia, New Zealand, and some of the Seychelles.
Another subtype is an island or bar formed by deposition of tiny rocks where water current loses some of its carrying capacity. This includes:
Islets are very small islands.
Oceanic islands are islands that do not sit on continental shelves. The vast majority are volcanic in origin, such as Saint Helena in the South Atlantic Ocean.[10] The few oceanic islands that are not volcanic are tectonic in origin and arise where plate movements have lifted up the ocean floor above the surface. Examples are Saint Peter and Paul Rocks in the Atlantic Ocean and Macquarie Island in the Pacific.
One type of volcanic oceanic island is found in a volcanic island arc. These islands arise from volcanoes where the subduction of one plate under another is occurring. Examples are the Aleutian Islands, the Mariana Islands, and most of Tonga in the Pacific Ocean. The only examples in the Atlantic Ocean are some of the Lesser Antilles and the South Sandwich Islands.
Another type of volcanic oceanic island occurs where an oceanic rift reaches the surface. There are two examples: Iceland, which is the world's second largest volcanic island, and Jan Mayen. Both are in the Atlantic.
A third type of volcanic oceanic island is formed over volcanic hotspots. A hotspot is more or less stationary relative to the moving tectonic plate above it, so a chain of islands results as the plate drifts. Over long periods of time, this type of island is eventually "drowned" by isostatic adjustment and eroded, becoming a seamount. Plate movement across a hot-spot produces a line of islands oriented in the direction of the plate movement. An example is the Hawaiian Islands, from Hawaii to Kure, which continue beneath the sea surface in a more northerly direction as the Emperor Seamounts. Another chain with similar orientation is the Tuamotu Archipelago; its older, northerly trend is the Line Islands. The southernmost chain is the Austral Islands, with its northerly trending part the atolls in the nation of Tuvalu. Tristan da Cunha is an example of a hotspot volcano in the Atlantic Ocean. Another hotspot in the Atlantic is the island of Surtsey, which was formed in 1963.
An atoll is an island formed from a coral reef that has grown on an eroded and submerged volcanic island. The reef rises to the surface of the water and forms a new island. Atolls are typically ring-shaped with a central lagoon. Examples are the Line Islands in the Pacific and the Maldives in the Indian Ocean.
Approximately 45,000 tropical islands with an area of at least 5 hectares (12 acres) exist.[11] Examples formed from coral reefs include the Maldives, Tonga, Samoa, Nauru, and Polynesia.[11] Granitic islands include the Seychelles and Tioman, and volcanic islands include Saint Helena.
The socio-economic diversity of tropical islands ranges from the Stone Age societies in the interior of North Sentinel, Madagascar, Borneo, and Papua New Guinea to the high-tech lifestyles of the city-islands of Singapore and Hong Kong.[12]
International tourism is a significant factor in the economy of many tropical islands including Seychelles, Sri Lanka, Mauritius, Réunion, Hawaii, Puerto Rico and the Maldives.
Almost all of Earth's islands are natural and have been formed by tectonic forces or volcanic eruptions. However, artificial (man-made) islands also exist, such as the island in Osaka Bay off the Japanese island of Honshu, on which Kansai International Airport is located. Artificial islands can be built using natural materials (e.g., earth, rock, or sand) or artificial ones (e.g., concrete slabs or recycled waste).[13][14] Sometimes natural islands are artificially enlarged, such as Vasilyevsky Island in the Russian city of St. Petersburg, which had its western shore extended westward by some 0.5 km in the construction of the Passenger Port of St. Petersburg.[15]
Artificial islands are sometimes built on a pre-existing "low-tide elevation", a naturally formed area of land which is surrounded by and above water at low tide but submerged at high tide. Legally these are not islands and have no territorial sea of their own.[16]
en/2748.html.txt
ADDED
@@ -0,0 +1,351 @@
1A7F, 1AI0, 1AIY, 1B9E, 1BEN, 1EV3, 1EV6, 1EVR, 1FU2, 1FUB, 1G7A, 1G7B, 1GUJ, 1HIQ, 1HIS, 1HIT, 1HLS, 1HTV, 1HUI, 1IOG, 1IOH, 1J73, 1JCA, 1JCO, 1K3M, 1KMF, 1LKQ, 1LPH, 1MHI, 1MHJ, 1MSO, 1OS3, 1OS4, 1Q4V, 1QIY, 1QIZ, 1QJ0, 1RWE, 1SF1, 1T1K, 1T1P, 1T1Q, 1TRZ, 1TYL, 1TYM, 1UZ9, 1VKT, 1W8P, 1XDA, 1XGL, 1XW7, 1ZEG, 1ZEH, 1ZNJ, 2AIY, 2C8Q, 2C8R, 2CEU, 2H67, 2HH4, 2HHO, 2HIU, 2JMN, 2JUM, 2JUU, 2JUV, 2JV1, 2JZQ, 2K91, 2K9R, 2KJJ, 2KJU, 2KQQ, 2KXK, 2L1Y, 2L1Z, 2LGB, 2M1D, 2M1E, 2M2M, 2M2N, 2M2O, 2M2P, 2OLY, 2OLZ, 2OM0, 2OM1, 2OMG, 2OMH, 2OMI, 2QIU, 2R34, 2R35, 2R36, 2RN5, 2VJZ, 2VK0, 2W44, 2WBY, 2WC0, 2WRU, 2WRV, 2WRW, 2WRX, 2WS0, 2WS1, 2WS4, 2WS6, 2WS7, 3AIY, 3BXQ, 3E7Y, 3E7Z, 3EXX, 3FQ9, 3I3Z, 3I40, 3ILG, 3INC, 3IR0, 3Q6E, 3ROV, 3TT8, 3U4N, 3UTQ, 3UTS, 3UTT, 3V19, 3V1G, 3W11, 3W12, 3W13, 3W7Y, 3W7Z, 3W80, 3ZI3, 3ZQR, 3ZS2, 3ZU1, 4AIY, 4AJX, 4AJZ, 4AK0, 4AKJ, 4EFX, 4EWW, 4EWX, 4EWZ, 4EX0, 4EX1, 4EXX, 4EY1, 4EY9, 4EYD, 4EYN, 4EYP, 4F0N, 4F0O, 4F1A, 4F1B, 4F1C, 4F1D, 4F1F, 4F1G, 4F4T, 4F4V, 4F51, 4F8F, 4FG3, 4FKA, 4GBC, 4GBI, 4GBK, 4GBL, 4GBN, 4IUZ, 5AIY, 2LWZ, 3JSD, 3KQ6, 3P2X, 3P33, 1JK8, 2MLI, 2MPG, 2MPI, 2MVC, 2MVD, 4CXL, 4CXN, 4CY7, 4NIB, 4OGA, 4P65, 4Q5Z, 4RXW, 4UNE, 4UNG, 4UNH, 4XC4, 4WDI, 4Z76, 4Z77, 4Z78, 2N2W, 5CO6, 5ENA, 4Y19, 5BQQ, 5BOQ, 2N2V, 5CNY, 5CO9, 5EN9, 4Y1A, 2N2X, 5BPO, 5CO2, 5BTS, 5HYJ, 5C0D,%%s1EFE, 1SJT, 1SJU, 2KQP,%%s1T0C,%%s2G54, 2G56, 3HYD, 2OMQ
3630
16334
ENSG00000254647
ENSMUSG00000000215
P01308
P01326
NM_000207NM_001185097NM_001185098NM_001291897
NM_001185083NM_001185084NM_008387
NP_001172026.1NP_001172027.1NP_001278826.1NP_000198NP_000198NP_000198NP_000198
NP_001172012NP_001172013NP_032413
Insulin (/ˈɪn.sjʊ.lɪn/,[5][6] from Latin insula, 'island') is a peptide hormone produced by beta cells of the pancreatic islets; it is considered to be the main anabolic hormone of the body.[7] It regulates the metabolism of carbohydrates, fats and protein by promoting the absorption of glucose from the blood into liver, fat and skeletal muscle cells.[8] In these tissues the absorbed glucose is converted into either glycogen via glycogenesis or fats (triglycerides) via lipogenesis, or, in the case of the liver, into both.[8] Glucose production and secretion by the liver is strongly inhibited by high concentrations of insulin in the blood.[9] Circulating insulin also affects the synthesis of proteins in a wide variety of tissues. It is therefore an anabolic hormone, promoting the conversion of small molecules in the blood into large molecules inside the cells. Low insulin levels in the blood have the opposite effect by promoting widespread catabolism, especially of reserve body fat.
Beta cells are sensitive to blood sugar levels so that they secrete insulin into the blood in response to high levels of glucose, and inhibit secretion of insulin when glucose levels are low.[10] Insulin enhances glucose uptake and metabolism in the cells, thereby reducing blood sugar level. Their neighboring alpha cells, by taking their cues from the beta cells,[10] secrete glucagon into the blood in the opposite manner: increased secretion when blood glucose is low, and decreased secretion when glucose concentrations are high. Glucagon increases blood glucose level by stimulating glycogenolysis and gluconeogenesis in the liver.[8][10] The secretion of insulin and glucagon into the blood in response to the blood glucose concentration is the primary mechanism of glucose homeostasis.[10]
Decreased or loss of insulin activity results in diabetes mellitus, a condition of high blood sugar level (hyperglycaemia). There are two types of the disease. In type 1 diabetes mellitus, the beta cells are destroyed by an autoimmune reaction so that insulin can no longer be synthesized or be secreted into the blood.[11] In type 2 diabetes mellitus, the destruction of beta cells is less pronounced than in type 1 diabetes, and is not due to an autoimmune process. Instead, there is an accumulation of amyloid in the pancreatic islets, which likely disrupts their anatomy and physiology.[10] The pathogenesis of type 2 diabetes is not well understood but reduced population of islet beta-cells, reduced secretory function of islet beta-cells that survive, and peripheral tissue insulin resistance are known to be involved.[7] Type 2 diabetes is characterized by increased glucagon secretion which is unaffected by, and unresponsive to the concentration of blood glucose. But insulin is still secreted into the blood in response to the blood glucose.[10] As a result, glucose accumulates in the blood.
The human insulin protein is composed of 51 amino acids, and has a molecular mass of 5808 Da. It is a heterodimer of an A-chain and a B-chain, which are linked together by disulfide bonds. Insulin's structure varies slightly between species of animals. Insulin from animal sources differs somewhat in effectiveness (in carbohydrate metabolism effects) from human insulin because of these variations. Porcine insulin is especially close to the human version, and was widely used to treat type 1 diabetics before human insulin could be produced in large quantities by recombinant DNA technologies.[12][13][14][15]
Insulin was the first peptide hormone discovered.[16] Frederick Banting and Charles Herbert Best, working in the laboratory of J.J.R. Macleod at the University of Toronto, were the first to isolate insulin from dog pancreas in 1921. Frederick Sanger sequenced the amino acid structure in 1951, which made insulin the first protein to be fully sequenced.[17] The crystal structure of insulin in the solid state was determined by Dorothy Hodgkin in 1969. Insulin is also the first protein to be chemically synthesised and produced by DNA recombinant technology.[18] It is on the WHO Model List of Essential Medicines, the most important medications needed in a basic health system.[19]
Insulin may have originated more than a billion years ago.[20] The molecular origins of insulin go at least as far back as the simplest unicellular eukaryotes.[21] Apart from animals, insulin-like proteins are also known to exist in the Fungi and Protista kingdoms.[20]
Insulin is produced by beta cells of the pancreatic islets in most vertebrates and by the Brockmann body in some teleost fish.[22] Cone snails Conus geographus and Conus tulipa, venomous sea snails that hunt small fish, use modified forms of insulin in their venom cocktails. The insulin toxin, closer in structure to fishes' than to snails' native insulin, slows down the prey fishes by lowering their blood glucose levels.[23][24]
The preproinsulin precursor of insulin is encoded by the INS gene, which is located on Chromosome 11p15.5.[25][26]
A variety of mutant alleles with changes in the coding region have been identified. A read-through gene, INS-IGF2, overlaps with this gene at the 5' region and with the IGF2 gene at the 3' region.[25]
In the pancreatic β cells, glucose is the primary physiological stimulus for the regulation of insulin synthesis. Insulin is mainly regulated through the transcription factors PDX1, NeuroD1, and MafA.[27][28][29][30]
During a low-glucose state, PDX1 (pancreatic and duodenal homeobox protein 1) is located in the nuclear periphery as a result of interaction with HDAC1 and 2,[31] which results in downregulation of insulin secretion.[32] An increase in blood glucose levels causes phosphorylation of PDX1, which leads it to undergo nuclear translocation and bind the A3 element within the insulin promoter.[33] Upon translocation it interacts with coactivators HAT p300 and SETD7. PDX1 affects the histone modifications through acetylation and deacetylation as well as methylation. It is also said to suppress glucagon.[34]
NeuroD1, also known as β2, regulates insulin exocytosis in pancreatic β cells by directly inducing the expression of genes involved in exocytosis.[35] It is localized in the cytosol, but in response to high glucose it becomes glycosylated by OGT and/or phosphorylated by ERK, which causes translocation to the nucleus. In the nucleus β2 heterodimerizes with E47, binds to the E1 element of the insulin promoter and recruits co-activator p300 which acetylates β2. It is able to interact with other transcription factors as well in activation of the insulin gene.[35]
MafA is degraded by proteasomes upon low blood glucose levels. Increased levels of glucose make an unknown protein glycosylated. This protein works as a transcription factor for MafA in an unknown manner and MafA is transported out of the cell. MafA is then translocated back into the nucleus where it binds the C1 element of the insulin promoter.[36][37]
These transcription factors work synergistically and in a complex arrangement. Increased blood glucose can after a while destroy the binding capacities of these proteins, and therefore reduce the amount of insulin secreted, causing diabetes. The decreased binding activities can be mediated by glucose-induced oxidative stress, and antioxidants are said to prevent the decreased insulin secretion in glucotoxic pancreatic β cells. Stress signalling molecules and reactive oxygen species inhibit the insulin gene by interfering with the cofactors binding the transcription factors and with the transcription factors themselves.[38]
Several regulatory sequences in the promoter region of the human insulin gene bind to transcription factors. In general, the A-boxes bind to Pdx1 factors, E-boxes bind to NeuroD, C-boxes bind to MafA, and cAMP response elements to CREB. There are also silencers that inhibit transcription.
Contrary to an initial belief that hormones would generally be small chemical molecules, insulin, as the first peptide hormone of known structure, was found to be quite large.[16] A single protein (monomer) of human insulin is composed of 51 amino acids, and has a molecular mass of 5808 Da. The molecular formula of human insulin is C257H383N65O77S6.[41] It is a combination of two peptide chains (dimer) named an A-chain and a B-chain, which are linked together by two disulfide bonds. The A-chain is composed of 21 amino acids, while the B-chain consists of 30 residues. The linking (interchain) disulfide bonds are formed at cysteine residues between the positions A7-B7 and A20-B19. There is an additional (intrachain) disulfide bond within the A-chain between cysteine residues at positions A6 and A11. The A-chain exhibits two α-helical regions at A1-A8 and A12-A19 which are antiparallel, while the B-chain has a central α-helix (covering residues B9-B19) flanked by the disulfide bond on either side and two β-sheets (covering B7-B10 and B20-B23).[16][42]
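The quoted monomer mass can be checked directly from the molecular formula. The Python sketch below uses rounded average atomic masses; it is a back-of-the-envelope verification, not a reference calculation.

```python
# Rounded average atomic masses in g/mol.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}

# Human insulin monomer, C257H383N65O77S6, as given above.
INSULIN_FORMULA = {"C": 257, "H": 383, "N": 65, "O": 77, "S": 6}

def molecular_mass(formula: dict[str, int]) -> float:
    """Sum of (element count x average atomic mass) over the formula."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

print(round(molecular_mass(INSULIN_FORMULA)))  # ~5808 Da, matching the figure in the text
```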
The amino acid sequence of insulin is strongly conserved and varies only slightly between species. Bovine insulin differs from human in only three amino acid residues, and porcine insulin in one. Even insulin from some species of fish is similar enough to human to be clinically effective in humans. Insulin in some invertebrates is quite similar in sequence to human insulin, and has similar physiological effects. The strong homology seen in the insulin sequence of diverse species suggests that it has been conserved across much of animal evolutionary history. The C-peptide of proinsulin, however, differs much more among species; it is also a hormone, but a secondary one.[42]
Insulin is produced and stored in the body as a hexamer (a unit of six insulin molecules), while the active form is the monomer. The hexamer is about 36000 Da in size. The six molecules are linked together as three dimeric units to form a symmetrical molecule. An important feature is the presence of zinc atoms (Zn2+) on the axis of symmetry, which are surrounded by three water molecules and three histidine residues at position B10.[16][42]
The hexamer is an inactive form with long-term stability, which serves as a way to keep the highly reactive insulin protected, yet readily available. The hexamer-monomer conversion is one of the central aspects of insulin formulations for injection. The hexamer is far more stable than the monomer, which is desirable for practical reasons; however, the monomer is a much faster-reacting drug because diffusion rate is inversely related to particle size. A fast-reacting drug means insulin injections do not have to precede mealtimes by hours, which in turn gives people with diabetes more flexibility in their daily schedules.[43] Insulin can aggregate and form fibrillar interdigitated beta-sheets. This can cause injection amyloidosis, and prevents the storage of insulin for long periods.[44]
Insulin is produced in the pancreas and the Brockmann body (in some fish), and released when any of several stimuli are detected. These stimuli include the rise in plasma concentrations of amino acids and glucose resulting from the digestion of food.[45] Carbohydrates can be polymers of simple sugars or the simple sugars themselves. If the carbohydrates include glucose, then that glucose will be absorbed into the bloodstream and blood glucose level will begin to rise. In target cells, insulin initiates a signal transduction, which has the effect of increasing glucose uptake and storage. Finally, insulin is degraded, terminating the response.
In mammals, insulin is synthesized in the pancreas within the beta cells. One million to three million pancreatic islets form the endocrine part of the pancreas, which is primarily an exocrine gland. The endocrine portion accounts for only 2% of the total mass of the pancreas. Within the pancreatic islets, beta cells constitute 65–80% of all the cells.[citation needed]
Insulin consists of two polypeptide chains, the A- and B- chains, linked together by disulfide bonds. It is however first synthesized as a single polypeptide called preproinsulin in beta cells. Preproinsulin contains a 24-residue signal peptide which directs the nascent polypeptide chain to the rough endoplasmic reticulum (RER). The signal peptide is cleaved as the polypeptide is translocated into lumen of the RER, forming proinsulin.[46] In the RER the proinsulin folds into the correct conformation and 3 disulfide bonds are formed. About 5–10 min after its assembly in the endoplasmic reticulum, proinsulin is transported to the trans-Golgi network (TGN) where immature granules are formed. Transport to the TGN may take about 30 minutes.[citation needed]
Proinsulin undergoes maturation into active insulin through the action of cellular endopeptidases known as prohormone convertases (PC1 and PC2), as well as the exoprotease carboxypeptidase E.[47] The endopeptidases cleave at 2 positions, releasing a fragment called the C-peptide, and leaving 2 peptide chains, the B- and A- chains, linked by 2 disulfide bonds. The cleavage sites are each located after a pair of basic residues (lysine-64 and arginine-65, and arginine-31 and −32). After cleavage of the C-peptide, these 2 pairs of basic residues are removed by the carboxypeptidase.[48] The C-peptide is the central portion of proinsulin, and the primary sequence of proinsulin goes in the order "B-C-A" (the B and A chains were identified on the basis of mass and the C-peptide was discovered later).[citation needed]
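The cleavage scheme just described can be summarized as simple sequence arithmetic. The toy sketch below slices an 86-residue proinsulin (B–C–A order) at the two dibasic sites named above and drops the basic pairs, mimicking the combined effect of the convertases and carboxypeptidase E; the sequence used is a length-only placeholder, not the real residues.

```python
def process_proinsulin(proinsulin: str) -> dict[str, str]:
    """Toy model: split an 86-residue proinsulin (B-C-A order) at the dibasic
    sites after residues 31-32 and 64-65, discarding the basic pairs."""
    assert len(proinsulin) == 86
    b_chain   = proinsulin[0:30]    # residues 1-30
    c_peptide = proinsulin[32:63]   # residues 33-63, after removing Arg31-Arg32
    a_chain   = proinsulin[65:86]   # residues 66-86, after removing Lys64-Arg65
    return {"B": b_chain, "C": c_peptide, "A": a_chain}

# Placeholder sequence with the right segment lengths (not real amino acids):
toy = "B" * 30 + "RR" + "C" * 31 + "KR" + "A" * 21
print({chain: len(seq) for chain, seq in process_proinsulin(toy).items()})
# {'B': 30, 'C': 31, 'A': 21} -- the B-chain, C-peptide, and A-chain lengths given above
```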
The resulting mature insulin is packaged inside mature granules waiting for metabolic signals (such as leucine, arginine, glucose and mannose) and vagal nerve stimulation to be exocytosed from the cell into the circulation.[49]
The endogenous production of insulin is regulated in several steps along the synthesis pathway:
Insulin and its related proteins have been shown to be produced inside the brain, and reduced levels of these proteins are linked to Alzheimer's disease.[50][51][52]
Insulin release is stimulated also by beta-2 receptor stimulation and inhibited by alpha-1 receptor stimulation. In addition, cortisol, glucagon and growth hormone antagonize the actions of insulin during times of stress. Insulin also inhibits fatty acid release by hormone sensitive lipase in adipose tissue.[8]
Beta cells in the islets of Langerhans release insulin in two phases. The first-phase release is rapidly triggered in response to increased blood glucose levels, and lasts about 10 minutes. The second phase is a sustained, slow release of newly formed vesicles triggered independently of sugar, peaking in 2 to 3 hours. Reduced first-phase insulin release may be the earliest detectable beta cell defect predicting onset of type 2 diabetes.[53] First-phase release and insulin sensitivity are independent predictors of diabetes.[54]
The description of first phase release is as follows:
This is the primary mechanism for release of insulin. Other substances known to stimulate insulin release include the amino acids arginine and leucine, parasympathetic release of acetylcholine (acting via the phospholipase C pathway), sulfonylurea, cholecystokinin (CCK, also via phospholipase C),[59] and the gastrointestinally derived incretins, such as glucagon-like peptide-1 (GLP-1) and glucose-dependent insulinotropic peptide (GIP).
Release of insulin is strongly inhibited by norepinephrine (noradrenaline), which leads to increased blood glucose levels during stress. It appears that release of catecholamines by the sympathetic nervous system has conflicting influences on insulin release by beta cells, because insulin release is inhibited by α2-adrenergic receptors[60] and stimulated by β2-adrenergic receptors.[61] The net effect of norepinephrine from sympathetic nerves and epinephrine from adrenal glands on insulin release is inhibition due to dominance of the α-adrenergic receptors.[62]
When the glucose level comes down to the usual physiologic value, insulin release from the β-cells slows or stops. If the blood glucose level drops lower than this, especially to dangerously low levels, release of hyperglycemic hormones (most prominently glucagon from islet of Langerhans alpha cells) forces release of glucose into the blood from the liver glycogen stores, supplemented by gluconeogenesis if the glycogen stores become depleted. By increasing blood glucose, the hyperglycemic hormones prevent or correct life-threatening hypoglycemia.
Evidence of impaired first-phase insulin release can be seen in the glucose tolerance test, demonstrated by a substantially elevated blood glucose level at 30 minutes after the ingestion of a glucose load (75 or 100 g of glucose), followed by a slow drop over the next 100 minutes, such that the level remains above 120 mg/100 ml two hours after the start of the test. In a normal person the blood glucose level is corrected (and may even be slightly over-corrected) by the end of the test. An insulin spike is a 'first response' to a rise in blood glucose; this response is individual and dose-specific, although it was previously assumed to be specific only to the type of food eaten.
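As a rough illustration of how the figure quoted above might be applied, here is a minimal sketch; the threshold and the function name come only from this paragraph and are not clinical guidance.

```python
# Minimal sketch using only the figure quoted above (not clinical guidance):
# a 2-hour post-load glucose that stays above ~120 mg/100 ml matches the
# impaired first-phase pattern described in the text.
def first_phase_impairment_suspected(glucose_2h_mg_per_100ml: float) -> bool:
    return glucose_2h_mg_per_100ml > 120.0

print(first_phase_impairment_suspected(95.0))   # False - corrected by 2 hours
print(first_phase_impairment_suspected(135.0))  # True - remains elevated
```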
Even during digestion, in general one or two hours following a meal, insulin release from the pancreas is not continuous, but oscillates with a period of 3–6 minutes, swinging from a blood insulin concentration of more than about 800 pmol/L to less than 100 pmol/L (in rats).[63] This is thought to avoid downregulation of insulin receptors in target cells, and to assist the liver in extracting insulin from the blood.[63] This oscillation is important to consider when administering insulin-stimulating medication, since it is the oscillating blood concentration of insulin that should, ideally, be achieved, not a constant high concentration.[63] This may be achieved by delivering insulin rhythmically to the portal vein, by light-activated delivery, or by islet cell transplantation to the liver.[63][64][65]
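A toy sketch of the oscillation described above (a plain sinusoid, not a physiological model; the 5-minute period and the roughly 100–800 pmol/L swing are taken from the figures quoted here):

```python
import math

BASELINE_PMOL_L = 450.0    # midpoint between the ~100 and ~800 pmol/L extremes
AMPLITUDE_PMOL_L = 350.0
PERIOD_MIN = 5.0           # within the 3-6 minute range quoted above

def insulin_concentration(t_min: float) -> float:
    """Illustrative oscillating insulin concentration in pmol/L."""
    return BASELINE_PMOL_L + AMPLITUDE_PMOL_L * math.sin(2 * math.pi * t_min / PERIOD_MIN)

for t in range(0, 11):     # sample once per minute for 10 minutes
    print(f"t = {t:2d} min  insulin ≈ {insulin_concentration(t):4.0f} pmol/L")
```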
The blood insulin level can be measured in international units, such as µIU/mL or in molar concentration, such as pmol/L, where 1 µIU/mL equals 6.945 pmol/L.[66] A typical blood level between meals is 8–11 μIU/mL (57–79 pmol/L).[67]
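The conversion factor quoted above lends itself to a small helper; a minimal sketch, with the constant taken from the sentence above:

```python
# 1 µIU/mL = 6.945 pmol/L, as quoted above.
UIU_PER_ML_TO_PMOL_PER_L = 6.945

def to_pmol_per_l(uiu_per_ml: float) -> float:
    return uiu_per_ml * UIU_PER_ML_TO_PMOL_PER_L

def to_uiu_per_ml(pmol_per_l: float) -> float:
    return pmol_per_l / UIU_PER_ML_TO_PMOL_PER_L

# The typical between-meal range of 8-11 µIU/mL works out to about 56-76 pmol/L,
# close to the 57-79 pmol/L range quoted above.
print(to_pmol_per_l(8.0), to_pmol_per_l(11.0))
```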
The effects of insulin are initiated by its binding to a receptor present in the cell membrane. The receptor molecule contains α and β subunits. Two such molecules are joined to form what is known as a homodimer. Insulin binds to the α subunits of the homodimer, which face the extracellular side of the cell. The β subunits have tyrosine kinase enzyme activity, which is triggered by insulin binding. This activity provokes the autophosphorylation of the β subunits and subsequently the phosphorylation of proteins inside the cell known as insulin receptor substrates (IRS). The phosphorylation of the IRS activates a signal transduction cascade that leads to the activation of other kinases as well as transcription factors that mediate the intracellular effects of insulin.[68]
The cascade that leads to the insertion of GLUT4 glucose transporters into the cell membranes of muscle and fat cells, and to the synthesis of glycogen in liver and muscle tissue, as well as the conversion of glucose into triglycerides in liver, adipose, and lactating mammary gland tissue, operates via the activation, by IRS-1, of phosphoinositol 3 kinase (PI3K). This enzyme converts a phospholipid in the cell membrane by the name of phosphatidylinositol 4,5-bisphosphate (PIP2), into phosphatidylinositol 3,4,5-triphosphate (PIP3), which, in turn, activates protein kinase B (PKB). Activated PKB facilitates the fusion of GLUT4 containing endosomes with the cell membrane, resulting in an increase in GLUT4 transporters in the plasma membrane.[69] PKB also phosphorylates glycogen synthase kinase (GSK), thereby inactivating this enzyme.[70] This means that its substrate, glycogen synthase (GS), cannot be phosphorylated, and remains dephosphorylated, and therefore active. The active enzyme, glycogen synthase (GS), catalyzes the rate limiting step in the synthesis of glycogen from glucose. Similar dephosphorylations affect the enzymes controlling the rate of glycolysis leading to the synthesis of fats via malonyl-CoA in the tissues that can generate triglycerides, and also the enzymes that control the rate of gluconeogenesis in the liver. The overall effect of these final enzyme dephosphorylations is that, in the tissues that can carry out these reactions, glycogen and fat synthesis from glucose are stimulated, and glucose production by the liver through glycogenolysis and gluconeogenesis are inhibited.[71] The breakdown of triglycerides by adipose tissue into free fatty acids and glycerol is also inhibited.[71]
After the intracellular signal that resulted from the binding of insulin to its receptor has been produced, termination of signaling is then needed. As mentioned below in the section on degradation, endocytosis and degradation of the receptor bound to insulin is a main mechanism to end signaling.[49] In addition, the signaling pathway is also terminated by dephosphorylation of the tyrosine residues in the various signaling pathways by tyrosine phosphatases. Serine/Threonine kinases are also known to reduce the activity of insulin.
The structure of the insulin–insulin receptor complex has been determined using the techniques of X-ray crystallography.[72]
The actions of insulin on the global human metabolism level include:
The actions of insulin (indirect and direct) on cells include:
Insulin also influences other body functions, such as vascular compliance and cognition. Once insulin enters the human brain, it enhances learning and memory and benefits verbal memory in particular.[81] Enhancing brain insulin signaling by means of intranasal insulin administration also enhances the acute thermoregulatory and glucoregulatory response to food intake, suggesting that central nervous insulin contributes to the co-ordination of a wide variety of homeostatic or regulatory processes in the human body.[82] Insulin also has stimulatory effects on gonadotropin-releasing hormone from the hypothalamus, thus favoring fertility.[83]
Once an insulin molecule has docked onto the receptor and effected its action, it may be released back into the extracellular environment, or it may be degraded by the cell. The two primary sites for insulin clearance are the liver and the kidney. The liver clears most insulin during first-pass transit, whereas the kidney clears most of the insulin in systemic circulation. Degradation normally involves endocytosis of the insulin-receptor complex, followed by the action of insulin-degrading enzyme. An insulin molecule produced endogenously by the beta cells is estimated to be degraded within about one hour after its initial release into circulation (insulin half-life ~ 4–6 minutes).[84][85]
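A back-of-the-envelope check of the figures above, assuming simple first-order (exponential) decay; this is an idealization, since real clearance is more complex.

```python
# Fraction of an insulin pulse remaining after t minutes, assuming exponential
# decay with the ~4-6 minute half-life quoted above.
def fraction_remaining(t_min: float, half_life_min: float) -> float:
    return 0.5 ** (t_min / half_life_min)

for half_life in (4.0, 6.0):
    left = fraction_remaining(60.0, half_life)
    print(f"half-life {half_life} min -> {left:.1e} of the pulse left after 60 min")
# Even with the longer half-life, essentially nothing remains after an hour,
# consistent with degradation "within about one hour" of release.
```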
Insulin is a major regulator of endocannabinoid (EC) metabolism, and insulin treatment has been shown to reduce the intracellular ECs 2-arachidonylglycerol (2-AG) and anandamide (AEA), a reduction that corresponds with insulin-sensitive expression changes in enzymes of EC metabolism. In insulin-resistant adipocytes, patterns of insulin-induced enzyme expression are disturbed in a manner consistent with elevated EC synthesis and reduced EC degradation. Findings suggest that insulin-resistant adipocytes fail to regulate EC metabolism and to decrease intracellular EC levels in response to insulin stimulation, such that obese insulin-resistant individuals exhibit increased concentrations of ECs.[86][87] This dysregulation contributes to excessive visceral fat accumulation and reduced adiponectin release from abdominal adipose tissue, and further to the onset of several cardiometabolic risk factors that are associated with obesity and type 2 diabetes.[88]
Hypoglycemia, also known as "low blood sugar", is when blood sugar decreases to below normal levels.[89] This may result in a variety of symptoms including clumsiness, trouble talking, confusion, loss of consciousness, seizures or death.[89] A feeling of hunger, sweating, shakiness and weakness may also be present.[89] Symptoms typically come on quickly.[89]
The most common cause of hypoglycemia is medications used to treat diabetes mellitus such as insulin and sulfonylureas.[90][91] Risk is greater in diabetics who have eaten less than usual, exercised more than usual or have drunk alcohol.[89] Other causes of hypoglycemia include kidney failure, certain tumors, such as insulinoma, liver disease, hypothyroidism, starvation, inborn error of metabolism, severe infections, reactive hypoglycemia and a number of drugs including alcohol.[89][91] Low blood sugar may occur in otherwise healthy babies who have not eaten for a few hours.[92]
There are several conditions in which insulin disturbance is pathologic:
Biosynthetic human insulin (insulin human rDNA, INN) for clinical use is manufactured by recombinant DNA technology.[12] Biosynthetic human insulin has increased purity compared with extractive animal insulin, and this enhanced purity reduces antibody formation. Researchers have also succeeded in introducing the gene for human insulin into plants, producing insulin in safflower by "biopharming".[97] This technique is anticipated to reduce production costs.
Several analogs of human insulin are available. These insulin analogs are closely related to the human insulin structure, and were developed for specific aspects of glycemic control in terms of fast action (prandial insulins) and long action (basal insulins).[98] The first biosynthetic insulin analog developed for clinical use at mealtime (prandial insulin) was Humalog (insulin lispro);[99] it is more rapidly absorbed after subcutaneous injection than regular insulin, with an effect 15 minutes after injection. Other rapid-acting analogues are NovoRapid and Apidra, with similar profiles.[100] All are rapidly absorbed due to amino acid sequences that reduce formation of dimers and hexamers (monomeric insulins are more rapidly absorbed). Fast-acting insulins do not require the injection-to-meal interval previously recommended for human insulin and animal insulins. The other type is long-acting insulin; the first of these was Lantus (insulin glargine), which has a steady effect for an extended period of 18 to 24 hours. Likewise, another protracted insulin analogue (Levemir) is based on a fatty acid acylation approach: a myristic acid molecule is attached to this analogue, which associates the insulin molecule with the abundant serum albumin, which in turn extends the effect and reduces the risk of hypoglycemia. Both protracted analogues need to be taken only once daily, and are used in type 1 diabetics as the basal insulin. A combination of a rapid-acting and a protracted insulin is also available, making it more likely for patients to achieve an insulin profile that mimics that of the body's own insulin release.[101][102]
Insulin is usually taken as subcutaneous injections by single-use syringes with needles, via an insulin pump, or by repeated-use insulin pens with disposable needles. Inhaled insulin is also available on the U.S. market.
Synthetic insulin can trigger adverse effects, so some people with diabetes rely on animal-source insulin.[103]
Unlike many medicines, insulin currently cannot be taken orally because, like nearly all other proteins introduced into the gastrointestinal tract, it is reduced to fragments, whereupon all activity is lost. There has been some research into ways to protect insulin from the digestive tract, so that it can be administered orally or sublingually.[104][105]
In 1869, while studying the structure of the pancreas under a microscope, Paul Langerhans, a medical student in Berlin, identified some previously unnoticed tissue clumps scattered throughout the bulk of the pancreas.[106] The function of the "little heaps of cells", later known as the islets of Langerhans, initially remained unknown, but Édouard Laguesse later suggested they might produce secretions that play a regulatory role in digestion.[107] Paul Langerhans' son, Archibald, also helped to understand this regulatory role.
In 1889, the physician Oskar Minkowski, in collaboration with Joseph von Mering, removed the pancreas from a healthy dog to test its assumed role in digestion. On testing the urine, they found sugar, establishing for the first time a relationship between the pancreas and diabetes. In 1901, another major step was taken by the American physician and scientist Eugene Lindsay Opie, when he isolated the role of the pancreas to the islets of Langerhans: "Diabetes mellitus when the result of a lesion of the pancreas is caused by destruction of the islands of Langerhans and occurs only when these bodies are in part or wholly destroyed".[108][109][110]
Over the next two decades researchers made several attempts to isolate the islets' secretions. In 1906 George Ludwig Zuelzer achieved partial success in treating dogs with pancreatic extract, but he was unable to continue his work. Between 1911 and 1912, E.L. Scott at the University of Chicago tried aqueous pancreatic extracts and noted "a slight diminution of glycosuria", but was unable to convince his director of his work's value; it was shut down. Israel Kleiner demonstrated similar effects at Rockefeller University in 1915, but World War I interrupted his work and he did not return to it.[111]
In 1916, Nicolae Paulescu developed an aqueous pancreatic extract which, when injected into a diabetic dog, had a normalizing effect on blood-sugar levels. He had to interrupt his experiments because of World War I, and in 1921 he wrote four papers about his work carried out in Bucharest and his tests on a diabetic dog. Later that year, he published "Research on the Role of the Pancreas in Food Assimilation".[112][113]
The name "insulin" was coined by Edward Albert Sharpey-Schafer in 1916 for a hypothetical molecule produced by pancreatic islets of Langerhans (Latin insula for islet or island) that controls glucose metabolism. Unbeknown to Sharpey-Schafer, Jean de Meyer had introduced very similar word "insuline" in 1909 for the same molecule.[114][115]
In October 1920, Canadian Frederick Banting concluded that the digestive secretions that Minkowski had originally studied were breaking down the islet secretion, thereby making it impossible to extract successfully. A surgeon by training, Banting knew certain arteries could be tied off that would lead most of the pancreas to atrophy, while leaving the islets of Langerhans intact. He reasoned that a relatively pure extract could be made from the islets once most of the rest of the pancreas was gone. He jotted a note to himself: "Ligate pancreatic ducts of the dog. Keep dogs alive till acini degenerate leaving islets. Try to isolate internal secretion of these and relieve glycosuria."[116][117]
In the spring of 1921, Banting traveled to Toronto to explain his idea to J.J.R. Macleod, Professor of Physiology at the University of Toronto. Macleod was initially skeptical, since Banting had no background in research and was not familiar with the latest literature, but he agreed to provide lab space for Banting to test out his ideas. Macleod also arranged for two undergraduates to be Banting's lab assistants that summer, but Banting required only one lab assistant. Charles Best and Clark Noble flipped a coin; Best won the coin toss and took the first shift. This proved unfortunate for Noble, as Banting kept Best for the entire summer and eventually shared half his Nobel Prize money and credit for the discovery with Best.[118] On 30 July 1921, Banting and Best successfully isolated an extract ("isleton") from the islets of a duct-tied dog and injected it into a diabetic dog, finding that the extract reduced its blood sugar by 40% in 1 hour.[119][117]
Banting and Best presented their results to Macleod on his return to Toronto in the fall of 1921, but Macleod pointed out flaws with the experimental design, and suggested the experiments be repeated with more dogs and better equipment. He moved Banting and Best into a better laboratory and began paying Banting a salary from his research grants. Several weeks later, the second round of experiments was also a success, and Macleod helped publish their results privately in Toronto that November. Bottlenecked by the time-consuming task of duct-tying dogs and waiting several weeks to extract insulin, Banting hit upon the idea of extracting insulin from the fetal calf pancreas, which had not yet developed digestive glands. By December, they had also succeeded in extracting insulin from the adult cow pancreas. Macleod discontinued all other research in his laboratory to concentrate on the purification of insulin. He invited biochemist James Collip to help with this task, and the team felt ready for a clinical test within a month.[117]
On January 11, 1922, Leonard Thompson, a 14-year-old diabetic who lay dying at the Toronto General Hospital, was given the first injection of insulin.[120][121][122][123] However, the extract was so impure that Thompson suffered a severe allergic reaction, and further injections were cancelled. Over the next 12 days, Collip worked day and night to improve the ox-pancreas extract. A second dose was injected on January 23, completely eliminating the glycosuria that was typical of diabetes without causing any obvious side-effects. The first American patient was Elizabeth Hughes, the daughter of U.S. Secretary of State Charles Evans Hughes.[124][125] The first patient treated in the U.S. was future woodcut artist James D. Havens;[126] Dr. John Ralston Williams imported insulin from Toronto to Rochester, New York, to treat Havens.[127]
Banting and Best never worked well with Collip, regarding him as something of an interloper, and Collip left the project soon after. Over the spring of 1922, Best managed to improve his techniques to the point where large quantities of insulin could be extracted on demand, but the preparation remained impure. The drug firm Eli Lilly and Company had offered assistance not long after the first publications in 1921, and they took Lilly up on the offer in April. In November, Lilly's head chemist, George B. Walden discovered isoelectric precipitation and was able to produce large quantities of highly refined insulin. Shortly thereafter, insulin was offered for sale to the general public.
Toward the end of January 1922, tensions mounted between the four "co-discoverers" of insulin and Collip briefly threatened to separately patent his purification process. John G. FitzGerald, director of the non-commercial public health institution Connaught Laboratories, therefore stepped in as peacemaker. The resulting agreement of 25 January 1922 established two key conditions: 1) that the collaborators would sign a contract agreeing not to take out a patent with a commercial pharmaceutical firm during an initial working period with Connaught; and 2) that no changes in research policy would be allowed unless first discussed among FitzGerald and the four collaborators.[128] It helped contain disagreement and tied the research to Connaught's public mandate.
Initially, Macleod and Banting were particularly reluctant to patent their process for insulin on grounds of medical ethics. However, concerns remained that a private third-party would hijack and monopolize the research (as Eli Lilly and Company had hinted[129]), and that safe distribution would be difficult to guarantee without capacity for quality control. To this end, Edward Calvin Kendall gave valuable advice. He had isolated thyroxin at the Mayo Clinic in 1914 and patented the process through an arrangement between himself, the brothers Mayo, and the University of Minnesota, transferring the patent to the public university.[130] On April 12, Banting, Best, Collip, Macleod, and FitzGerald wrote jointly to the president of the University of Toronto to propose a similar arrangement with the aim of assigning a patent to the Board of Governors of the University.[131] The letter emphasized that:[132]
The patent would not be used for any other purpose than to prevent the taking out of a patent by other persons. When the details of the method of preparation are published anyone would be free to prepare the extract, but no one could secure a profitable monopoly.
The assignment to the University of Toronto Board of Governors was completed on 15 January 1923, for the token payment of $1.00.[133] The arrangement was congratulated in The World's Work in 1923 as "a step forward in medical ethics".[134] It has also received much media attention in the 2010s regarding the issue of healthcare and drug affordability.
Following further concern regarding Eli Lilly's attempts to separately patent parts of the manufacturing process, Connaught's Assistant Director and Head of the Insulin Division Robert Defries established a patent pooling policy which would require producers to freely share any improvements to the manufacturing process without compromising affordability.[135]
Purified animal-sourced insulin was initially the only type of insulin available for experiments and diabetics. John Jacob Abel was the first to produce the crystallised form in 1926.[136] Evidence of the protein nature was first given by Michael Somogyi, Edward A. Doisy, and Philip A. Shaffer in 1924.[137] It was fully proven when Hans Jensen and Earl A. Evans Jr. isolated the amino acids phenylalanine and proline in 1935.[138]
The amino acid structure of insulin was first characterized in 1951 by Frederick Sanger,[17][139] and the first synthetic insulin was produced simultaneously in the labs of Panayotis Katsoyannis at the University of Pittsburgh and Helmut Zahn at RWTH Aachen University in the mid-1960s.[140][141][142] [143][144] Synthetic crystalline bovine insulin was achieved by Chinese researchers in 1965.[145] The complete 3-dimensional structure of insulin was determined by X-ray crystallography in Dorothy Hodgkin's laboratory in 1969.[146]
The first genetically engineered, synthetic "human" insulin was produced using E. coli in 1978 by Arthur Riggs and Keiichi Itakura at the Beckman Research Institute of the City of Hope in collaboration with Herbert Boyer at Genentech.[13][14] Genentech, founded by Swanson and Boyer, partnered with Eli Lilly and Company, which went on in 1982 to sell the first commercially available biosynthetic human insulin under the brand name Humulin.[14] The vast majority of insulin currently used worldwide is now biosynthetic recombinant "human" insulin or its analogues.[15] More recently, a pioneering group of Canadian researchers has used an easily grown safflower plant to produce much cheaper insulin.[147]
Recombinant insulin is produced either in yeast (usually Saccharomyces cerevisiae) or E. coli.[148] In yeast, insulin may be engineered as a single-chain protein with a KexII endoprotease (a yeast homolog of PC1/PC2) site that separates the insulin A chain from a C-terminally truncated insulin B chain. A chemically synthesized C-terminal tail is then grafted onto insulin by reverse proteolysis using the inexpensive protease trypsin; typically the lysine on the C-terminal tail is protected with a chemical protecting group to prevent proteolysis. The ease of modular synthesis and the relative safety of modifications in that region account for common insulin analogs with C-terminal modifications (e.g. lispro, aspart, glulisine). The Genentech synthesis and completely chemical syntheses such as that by Bruce Merrifield are not preferred because the efficiency of recombining the two insulin chains is low, primarily due to competition with the precipitation of the insulin B chain.
The Nobel Prize committee in 1923 credited the practical extraction of insulin to a team at the University of Toronto and awarded the Nobel Prize to two men: Frederick Banting and J.J.R. Macleod.[149] They were awarded the Nobel Prize in Physiology or Medicine in 1923 for the discovery of insulin. Banting, incensed that Best was not mentioned,[150] shared his prize with him, and Macleod immediately shared his with James Collip. The patent for insulin was sold to the University of Toronto for one dollar.
Two other Nobel Prizes have been awarded for work on insulin. British molecular biologist Frederick Sanger, who determined the primary structure of insulin in 1955, was awarded the 1958 Nobel Prize in Chemistry.[17] Rosalyn Sussman Yalow received the 1977 Nobel Prize in Medicine for the development of the radioimmunoassay for insulin.
Several Nobel Prizes also have an indirect connection with insulin. George Minot, co-recipient of the 1934 Nobel Prize for the development of the first effective treatment for pernicious anemia, had diabetes mellitus. Dr. William Castle observed that the 1921 discovery of insulin, arriving in time to keep Minot alive, was therefore also responsible for the discovery of a cure for pernicious anemia.[151] Dorothy Hodgkin was awarded a Nobel Prize in Chemistry in 1964 for the development of crystallography, the technique she used for deciphering the complete molecular structure of insulin in 1969.[146]
The work published by Banting, Best, Collip and Macleod represented the preparation of purified insulin extract suitable for use on human patients.[152] Although Paulescu discovered the principles of the treatment, his saline extract could not be used on humans; he was not mentioned in the 1923 Nobel Prize. Professor Ian Murray was particularly active in working to correct "the historical wrong" against Nicolae Paulescu. Murray was a professor of physiology at the Anderson College of Medicine in Glasgow, Scotland, the head of the department of Metabolic Diseases at a leading Glasgow hospital, vice-president of the British Association of Diabetes, and a founding member of the International Diabetes Federation. Murray wrote:
Insufficient recognition has been given to Paulescu, the distinguished Romanian scientist, who at the time when the Toronto team were commencing their research had already succeeded in extracting the antidiabetic hormone of the pancreas and proving its efficacy in reducing the hyperglycaemia in diabetic dogs.[153]
In a private communication, Professor Arne Tiselius, former head of the Nobel Institute, expressed his personal opinion that Paulescu was equally worthy of the award in 1923.[154]
* Dr Robert I Misbin, INSULIN History from an FDA Insider, June 1 2020 published as ebook on Amazon
1ai0: R6 HUMAN INSULIN HEXAMER (NON-SYMMETRIC), NMR, 10 STRUCTURES
1aiy: R6 HUMAN INSULIN HEXAMER (SYMMETRIC), NMR, 10 STRUCTURES
1aph: CONFORMATIONAL CHANGES IN CUBIC INSULIN CRYSTALS IN THE PH RANGE 7-11
1b17: PH AFFECTS GLU B13 SWITCHING AND SULFATE BINDING IN CUBIC INSULIN CRYSTALS (PH 5.00 COORDINATES)
1b18: PH AFFECTS GLU B13 SWITCHING AND SULFATE BINDING IN CUBIC INSULIN CRYSTALS (PH 5.53 COORDINATES)
1b19: PH AFFECTS GLU B13 SWITCHING AND SULFATE BINDING IN CUBIC INSULIN CRYSTALS (PH 5.80 COORDINATES)
1b2a: PH AFFECTS GLU B13 SWITCHING AND SULFATE BINDING IN CUBIC INSULIN CRYSTALS (PH 6.00 COORDINATES)
1b2b: PH AFFECTS GLU B13 SWITCHING AND SULFATE BINDING IN CUBIC INSULIN CRYSTALS (PH 6.16 COORDINATES)
1b2c: PH AFFECTS GLU B13 SWITCHING AND SULFATE BINDING IN CUBIC INSULIN CRYSTALS (PH 6.26 COORDINATES)
1b2d: PH AFFECTS GLU B13 SWITCHING AND SULFATE BINDING IN CUBIC INSULIN CRYSTALS (PH 6.35 COORDINATES)
1b2e: PH AFFECTS GLU B13 SWITCHING AND SULFATE BINDING IN CUBIC INSULIN CRYSTALS (PH 6.50 COORDINATES)
1b2f: PH AFFECTS GLU B13 SWITCHING AND SULFATE BINDING IN CUBIC INSULIN CRYSTALS (PH 6.98 COORDINATES)
1b2g: PH AFFECTS GLU B13 SWITCHING AND SULFATE BINDING IN CUBIC INSULIN CRYSTALS (PH 9.00 COORDINATES)
1b9e: HUMAN INSULIN MUTANT SERB9GLU
1ben: INSULIN COMPLEXED WITH 4-HYDROXYBENZAMIDE
1bph: CONFORMATIONAL CHANGES IN CUBIC INSULIN CRYSTALS IN THE PH RANGE 7-11
1cph: CONFORMATIONAL CHANGES IN CUBIC INSULIN CRYSTALS IN THE PH RANGE 7-11
1dph: CONFORMATIONAL CHANGES IN CUBIC INSULIN CRYSTALS IN THE PH RANGE 7-11
1ev3: Structure of the rhombohedral form of the M-cresol/insulin R6 hexamer
1ev6: Structure of the monoclinic form of the M-cresol/insulin R6 hexamer
1evr: The structure of the resorcinol/insulin R6 hexamer
1fu2: FIRST PROTEIN STRUCTURE DETERMINED FROM X-RAY POWDER DIFFRACTION DATA
1fub: FIRST PROTEIN STRUCTURE DETERMINED FROM X-RAY POWDER DIFFRACTION DATA
1g7a: 1.2 A structure of T3R3 human insulin at 100 K
1g7b: 1.3 A STRUCTURE OF T3R3 HUMAN INSULIN AT 100 K
1guj: INSULIN AT PH 2: STRUCTURAL ANALYSIS OF THE CONDITIONS PROMOTING INSULIN FIBRE FORMATION.
1hiq: PARADOXICAL STRUCTURE AND FUNCTION IN A MUTANT HUMAN INSULIN ASSOCIATED WITH DIABETES MELLITUS
1hit: RECEPTOR BINDING REDEFINED BY A STRUCTURAL SWITCH IN A MUTANT HUMAN INSULIN
1hls: NMR STRUCTURE OF THE HUMAN INSULIN-HIS(B16)
1htv: CRYSTAL STRUCTURE OF DESTRIPEPTIDE (B28-B30) INSULIN
1iza: ROLE OF B13 GLU IN INSULIN ASSEMBLY: THE HEXAMER STRUCTURE OF RECOMBINANT MUTANT (B13 GLU-> GLN) INSULIN
1izb: ROLE OF B13 GLU IN INSULIN ASSEMBLY: THE HEXAMER STRUCTURE OF RECOMBINANT MUTANT (B13 GLU-> GLN) INSULIN
1j73: Crystal structure of an unstable insulin analog with native activity.
1jca: Non-standard Design of Unstable Insulin Analogues with Enhanced Activity
1jco: Solution structure of the monomeric [Thr(B27)->Pro,Pro(B28)->Thr] insulin mutant (PT insulin)
1lph: LYS(B28)PRO(B29)-HUMAN INSULIN
1m5a: Crystal Structure of 2-Co(2+)-Insulin at 1.2A Resolution
1mhi: THREE-DIMENSIONAL SOLUTION STRUCTURE OF AN INSULIN DIMER. A STUDY OF THE B9(ASP) MUTANT OF HUMAN INSULIN USING NUCLEAR MAGNETIC RESONANCE DISTANCE GEOMETRY AND RESTRAINED MOLECULAR DYNAMICS
1mhj: SOLUTION STRUCTURE OF THE SUPERACTIVE MONOMERIC DES-[PHE(B25)] HUMAN INSULIN MUTANT. ELUCIDATION OF THE STRUCTURAL BASIS FOR THE MONOMERIZATION OF THE DES-[PHE(B25)] INSULIN AND THE DIMERIZATION OF NATIVE INSULIN
1mpj: X-RAY CRYSTALLOGRAPHIC STUDIES ON HEXAMERIC INSULINS IN THE PRESENCE OF HELIX-STABILIZING AGENTS, THIOCYANATE, METHYLPARABEN AND PHENOL
1mso: T6 Human Insulin at 1.0 A Resolution
1os3: Dehydrated T6 human insulin at 100 K
1os4: Dehydrated T6 human insulin at 295 K
1q4v: CRYSTAL STRUCTURE OF ALLO-ILEA2-INSULIN, AN INACTIVE CHIRAL ANALOGUE: IMPLICATIONS FOR THE MECHANISM OF RECEPTOR
1qiy: HUMAN INSULIN HEXAMERS WITH CHAIN B HIS MUTATED TO TYR COMPLEXED WITH PHENOL
1qiz: HUMAN INSULIN HEXAMERS WITH CHAIN B HIS MUTATED TO TYR COMPLEXED WITH RESORCINOL
1qj0: HUMAN INSULIN HEXAMERS WITH CHAIN B HIS MUTATED TO TYR
1rwe: Enhancing the activity of insulin at receptor edge: crystal structure and photo-cross-linking of A8 analogues
1sf1: NMR STRUCTURE OF HUMAN INSULIN under Amyloidogenic Condition, 15 STRUCTURES
1t0c: Solution Structure of Human Proinsulin C-Peptide
1trz: CRYSTALLOGRAPHIC EVIDENCE FOR DUAL COORDINATION AROUND ZINC IN THE T3R3 HUMAN INSULIN HEXAMER
1tyl: THE STRUCTURE OF A COMPLEX OF HEXAMERIC INSULIN AND 4'-HYDROXYACETANILIDE
1tym: THE STRUCTURE OF A COMPLEX OF HEXAMERIC INSULIN AND 4'-HYDROXYACETANILIDE
1uz9: CRYSTALLOGRAPHIC AND SOLUTION STUDIES OF N-LITHOCHOLYL INSULIN: A NEW GENERATION OF PROLONGED-ACTING INSULINS.
1w8p: STRUCTURAL PROPERTIES OF THE B25TYR-NME-B26PHE INSULIN MUTANT.
1wav: CRYSTAL STRUCTURE OF FORM B MONOCLINIC CRYSTAL OF INSULIN
1xda: STRUCTURE OF INSULIN
1xgl: HUMAN INSULIN DISULFIDE ISOMER, NMR, 10 STRUCTURES
1xw7: Diabetes-Associated Mutations in Human Insulin: Crystal Structure and Photo-Cross-Linking Studies of A-Chain Variant Insulin Wakayama
1zeg: STRUCTURE OF B28 ASP INSULIN IN COMPLEX WITH PHENOL
1zeh: STRUCTURE OF INSULIN
1zni: INSULIN
1znj: INSULIN, MONOCLINIC CRYSTAL FORM
2a3g: The structure of T6 bovine insulin
2aiy: R6 HUMAN INSULIN HEXAMER (SYMMETRIC), NMR, 20 STRUCTURES
2bn1: INSULIN AFTER A HIGH DOSE X-RAY BURN
2bn3: INSULIN BEFORE A HIGH DOSE X-RAY BURN
2c8q: INSULINE(1SEC) AND UV LASER EXCITED FLUORESCENCE
2c8r: INSULINE(60SEC) AND UV LASER EXCITED FLUORESCENCE
2g4m: Insulin collected at 2.0 A wavelength
2g54: Crystal structure of Zn-bound human insulin-degrading enzyme in complex with insulin B chain
2g56: crystal structure of human insulin-degrading enzyme in complex with insulin B chain
2hiu: NMR STRUCTURE OF HUMAN INSULIN IN 20% ACETIC ACID, ZINC-FREE, 10 STRUCTURES
2ins: THE STRUCTURE OF DES-PHE B1 BOVINE INSULIN
2omg: Structure of human insulin cocrystallized with protamine and urea
2omh: Structure of human insulin cocrystallized with ARG-12 peptide in presence of urea
2omi: Structure of human insulin cocrystallized with protamine
2tci: X-RAY CRYSTALLOGRAPHIC STUDIES ON HEXAMERIC INSULINS IN THE PRESENCE OF HELIX-STABILIZING AGENTS, THIOCYANATE, METHYLPARABEN AND PHENOL
3aiy: R6 HUMAN INSULIN HEXAMER (SYMMETRIC), NMR, REFINED AVERAGE STRUCTURE
3ins: STRUCTURE OF INSULIN. RESULTS OF JOINT NEUTRON AND X-RAY REFINEMENT
3mth: X-RAY CRYSTALLOGRAPHIC STUDIES ON HEXAMERIC INSULINS IN THE PRESENCE OF HELIX-STABILIZING AGENTS, THIOCYANATE, METHYLPARABEN AND PHENOL
4aiy: R6 HUMAN INSULIN HEXAMER (SYMMETRIC), NMR, 'GREEN' SUBSTATE, AVERAGE STRUCTURE
4ins: THE STRUCTURE OF 2ZN PIG INSULIN CRYSTALS AT 1.5 ANGSTROMS RESOLUTION
5aiy: R6 HUMAN INSULIN HEXAMER (SYMMETRIC), NMR, 'RED' SUBSTATE, AVERAGE STRUCTURE
6ins: X-RAY ANALYSIS OF THE SINGLE CHAIN /B29-A1$ PEPTIDE-LINKED INSULIN MOLECULE. A COMPLETELY INACTIVE ANALOGUE
7ins: STRUCTURE OF PORCINE INSULIN COCRYSTALLIZED WITH CLUPEINE Z
9ins: MONOVALENT CATION BINDING IN CUBIC INSULIN CRYSTALS
en/2749.html.txt
ADDED
Intel Corporation is an American multinational corporation and technology company headquartered in Santa Clara, California, in Silicon Valley. It is the world's largest and highest valued semiconductor chip manufacturer based on revenue,[4][5] and is the inventor of the x86 series of microprocessors, the processors found in most personal computers (PCs). Intel ranked No. 46 in the 2018 Fortune 500 list of the largest United States corporations by total revenue.[6] Intel is incorporated in Delaware.[7]
Intel supplies microprocessors for computer system manufacturers such as Apple, Lenovo, HP, and Dell. Intel also manufactures motherboard chipsets, network interface controllers and integrated circuits, flash memory, graphics chips, embedded processors and other devices related to communications and computing.
Intel Corporation was founded on July 18, 1968, by semiconductor pioneers Robert Noyce and Gordon Moore (of Moore's law), and is associated with the executive leadership and vision of Andrew Grove. The company's name was conceived as a portmanteau of the words integrated and electronics, with co-founder Noyce having been a key inventor of the integrated circuit (the microchip). The fact that "intel" is the term for intelligence information also made the name appropriate.[8] Intel was an early developer of SRAM and DRAM memory chips, which represented the majority of its business until 1981. Although Intel created the world's first commercial microprocessor chip in 1971, it was not until the success of the personal computer (PC) that this became its primary business.
During the 1990s, Intel invested heavily in new microprocessor designs fostering the rapid growth of the computer industry. During this period Intel became the dominant supplier of microprocessors for PCs and was known for aggressive and anti-competitive tactics in defense of its market position, particularly against Advanced Micro Devices (AMD), as well as a struggle with Microsoft for control over the direction of the PC industry.[9][10]
The Open Source Technology Center at Intel hosts PowerTOP and LatencyTOP, and supports other open-source projects such as Wayland, Mesa3D, Intel Array Building Blocks, Threading Building Blocks (TBB), and Xen.[11]
In 2017, Dell accounted for about 16% of Intel's total revenues, Lenovo accounted for 13% of total revenues, and HP Inc. accounted for 11% of total revenues.[12]
According to IDC, while Intel enjoyed the biggest market share in both the overall worldwide PC microprocessor market (73.3%) and the mobile PC microprocessor (80.4%) in the second quarter of 2011, the numbers decreased by 1.5% and 1.9% compared to the first quarter of 2011.[13][14]
Intel's market share in the enthusiast segment decreased significantly as of 2019.[15] Intel has also faced delays with its 10 nm products. According to Intel CEO Bob Swan, the delay was caused by the company's overly aggressive strategy for moving to its next node.[16] Some OEMs, for example Microsoft, began shipping products with AMD CPUs.[17]
In the 1980s Intel was among the top ten sellers of semiconductors (10th in 1987) in the world. In 1992,[18] Intel became the biggest chip maker by revenue and has held the position ever since. Other top semiconductor companies include TSMC, Advanced Micro Devices, Samsung, Texas Instruments, Toshiba and STMicroelectronics.
Competitors in PC chipsets include Advanced Micro Devices, VIA Technologies, Silicon Integrated Systems, and Nvidia. Intel's competitors in networking include NXP Semiconductors, Infineon, Broadcom Limited, Marvell Technology Group and Applied Micro Circuits Corporation, and competitors in flash memory include Spansion, Samsung, Qimonda, Toshiba, STMicroelectronics, and SK Hynix.
The only major competitor in the x86 processor market is Advanced Micro Devices (AMD), with which Intel has had full cross-licensing agreements since 1976: each partner can use the other's patented technological innovations without charge after a certain time.[19] However, the cross-licensing agreement is canceled in the event of an AMD bankruptcy or takeover.[20]
Some smaller competitors such as VIA Technologies produce low-power x86 processors for small form factor computers and portable equipment. However, the advent of mobile computing devices, in particular smartphones, has in recent years led to a decline in PC sales.[21] Since over 95% of the world's smartphones currently use processors designed by ARM Holdings, ARM has become a major competitor in Intel's processor market. ARM is also planning to make inroads into the PC and server market.[22]
Intel has been involved in several disputes regarding violation of antitrust laws, which are noted below.
Intel was founded in Mountain View, California, in 1968 by Gordon E. Moore (known for "Moore's law"), a chemist, and Robert Noyce, a physicist and co-inventor of the integrated circuit. Arthur Rock (investor and venture capitalist) helped them find investors, while Max Palevsky was on the board from an early stage.[23] Moore and Noyce had left Fairchild Semiconductor to found Intel. Rock was not an employee, but he was an investor and was chairman of the board.[24][25] The total initial investment in Intel was $2.5 million in convertible debentures (equivalent to $18.4 million in 2019) and $10,000 from Rock. Just 2 years later, Intel became a public company via an initial public offering (IPO), raising $6.8 million ($23.50 per share).[24] Intel's third employee was Andy Grove,[26] a chemical engineer, who later ran the company through much of the 1980s and the high-growth 1990s.
In deciding on a name, Moore and Noyce quickly rejected "Moore Noyce",[27] a near homophone of "more noise" – an ill-suited name for an electronics company, since noise in electronics is usually undesirable and typically associated with bad interference. Instead, they founded the company as N M Electronics on July 18, 1968, but by the end of the month had changed the name to Intel, which stood for Integrated Electronics.[note 1] Since "Intel" was already trademarked by the hotel chain Intelco, they had to buy the rights for the name.[24][33]
At its founding, Intel was distinguished by its ability to make logic circuits using semiconductor devices. The founders' goal was the semiconductor memory market, widely predicted to replace magnetic-core memory. Its first product, a quick entry into the small, high-speed memory market in 1969, was the 3101 Schottky TTL bipolar 64-bit static random-access memory (SRAM), which was nearly twice as fast as earlier Schottky diode implementations by Fairchild and the Electrotechnical Laboratory in Tsukuba, Japan.[34][35] In the same year, Intel also produced the 3301 Schottky bipolar 1024-bit read-only memory (ROM)[36] and the first commercial metal–oxide–semiconductor field-effect transistor (MOSFET) silicon gate SRAM chip, the 256-bit 1101.[24][37][38] While the 1101 was a significant advance, its complex static cell structure made it too slow and costly for mainframe memories. The three-transistor cell implemented in the first commercially available dynamic random-access memory (DRAM), the 1103 released in 1970, solved these issues. The 1103 was the bestselling semiconductor memory chip in the world by 1972, as it replaced core memory in many applications.[39][40] Intel's business grew during the 1970s as it expanded and improved its manufacturing processes and produced a wider range of products, still dominated by various memory devices.
Intel created the first commercially available microprocessor (Intel 4004) in 1971.[24] The microprocessor represented a notable advance in the technology of integrated circuitry, as it miniaturized the central processing unit of a computer, which then made it possible for small machines to perform calculations that in the past only very large machines could do. Considerable technological innovation was needed before the microprocessor could actually become the basis of what was first known as a "mini computer" and then known as a "personal computer".[41] Intel also created one of the first microcomputers in 1973.[37][42] Intel opened its first international manufacturing facility in 1972, in Malaysia, which would host multiple Intel operations, before opening assembly facilities and semiconductor plants in Singapore and Jerusalem in the early 1980s, and manufacturing and development centres in China, India and Costa Rica in the 1990s.[43] By the early 1980s, its business was dominated by dynamic random-access memory (DRAM) chips. However, increased competition from Japanese semiconductor manufacturers had, by 1983, dramatically reduced the profitability of this market. The growing success of the IBM personal computer, based on an Intel microprocessor, was among factors that convinced Gordon Moore (CEO since 1975) to shift the company's focus to microprocessors and to change fundamental aspects of that business model. Moore's decision to sole-source Intel's 386 chip played into the company's continuing success.
By the end of the 1980s, buoyed by its fortuitous position as microprocessor supplier to IBM and IBM's competitors within the rapidly growing personal computer market, Intel embarked on a 10-year period of unprecedented growth as the primary (and most profitable) hardware supplier to the PC industry, part of the winning 'Wintel' combination. Moore handed over to Andy Grove in 1987. By launching its Intel Inside marketing campaign in 1991, Intel was able to associate brand loyalty with consumer selection, so that by the end of the 1990s, its line of Pentium processors had become a household name.
After 2000, growth in demand for high-end microprocessors slowed. Competitors, notably AMD (Intel's largest competitor in its primary x86 architecture market), garnered significant market share, initially in low-end and mid-range processors but ultimately across the product range, and Intel's dominant position in its core market was greatly reduced,[44] mostly due to the controversial NetBurst microarchitecture. In the early 2000s, then-CEO Craig Barrett attempted to diversify the company's business beyond semiconductors, but few of these activities were ultimately successful.
Intel had also for a number of years been embroiled in litigation. US law did not initially recognize intellectual property rights related to microprocessor topology (circuit layouts), until the Semiconductor Chip Protection Act of 1984, a law sought by Intel and the Semiconductor Industry Association (SIA).[45] During the late 1980s and 1990s (after this law was passed), Intel also sued companies that tried to develop competitor chips to the 80386 CPU.[46] The lawsuits were noted to significantly burden the competition with legal bills, even if Intel lost the suits.[46] Antitrust allegations had been simmering since the early 1990s and had been the cause of one lawsuit against Intel in 1991. In 2004 and 2005, AMD brought further claims against Intel related to unfair competition.
In 2005, CEO Paul Otellini reorganized the company to refocus its core processor and chipset business on platforms (enterprise, digital home, digital health, and mobility).
In 2006, Intel unveiled its Core microarchitecture to widespread critical acclaim;[47] the product range was perceived as an exceptional leap in processor performance that at a stroke regained much of its leadership of the field.[48][49] In 2008, Intel had another "tick" when it introduced the Penryn microarchitecture, which was 45 nm. Later that year, Intel released a processor with the Nehalem architecture. Nehalem had positive reviews.[50]
On June 27, 2006, the sale of Intel's XScale assets was announced. Intel agreed to sell the XScale processor business to Marvell Technology Group for an estimated $600 million and the assumption of unspecified liabilities. The move was intended to permit Intel to focus its resources on its core x86 and server businesses, and the acquisition completed on November 9, 2006.[51]
In 2010, Intel purchased McAfee, a manufacturer of computer security technology, for $7.68 billion.[52] As a condition for regulatory approval of the transaction, Intel agreed to provide rival security firms with all necessary information that would allow their products to use Intel's chips and personal computers.[53] After the acquisition, Intel had about 90,000 employees, including about 12,000 software engineers.[54] In September 2016, Intel sold a majority stake in its computer-security unit to TPG Capital, reversing the five-year-old McAfee acquisition.[55]
In August 2010, Intel and Infineon Technologies announced that Intel would acquire Infineon's Wireless Solutions business.[56] Intel planned to use Infineon's technology in laptops, smart phones, netbooks, tablets and embedded computers in consumer products, eventually integrating its wireless modem into Intel's silicon chips.[57]
In March 2011, Intel bought most of the assets of Cairo-based SySDSoft.[58]
In July 2011, Intel announced that it had agreed to acquire Fulcrum Microsystems Inc., a company specializing in network switches.[59] The company used to be included on the EE Times list of 60 Emerging Startups.[59]
In October 2011, Intel reached a deal to acquire Telmap, an Israeli-based navigation software company. The purchase price was not disclosed, but Israeli media reported values around $300 million to $350 million.[60]
In July 2012, Intel agreed to buy 10% of the shares of ASML Holding NV for $2.1 billion and another $1 billion for 5% of the shares that need shareholder approval to fund relevant research and development efforts, as part of a EUR3.3 billion ($4.1 billion) deal to accelerate the development of 450-millimeter wafer technology and extreme ultra-violet lithography by as much as two years.[61]
In July 2013, Intel confirmed the acquisition of Omek Interactive, an Israeli company that makes technology for gesture-based interfaces, without disclosing the monetary value of the deal. An official statement from Intel read: "The acquisition of Omek Interactive will help increase Intel's capabilities in the delivery of more immersive perceptual computing experiences." One report estimated the value of the acquisition between US$30 million and $50 million.[62]
The acquisition of a Spanish natural language recognition startup, Indisys, was announced in September 2013. The terms of the deal were not disclosed, but an email from an Intel representative stated: "Intel has acquired Indisys, a privately held company based in Seville, Spain. The majority of Indisys employees joined Intel. We signed the agreement to acquire the company on May 31 and the deal has been completed." Indisys explains that its artificial intelligence (AI) technology "is a human image, which converses fluently and with common sense in multiple languages and also works in different platforms."[63]
In December 2014, Intel bought PasswordBox.[64]
In January 2015, Intel purchased a 30% stake in Vuzix, a smart glasses manufacturer. The deal was worth $24.8 million.[65]
In February 2015, Intel announced its agreement to purchase German network chipmaker Lantiq, to aid in its expansion of its range of chips in devices with Internet connection capability.[66]
In June 2015, Intel announced its agreement to purchase FPGA design company Altera for $16.7 billion, in its largest acquisition to date.[67] The acquisition completed in December 2015.[68]
In October 2015, Intel bought cognitive computing company Saffron Technology for an undisclosed price.[69]
In August 2016, Intel purchased deep-learning startup Nervana Systems for $350 million.[70]
In December 2016, Intel acquired computer vision startup Movidius for an undisclosed price.[71]
In March 2017, Intel announced that they had agreed to purchase Mobileye, an Israeli developer of "autonomous driving" systems for US$15.3 billion.[72]
In June 2017, Intel Corporation announced an investment of over Rs.1100 crore ($170 million) for its upcoming Research and Development (R&D) centre in Bangalore.[73]
In January 2019, Intel announced an investment of over $11 billion on a new Israeli chip plant, as told by the Israeli Finance Minister.[74]
In 2008, Intel spun off key assets of a solar startup business effort to form an independent company, SpectraWatt Inc. In 2011, SpectraWatt filed for bankruptcy.[101]
In February 2011, Intel began to build a new microprocessor manufacturing facility in Chandler, Arizona, completed in 2013 at a cost of $5 billion.[102] The building was never used.[103] The company produces three-quarters of its products in the United States, although three-quarters of its revenue come from overseas.[104]
In April 2011, Intel began a pilot project with ZTE Corporation to produce smartphones using the Intel Atom processor for China's domestic market.
In December 2011, Intel announced that it reorganized several of its business units into a new mobile and communications group[105] that would be responsible for the company's smartphone, tablet, and wireless efforts.
Finding itself with excess fab capacity after the failure of the Ultrabook to gain market traction and with PC sales declining, in 2013 Intel reached a foundry agreement to produce chips for Altera using 14-nm process. General Manager of Intel's custom foundry division Sunit Rikhi indicated that Intel would pursue further such deals in the future.[106] This was after poor sales of Windows 8 hardware caused a major retrenchment for most of the major semiconductor manufacturers, except for Qualcomm, which continued to see healthy purchases from its largest customer, Apple.[107]
As of July 2013, five companies were using Intel's fabs via the Intel Custom Foundry division: Achronix, Tabula, Netronome, Microsemi, and Panasonic – most are field-programmable gate array (FPGA) makers, but Netronome designs network processors. Only Achronix began shipping chips made by Intel using the 22-nm Tri-Gate process.[108][109] Several other customers also exist but were not announced at the time.[110]
The Alliance for Affordable Internet (A4AI) was launched in October 2013 and Intel is part of the coalition of public and private organisations that also includes Facebook, Google, and Microsoft. Led by Sir Tim Berners-Lee, the A4AI seeks to make Internet access more affordable so that access is broadened in the developing world, where only 31% of people are online. Google will help to decrease Internet access prices so that they fall below the UN Broadband Commission's worldwide target of 5% of monthly income.[111]
In October 2018, Arm Holdings partnered with Intel in order to share code for embedded systems through the Yocto Project.[112]
On July 25, 2019, Apple and Intel announced an agreement for Apple to acquire the smartphone modem business of Intel Mobile Communications for US$1 billion.[113]
Intel's first products were shift register memory and random-access memory integrated circuits, and Intel grew to be a leader in the fiercely competitive DRAM, SRAM, and ROM markets throughout the 1970s. Concurrently, Intel engineers Marcian Hoff, Federico Faggin, Stanley Mazor and Masatoshi Shima invented Intel's first microprocessor. Originally developed for the Japanese company Busicom to replace a number of ASICs in a calculator already produced by Busicom, the Intel 4004 was introduced to the mass market on November 15, 1971, though the microprocessor did not become the core of Intel's business until the mid-1980s. (Intel is usually credited, along with Texas Instruments, with the almost-simultaneous invention of the microprocessor.)
In 1983, at the dawn of the personal computer era, Intel's profits came under increased pressure from Japanese memory-chip manufacturers, and then-president Andy Grove focused the company on microprocessors. Grove described this transition in the book Only the Paranoid Survive. A key element of his plan was the notion, then considered radical, of becoming the single source for successors to the popular 8086 microprocessor.
Until then, the manufacture of complex integrated circuits was not reliable enough for customers to depend on a single supplier, but Grove began producing processors in three geographically distinct factories,[which?] and ceased licensing the chip designs to competitors such as Zilog and AMD.[citation needed] When the PC industry boomed in the late 1980s and 1990s, Intel was one of the primary beneficiaries.
Despite the ultimate importance of the microprocessor, the 4004 and its successors, the 8008 and the 8080, were never major revenue contributors at Intel. When the next processor, the 8086 (and its variant the 8088), was completed in 1978, Intel embarked on a major marketing and sales campaign for that chip nicknamed "Operation Crush", intended to win as many customers for the processor as possible. One design win was the newly created IBM PC division, though the importance of this was not fully realized at the time.
IBM introduced its personal computer in 1981, and it was rapidly successful. In 1982, Intel created the 80286 microprocessor, which, two years later, was used in the IBM PC/AT. Compaq, the first IBM PC "clone" manufacturer, produced a desktop system based on the faster 80286 processor in 1985 and in 1986 quickly followed with the first 80386-based system, beating IBM and establishing a competitive market for PC-compatible systems and setting up Intel as a key component supplier.
In 1975, the company had started a project to develop a highly advanced 32-bit microprocessor, finally released in 1981 as the Intel iAPX 432. The project was too ambitious and the processor was never able to meet its performance objectives, and it failed in the marketplace. Intel extended the x86 architecture to 32 bits instead.[114][115]
During this period Andrew Grove dramatically redirected the company, closing much of its DRAM business and directing resources to the microprocessor business. Of perhaps greater importance was his decision to "single-source" the 386 microprocessor. Prior to this, microprocessor manufacturing was in its infancy, and manufacturing problems frequently reduced or stopped production, interrupting supplies to customers. To mitigate this risk, these customers typically insisted that multiple manufacturers produce chips they could use to ensure a consistent supply. The 8080 and 8086-series microprocessors were produced by several companies, notably AMD, with which Intel had a technology-sharing contract. Grove made the decision not to license the 386 design to other manufacturers, instead, producing it in three geographically distinct factories: Santa Clara, California; Hillsboro, Oregon; and Chandler, a suburb of Phoenix, Arizona. He convinced customers that this would ensure consistent delivery. In doing this, Intel breached its contract with AMD, which sued and was paid millions of dollars in damages but could not manufacture new Intel CPU designs any longer. (Instead, AMD started to develop and manufacture its own competing x86 designs.) As the success of Compaq's Deskpro 386 established the 386 as the dominant CPU choice, Intel achieved a position of near-exclusive dominance as its supplier. Profits from this funded rapid development of both higher-performance chip designs and higher-performance manufacturing capabilities, propelling Intel to a position of unquestioned leadership by the early 1990s.
Intel introduced the 486 microprocessor in 1989, and in 1990 established a second design team, designing the processors code-named "P5" and "P6" in parallel and committing to a major new processor every two years, versus the four or more years such designs had previously taken. Engineers Vinod Dham and Rajeev Chandrasekhar (Member of Parliament, India) were key figures on the core team that invented the 486 chip and later, Intel's signature Pentium chip. The P5 project was earlier known as "Operation Bicycle," referring to the cycles of the processor through two parallel execution pipelines. The P5 was introduced in 1993 as the Intel Pentium, substituting a registered trademark name for the former part number (numbers, such as 486, cannot be legally registered as trademarks in the United States). The P6 followed in 1995 as the Pentium Pro and improved into the Pentium II in 1997. New architectures were developed alternately in Santa Clara, California and Hillsboro, Oregon.
The Santa Clara design team embarked in 1993 on a successor to the x86 architecture, codenamed "P7". The first attempt was dropped a year later but quickly revived in a cooperative program with Hewlett-Packard engineers, though Intel soon took over primary design responsibility. The resulting implementation of the IA-64 64-bit architecture was the Itanium, finally introduced in June 2001. The Itanium's performance running legacy x86 code did not meet expectations, and it failed to compete effectively with x86-64, which was AMD's 64-bit extension of the 32-bit x86 architecture (Intel uses the name Intel 64, previously EM64T). In 2017, Intel announced that the Itanium 9700 series (Kittson) would be the last Itanium chips produced.[116][117]
The Hillsboro team designed the Willamette processors (initially code-named P68), which were marketed as the Pentium 4.
In June 1994, Intel engineers discovered a flaw in the floating-point math subsection of the P5 Pentium microprocessor. Under certain data-dependent conditions, the low-order bits of the result of a floating-point division would be incorrect. The error could compound in subsequent calculations. Intel corrected the error in a future chip revision, and under public pressure it issued a total recall and replaced the defective Pentium CPUs (which were limited to some 60, 66, 75, 90, and 100 MHz models[118]) on customer request.
The bug was discovered independently in October 1994 by Thomas Nicely, Professor of Mathematics at Lynchburg College. He contacted Intel but received no response. On October 30, he posted a message about his finding on the Internet.[119] Word of the bug spread quickly and reached the industry press. The bug was easy to replicate; a user could enter specific numbers into the calculator on the operating system. Consequently, many users did not accept Intel's statements that the error was minor and "not even an erratum." During Thanksgiving, in 1994, The New York Times ran a piece by journalist John Markoff spotlighting the error. Intel changed its position and offered to replace every chip, quickly putting in place a large end-user support organization. This resulted in a $475 million charge against Intel's 1994 revenue.[120] Dr. Nicely later learned that Intel had discovered the FDIV bug in its own testing a few months before him (but had decided not to inform customers).[121]
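One widely circulated pair of numbers made the flaw easy to demonstrate. As a minimal sketch (Python is used here purely for illustration; the same arithmetic could be typed into any calculator application), dividing 4195835 by 3145727 and multiplying back exposes the incorrect low-order bits on an affected chip:

    # Classic FDIV check: a correct FPU yields (approximately) 0,
    # while a flawed P5 Pentium yields roughly 256, because the low-order
    # bits of 4195835 / 3145727 are computed incorrectly.
    x = 4195835.0
    y = 3145727.0
    print(x - (x / y) * y)  # expected: 0.0 on correct hardware

On modern processors this prints 0.0; only an affected Pentium produces the nonzero residue.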
The "Pentium flaw" incident, Intel's response to it, and the surrounding media coverage propelled Intel from being a technology supplier generally unknown to most computer users to a household name. Dovetailing with an uptick in the "Intel Inside" campaign, the episode is considered to have been a positive event for Intel, changing some of its business practices to be more end-user focused and generating substantial public awareness, while avoiding a lasting negative impression.[122]
During this period, Intel undertook two major supporting advertising campaigns. The first campaign, the 1991 "Intel Inside" marketing and branding campaign, is widely known and has become synonymous with Intel itself. The idea of "ingredient branding" was new at the time, with only NutraSweet and a few others making attempts to do so.[123] This campaign established Intel, which had been a component supplier little-known outside the PC industry, as a household name.
The second campaign, Intel's Systems Group, which began in the early 1990s, showcased manufacturing of PC motherboards, the main board component of a personal computer, and the one into which the processor (CPU) and memory (RAM) chips are plugged.[124] The Systems Group campaign was lesser known than the Intel Inside campaign.
Shortly after, Intel began manufacturing fully configured "white box" systems for the dozens of PC clone companies that rapidly sprang up.[citation needed] At its peak in the mid-1990s, Intel manufactured over 15% of all PCs, making it the third-largest supplier at the time.[citation needed]
During the 1990s, Intel Architecture Labs (IAL) was responsible for many of the hardware innovations for the PC, including the PCI Bus, the PCI Express (PCIe) bus, and Universal Serial Bus (USB). IAL's software efforts met with a more mixed fate; its video and graphics software was important in the development of software digital video,[citation needed] but later its efforts were largely overshadowed by competition from Microsoft. The competition between Intel and Microsoft was revealed in testimony by then IAL Vice-President Steven McGeady at the Microsoft antitrust trial (United States v. Microsoft Corp.).
In early January 2018, it was reported that all Intel processors made since 1995[125][126] (besides Intel Itanium and pre-2013 Intel Atom) have been subject to two security flaws dubbed Meltdown and Spectre.[127][128]
The impact on performance resulting from software patches is "workload-dependent". Several procedures to help protect home computers and related devices from the Spectre and Meltdown security vulnerabilities have been published.[129][130][131][132] Spectre patches have been reported to significantly slow down performance, especially on older computers; on the newer 8th generation Core platforms, benchmark performance drops of 2–14 percent have been measured.[133] Meltdown patches may also produce performance loss.[134][135][136] It is believed that "hundreds of millions" of systems could be affected by these flaws.[126][137]
On March 15, 2018, Intel reported that it would redesign its CPUs (performance losses to be determined) to protect against the Spectre security vulnerability, and expected to release the newly redesigned processors later in 2018.[138][139]
On May 3, 2018, eight additional Spectre-class flaws were reported. Intel stated that it was preparing new patches to mitigate these flaws.[140]
On August 14, 2018, Intel disclosed three additional chip flaws referred to as L1 Terminal Fault (L1TF). They reported that previously released microcode updates, along with new, pre-release microcode updates can be used to mitigate these flaws.[141][142]
On May 14, 2019, Intel disclosed three new vulnerabilities affecting all Intel CPUs, named "Fallout", "RIDL", and "ZombieLoad", allowing a program to read information recently written, read data in the line-fill buffers and load ports, and leak information from other processes and virtual machines.[143][144][145] Recent Coffee Lake-series CPUs are even more vulnerable, due to hardware mitigations for Spectre.[citation needed]
On March 5, 2020, computer security experts reported another Intel chip security flaw, besides the Meltdown and Spectre flaws, with the systematic name CVE-2019-0090 (or, "Intel CSME Bug").[146] This newly found flaw is not fixable with a firmware update, and affects nearly "all Intel chips released in the past five years".[147][148][149]
Intel decided to discontinue its Intel Remote Keyboard Android app after encountering several security bugs. The app was launched in early 2015 to help users control Intel single-board computers and Intel NUC devices. The company has asked Remote Keyboard users to delete the app at their earliest convenience.[150]
In 2008, Intel began shipping mainstream solid-state drives (SSDs) with up to 160 GB storage capacities.[151] As with their CPUs, Intel develops SSD chips using ever-smaller nanometer processes. These SSDs make use of industry standards such as NAND flash,[152] mSATA,[153] PCIe, and NVMe. In 2017, Intel introduced SSDs based on 3D XPoint technology under the Optane brand name.[154]
The Intel Scientific Computers division was founded in 1984 by Justin Rattner, to design and produce parallel computers based on Intel microprocessors connected in hypercube internetwork topology.[155] In 1992, the name was changed to the Intel Supercomputing Systems Division, and development of the iWarp architecture was also subsumed.[156] The division designed several supercomputer systems, including the Intel iPSC/1, iPSC/2, iPSC/860, Paragon and ASCI Red. In November 2014, Intel revealed that it is going to use light beams to speed up supercomputers.[157]
In 2007, Intel formed the Moblin project to create an open source Linux operating system for x86-based mobile devices. Following the success of Google's Android platform which ran exclusively on ARM processors, Intel announced on February 15, 2010, that it would partner with Nokia and merge Moblin with Nokia's ARM-based Maemo project to create MeeGo.[158] MeeGo was supported by the Linux Foundation.[159]
In February 2011, Nokia left the project after partnering with Microsoft, leaving Intel in sole charge of MeeGo. An Intel spokeswoman said it was "disappointed" by Nokia's decision but that Intel was committed to MeeGo.[160] In September 2011 Intel stopped working on MeeGo and partnered with Samsung to create Tizen, a new project hosted by the Linux Foundation.[161] Intel has since been co-developing the Tizen operating system which runs on several Samsung devices.
Two factors combined to end this dominance: the slowing of PC demand growth beginning in 2000 and the rise of the low-cost PC. By the end of the 1990s, microprocessor performance had outstripped software demand for that CPU power. Aside from high-end server systems and software, whose demand dropped with the end of the "dot-com bubble", consumer systems ran effectively on increasingly low-cost systems after 2000. Intel's strategy of producing ever-more-powerful processors and obsoleting their predecessors stumbled,[citation needed] leaving an opportunity for rapid gains by competitors, notably AMD. This, in turn, lowered the profitability[citation needed] of the processor line and ended an era of unprecedented dominance of the PC hardware by Intel.[citation needed]
Intel's dominance in the x86 microprocessor market led to numerous charges of antitrust violations over the years, including FTC investigations in both the late 1980s and in 1999, and civil actions such as the 1997 suit by Digital Equipment Corporation (DEC) and a patent suit by Intergraph. Intel's market dominance (at one time[when?] it controlled over 85% of the market for 32-bit x86 microprocessors) combined with Intel's own hardball legal tactics (such as its infamous 338 patent suit versus PC manufacturers)[162] made it an attractive target for litigation, but few of the lawsuits ever amounted to anything.[clarification needed]
A case of industrial espionage arose in 1995 that involved both Intel and AMD. Bill Gaede, an Argentine formerly employed both at AMD and at Intel's Arizona plant, was arrested for attempting in 1993 to sell the i486 and P5 Pentium designs to AMD and to certain foreign powers.[163] Gaede videotaped data from his computer screen at Intel and mailed it to AMD, which immediately alerted Intel and authorities, resulting in Gaede's arrest. Gaede was convicted and sentenced to 33 months in prison in June 1996.[164][165]
On June 6, 2005, Steve Jobs, then CEO of Apple, announced that Apple would be transitioning from its long favored PowerPC architecture to the Intel x86 architecture because the future PowerPC road map was unable to satisfy Apple's needs. The first Macintosh computers containing Intel CPUs were announced on January 10, 2006, and Apple had its entire line of consumer Macs running on Intel processors by early August 2006. The Apple Xserve server was updated to Intel Xeon processors from November 2006 and was offered in a configuration similar to Apple's Mac Pro.[166]
On June 22, 2020, during the virtual WWDC, Apple announced that they would be switching some of their Mac line to their own ARM-based designs.
In July 2007, the company released a print advertisement for its Intel Core 2 Duo processor featuring six black runners appearing to bow down to a Caucasian male inside of an office setting (due to the posture taken by runners on starting blocks). According to Nancy Bhagat, Vice President of Intel Corporate Marketing, viewers found the ad to be "insensitive and insulting", and several Intel executives made public apologies.[167]
The Classmate PC is the company's first low-cost netbook computer.[168] In 2014, the company released an updated version of the Classmate PC.[169]
In June 2011, Intel introduced the first Pentium mobile processor based on the Sandy Bridge core. The B940, clocked at 2 GHz, is faster than existing or upcoming mobile Celerons, although it is almost identical to dual-core Celeron CPUs in all other aspects.[170] According to IHS iSuppli's report on September 28, 2011, Sandy Bridge chips have helped Intel increase its market share in global processor market to 81.8%, while AMD's market share dropped to 10.4%.[171]
Intel planned to introduce Medfield – a processor for tablets and smartphones – to the market in 2012, as an effort to compete with ARM.[172] As a 32-nanometer processor, Medfield is designed to be energy-efficient, which is one of the core features in ARM's chips.[173]
At the Intel Developer Forum (IDF) 2011 in San Francisco, Intel's partnership with Google was announced. By January 2012, Google's Android 2.3 would use Intel's Atom microprocessor.[174][175][176]
In July 2011, Intel announced that its server chips, the Xeon series, would use new sensors that can improve data center cooling efficiency.[177]
In 2011, Intel announced the Ivy Bridge processor family at the Intel Developer Forum.[178] Ivy Bridge supports both DDR3 memory and DDR3L chips.
As part of its efforts in the Positive Energy Buildings Consortium, Intel has been developing an application, called Personal Office Energy Monitor (POEM), to help office buildings to be more energy-efficient. With this application, employees can get the power consumption info for their office machines, so that they can figure out a better way to save energy in their working environment.[179]
Intel has introduced some simulation games, starting in 2009 with web-based IT Manager 3: Unseen Forces. In it, the player manages a company's IT department. The goal is to apply technology and skill to enable the company to grow from a small business into a global enterprise.[180][better source needed] The game has since been discontinued and succeeded in 2012 by the web-based multiplayer game IT Manager: Duels, which is no longer available.[citation needed]
In 2011, Intel announced that it is working on a car security system that connects to smartphones via an application. The application works by streaming video to a cloud service if a car armed with the system is broken into.[181]
Intel also developed High-Bandwidth Digital Content Protection (HDCP) to prevent copying of digital audio and video content as it travels across connections.
In 2013, Intel's Kirk Skaugen said that Intel's exclusive focus on Microsoft platforms was a thing of the past and that they would now support all "tier-one operating systems" such as Linux, Android, iOS, and Chrome.[182]
In 2014, Intel cut thousands of employees in response to "evolving market trends",[183] and offered to subsidize manufacturers for the extra costs involved in using Intel chips in their tablets.[184]
In June 2013, Intel unveiled its fourth generation of Intel Core processors (Haswell) at the Computex trade show in Taipei.[185]
On January 6, 2014, Intel announced that it was "teaming with the Council of Fashion Designers of America, Barneys New York and Opening Ceremony around the wearable tech field."[186]
Intel developed a reference design for wearable smart earbuds that provide biometric and fitness information. The Intel smart earbuds provide full stereo audio, and monitor heart rate, while the applications on the user's phone keep track of run distance and calories burned.
CNBC reported that Intel eliminated the division that worked on health wearables in 2017.[187]
On November 19, 2015, Intel, alongside ARM Holdings, Dell, Cisco Systems, Microsoft, and Princeton University, founded the OpenFog Consortium, to promote interests and development in fog computing.[188] Intel's Chief Strategist for the IoT Strategy and Technology Office, Jeff Faders, became the consortium's first president.[189]
In 2009, Intel announced that it planned to undertake an effort to remove conflict resources—materials sourced from mines whose profits are used to fund armed militant groups, particularly within the Democratic Republic of the Congo—from its supply chain. Intel sought conflict-free sources of the precious metals common to electronics from within the country, using a system of first- and third-party audits, as well as input from the Enough Project and other organizations. During a keynote address at Consumer Electronics Show 2014, Intel CEO at the time, Brian Krzanich, announced that the company's microprocessors would henceforth be conflict free. In 2016, Intel stated that it had expected its entire supply chain to be conflict-free by the end of the year.[190][191][192]
Intel is one of the biggest stakeholders in the self-driving car industry, having joined the race in mid 2017[193] after joining forces with Mobileye.[194] The company is also one of the first in the sector to research consumer acceptance, after an AAA report quoted a 78% nonacceptance rate of the technology in the US.[195]
Safety levels of the technology, the thought of ceding control to a machine, and the psychological comfort of passengers in such situations were the major discussion topics initially. Participants also stated that they did not want to see everything the car was doing, referring primarily to the steering wheel turning itself with no one in the driver's seat. Intel also learned that voice control is vital, and that the interface between humans and the machine eases the discomfort and restores some sense of control.[196] Intel included only 10 people in this study, which limits its credibility.[195] In a video posted on YouTube,[197] Intel acknowledged this and called for further testing.
Robert Noyce was Intel's CEO at its founding in 1968, followed by co-founder Gordon Moore in 1975. Andy Grove became the company's president in 1979 and added the CEO title in 1987 when Moore became chairman. In 1998, Grove succeeded Moore as chairman, and Craig Barrett, already company president, took over as CEO. On May 18, 2005, Barrett handed the reins of the company over to Paul Otellini, who had been the company president and COO and who was responsible for Intel's design win in the original IBM PC. The board of directors elected Otellini as president and CEO, and Barrett replaced Grove as chairman of the board. Grove stepped down as chairman but was retained as a special adviser. In May 2009, Barrett stepped down as chairman of the board and was succeeded by Jane Shaw. In May 2012, Intel vice chairman Andy Bryant, who had held the posts of CFO (1994) and Chief Administrative Officer (2007) at Intel, succeeded Shaw as executive chairman.[198]
In November 2012, president and CEO Paul Otellini announced that he would step down in May 2013 at the age of 62, three years before the company's mandatory retirement age. During a six-month transition period, Intel's board of directors commenced a search process for the next CEO, in which it considered both internal managers and external candidates such as Sanjay Jha and Patrick Gelsinger.[199] Financial results revealed that, under Otellini, Intel's revenue increased by 55.8 percent (US$34.2 to 53.3 billion), while its net income increased by 46.7% (US$7.5 billion to 11 billion).[200]
On May 2, 2013, Executive Vice President and COO Brian Krzanich was elected as Intel's sixth CEO,[201] a selection that became effective on May 16, 2013, at the company's annual meeting. Reportedly, the board concluded that an insider could proceed with the role and exert an impact more quickly, without the need to learn Intel's processes, and Krzanich was selected on such a basis.[202] Intel's software head Renée James was selected as president of the company, a role that is second to the CEO position.[203]
As of May 2013, Intel's board of directors consists of Andy Bryant, John Donahoe, Frank Yeary, Ambassador Charlene Barshefsky, Susan Decker, Reed Hundt, Paul Otellini, James Plummer, David Pottruck, David Yoffie, and creative director will.i.am. The board was described by former Financial Times journalist Tom Foremski as "an exemplary example of corporate governance of the highest order" and received a rating of ten from GovernanceMetrics International, a form of recognition that has only been awarded to twenty-one other corporate boards worldwide.[204]
On June 21, 2018, Intel announced the resignation of Brian Krzanich as CEO, following the exposure of a past relationship he had with an employee. Bob Swan was named interim CEO, as the board began a search for a permanent CEO.
On January 31, 2019, Swan transitioned from his role as CFO and interim CEO and was named by the Board as the 7th CEO to lead the company.[205]
As of 17 May 2020:[206]
As of 2017, Intel shares are mainly held by institutional investors (The Vanguard Group, BlackRock, Capital Group Companies, State Street Corporation, and others).[207]
The firm promotes very heavily from within, most notably in its executive suite. The company has resisted the trend toward outsider CEOs. Paul Otellini was a 30-year veteran of the company when he assumed the role of CEO. All of his top lieutenants have risen through the ranks after many years with the firm. In many cases, Intel's top executives have spent their entire working careers with Intel.[citation needed]
Intel has a mandatory retirement policy for its CEOs when they reach age 65. Andy Grove retired at 62, while both Robert Noyce and Gordon Moore retired at 58. Grove retired as Chairman and as a member of the board of directors in 2005 at age 68.
Intel's headquarters are located in Santa Clara, California, and the company has operations around the world. Its largest workforce concentration anywhere is in Washington County, Oregon[209] (in the Portland metropolitan area's "Silicon Forest"), with 18,600 employees at several facilities.[210] Outside the United States, the company has facilities in China, Costa Rica, Malaysia, Israel, Ireland, India, Russia, Argentina and Vietnam, in 63 countries and regions internationally. In the U.S. Intel employs significant numbers of people in California, Colorado, Massachusetts, Arizona, New Mexico, Oregon, Texas, Washington and Utah. In Oregon, Intel is the state's largest private employer.[210][211] The company is the largest industrial employer in New Mexico while in Arizona the company has over 10,000 employees.[citation needed]
Intel invests heavily in research in China and about 100 researchers – or 10% of the total number of researchers from Intel – are located in Beijing.[212]
In 2011, the Israeli government offered Intel $290 million to expand in the country. As a condition, Intel would employ 1,500 more workers in Kiryat Gat and between 600–1000 workers in the north.[213]
In January 2014, it was reported that Intel would cut about 5,000 jobs from its work force of 107,000. The announcement was made a day after it reported earnings that missed analyst targets.[214]
In March 2014, it was reported that Intel would embark upon a $6 billion plan to expand its activities in Israel. The plan calls for continued investment in existing and new Intel plants until 2030. As of 2014[update], Intel employs 10,000 workers at four development centers and two production plants in Israel.[215]
Intel has a Diversity Initiative, including employee diversity groups as well as supplier diversity programs.[216] Like many companies with employee diversity groups, they include groups based on race and nationality as well as sexual identity and religion. In 1994, Intel sanctioned one of the earliest corporate Gay, Lesbian, Bisexual, and Transgender employee groups,[217] and supports a Muslim employees group,[218] a Jewish employees group,[219] and a Bible-based Christian group.[220][221]
Intel has received a 100% rating on numerous Corporate Equality Indices released by the Human Rights Campaign including the first one released in 2002. In addition, the company is frequently named one of the 100 Best Companies for Working Mothers by Working Mother magazine.
In January 2015, Intel announced the investment of $300 million over the next five years to enhance gender and racial diversity in their own company as well as the technology industry as a whole.[222][223][224][225][226]
In February 2016, Intel released its Global Diversity & Inclusion 2015 Annual Report.[227] The male-female mix of US employees was reported as 75.2% men and 24.8% women. For US employees in technical roles, the mix was reported as 79.8% male and 20.1% female.[227] NPR reports that Intel is facing a retention problem (particularly for African Americans), not just a pipeline problem.[228]
In 2011, ECONorthwest conducted an economic impact analysis of Intel's economic contribution to the state of Oregon. The report found that in 2009 "the total economic impacts attributed to Intel's operations, capital spending, contributions and taxes amounted to almost $14.6 billion in activity, including $4.3 billion in personal income and 59,990 jobs".[229] Through multiplier effects, every 10 Intel jobs supported were found, on average, to create 31 jobs in other sectors of the economy.[230]
In Rio Rancho, New Mexico, Intel is the leading employer.[231] In 1997, a community partnership between Sandoval County and Intel Corporation funded and built Rio Rancho High School.[232][233]
In 2011, Intel Capital announced a new fund to support startups working on technologies in line with the company's concept for next generation notebooks.[234] The company is setting aside a $300 million fund to be spent over the next three to four years in areas related to ultrabooks.[234] Intel announced the ultrabook concept at Computex in 2011. The ultrabook is defined as a thin (less than 0.8 inches [~2 cm] thick[235]) notebook that utilizes Intel processors[235] and also incorporates tablet features such as a touch screen and long battery life.[234][235]
At the Intel Developer Forum in 2011, four Taiwanese ODMs showed prototype ultrabooks that used Intel's Ivy Bridge chips.[236] Intel planned to improve the power consumption of its chips for ultrabooks with the Ivy Bridge processors of 2013, which would have a default thermal design power of only 10 W.[237]
Intel's price goal for ultrabooks was below $1,000;[235] however, according to two presidents from Acer and Compaq, this goal would not be achieved if Intel did not lower the price of its chips.[238]
Intel has become one of the world's most recognizable computer brands following its long-running Intel Inside campaign. The idea for "Intel Inside" came out of a meeting between Intel and one of the major computer resellers, MicroAge.[239]
In the late 1980s, Intel's market share was being seriously eroded by upstart competitors such as Advanced Micro Devices (now AMD), Zilog, and others who had started to sell their less expensive microprocessors to computer manufacturers. This was because, by using cheaper processors, manufacturers could make cheaper computers and gain more market share in an increasingly price-sensitive market. In 1989, Intel's Dennis Carter visited MicroAge's headquarters in Tempe, Arizona, to meet with MicroAge's VP of marketing, Ron Mion. MicroAge had become one of the largest distributors of Compaq, IBM, HP, and others and thus was a primary – although indirect – driver of demand for microprocessors. Intel wanted MicroAge to petition its computer suppliers to favor Intel chips. However, Mion felt that the marketplace should decide which processors they wanted. Intel's counterargument was that it would be too difficult to educate PC buyers on why Intel microprocessors were worth paying more for – and they were right.[239] Mion felt that the public didn't really need to fully understand why Intel chips were better; they just needed to feel they were better. So Mion proposed a market test. Intel would pay for a MicroAge billboard somewhere saying, "If you're buying a personal computer, make sure it has Intel inside." In turn, MicroAge would put "Intel Inside" stickers on the Intel-based computers in their stores in that area. To make the test easier to monitor, Mion decided to run it in Boulder, Colorado, where MicroAge had a single store. Virtually overnight, the sales of personal computers in that store shifted dramatically to Intel-based PCs. Intel very quickly adopted "Intel Inside" as its primary branding and rolled it out worldwide.[239]
As is often the case with computer lore, other accounts of how the campaign evolved have been combined over time; "Intel Inside" has not escaped that tendency, and alternative explanations have circulated.
Intel's branding campaign started with "The Computer Inside" tagline in 1990 in the US and Europe. The Japan chapter of Intel proposed an "Intel in it" tagline and kicked off the Japanese campaign by hosting EKI-KON (meaning "Station Concert" in Japanese) at the Tokyo railway station dome on Christmas Day, December 25, 1990. Several months later, "The Computer Inside" incorporated the Japan idea to become "Intel Inside" which eventually elevated to the worldwide branding campaign in 1991, by Intel marketing manager Dennis Carter.[240] A case study, "Inside Intel Inside", was put together by Harvard Business School.[241] The five-note jingle was introduced in 1994 and by its tenth anniversary was being heard in 130 countries around the world. The initial branding agency for the "Intel Inside" campaign was DahlinSmithWhite Advertising of Salt Lake City. The Intel swirl logo was the work of DahlinSmithWhite art director Steve Grigg under the direction of Intel president and CEO Andy Grove.[citation needed]
The Intel Inside advertising campaign sought public brand loyalty and awareness of Intel processors in consumer computers.[242] Intel paid some of the advertiser's costs for an ad that used the Intel Inside logo and xylo-marimba jingle.[243]
In 2008, Intel planned to shift the emphasis of its Intel Inside campaign from traditional media such as television and print to newer media such as the Internet.[244] Intel required that a minimum of 35% of the money it provided to the companies in its co-op program be used for online marketing.[244] The Intel 2010 annual financial report indicated that $1.8 billion (6% of the gross margin and nearly 16% of the total net income) was allocated to all advertising with Intel Inside being part of that.[245]
The famous D♭ D♭ G♭ D♭ A♭ xylophone/xylomarimba jingle, sonic logo, tag, audio mnemonic was produced by Musikvergnuegen and written by Walter Werzowa, once a member of the Austrian 1980s sampling band Edelweiss.[246] The sonic Intel logo was remade in 1999 to coincide with the launch of the Pentium III, and a second time in 2004 to coincide with the new logo change (although it overlapped with the 1999 version and was not mainstreamed until the launch of the Core processors in 2006), with the melody unchanged. Advertisements for products featuring Intel processors with prominent MMX branding featured a version of the jingle with an embellishment (shining sound) after the final note.
In 2006, Intel expanded its promotion of open specification platforms beyond Centrino, to include the Viiv media center PC and the business desktop Intel vPro.
In mid-January 2006, Intel announced that it was dropping the long-running Pentium name from its processors. The Pentium name was first used for the P5-core Intel processors and was adopted to comply with court rulings that prevent the trademarking of a string of numbers, so that competitors could not simply call their processors by the same name, as had been done with the prior 386 and 486 processors (both of which had copies manufactured by IBM and AMD). Intel phased out the Pentium names from mobile processors first, when the new Yonah chips, branded Core Solo and Core Duo, were released. The desktop processors changed when the Core 2 line of processors was released. By 2009, Intel was using a good-better-best strategy, with Celeron being good, Pentium better, and the Intel Core family representing the best the company has to offer.[247]
According to spokesman Bill Calder, Intel has maintained only the Celeron brand, the Atom brand for netbooks and the vPro lineup for businesses. Since late 2009, Intel's mainstream processors have been called Celeron, Pentium, Core i3, Core i5, Core i7, and Core i9 in order of performance from lowest to highest. First-generation Core products carry a three-digit name, such as the i5 750, and second-generation products carry a four-digit name, such as the i5 2500. In both cases, a "K" suffix (for instance, 2500K) indicates an unlocked processor, enabling additional overclocking capability. vPro products carry the Intel Core i7 vPro processor or the Intel Core i5 vPro processor name.[248] In October 2011, Intel started to sell its Core i7-2700K "Sandy Bridge" chip to customers worldwide.[249]
Since 2010, "Centrino" has been applied only to Intel's WiMAX and Wi-Fi technologies.[248]
Neo Sans Intel is a customized version of Neo Sans, based on the Neo Sans and Neo Tech typefaces designed by Sebastian Lester in 2004.[250]
Intel Clear is a global font announced in 2014, designed to be used across all of the company's communications.[251][252] The font family was designed by Red Peak Branding and Dalton Maag Ltd.[253] Initially available in Latin, Greek and Cyrillic scripts, it replaced Neo Sans Intel as the company's corporate typeface.[254][255] Intel Clear Hebrew and Intel Clear Arabic were later added by Dalton Maag Ltd.[256]
A brand book, produced by Red Peak Branding as part of the new brand identity campaign, celebrates Intel's achievements while setting a new standard for what Intel looks, feels and sounds like.[257]
Intel has participated significantly in open source communities since 1999.[258][self-published source] For example, in 2006 Intel released MIT-licensed X.org drivers for the integrated graphics cards of its i965 family of chipsets. Intel released FreeBSD drivers for some networking cards,[259] available under a BSD-compatible license,[260] which were also ported to OpenBSD.[260] Binary firmware files for non-wireless Ethernet devices were also released under a BSD licence allowing free redistribution.[261] Intel ran the Moblin project until April 23, 2009, when it handed the project over to the Linux Foundation. Intel also runs the LessWatts.org campaigns.[262]
However, after the release of the wireless products called Intel Pro/Wireless 2100, 2200BG/2225BG/2915ABG and 3945ABG in 2005, Intel was criticized for not granting free redistribution rights for the firmware that must be included in the operating system for the wireless devices to operate.[263] As a result of this, Intel became a target of campaigns to allow free operating systems to include binary firmware on terms acceptable to the open source community. Linspire-Linux creator Michael Robertson outlined the difficult position that Intel was in releasing to open source, as Intel did not want to upset their large customer Microsoft.[264] Theo de Raadt of OpenBSD also claimed that Intel is being "an Open Source fraud" after an Intel employee presented a distorted view of the situation at an open-source conference.[265] In spite of the significant negative attention Intel received as a result of the wireless dealings, the binary firmware still has not gained a license compatible with free software principles.[266]
The Firmware Support Package (FSP) is a proprietary firmware library developed by Intel for platform initialization and can be integrated into other firmware.[267]
Due to declining PC sales, in 2016 Intel cut 12,000 jobs.[268]
In October 2006, a Transmeta lawsuit was filed against Intel for patent infringement on computer architecture and power efficiency technologies.[269] The lawsuit was settled in October 2007, with Intel agreeing to pay US$150 million initially and US$20 million per year for the next five years. Both companies agreed to drop lawsuits against each other, while Intel was granted a perpetual non-exclusive license to use current and future patented Transmeta technologies in its chips for 10 years.[270]
In September 2005, Intel filed a response to an AMD lawsuit,[271] disputing AMD's claims, and claiming that Intel's business practices are fair and lawful. In a rebuttal, Intel deconstructed AMD's offensive strategy and argued that AMD struggled largely as a result of its own bad business decisions, including underinvestment in essential manufacturing capacity and excessive reliance on contracting out chip foundries.[272] Legal analysts predicted the lawsuit would drag on for a number of years since Intel's initial response indicated its unwillingness to settle with AMD.[273][274] In 2008 a court date was finally set,[275] but in 2009, Intel settled with a $1.25 billion payout to AMD (see below).[276]
On November 4, 2009, New York's attorney general filed an antitrust lawsuit against Intel Corp, claiming the company used "illegal threats and collusion" to dominate the market for computer microprocessors.
On November 12, 2009, AMD agreed to drop the antitrust lawsuit against Intel in exchange for $1.25 billion.[276] A joint press release published by the two chip makers stated "While the relationship between the two companies has been difficult in the past, this agreement ends the legal disputes and enables the companies to focus all of our efforts on product innovation and development."[277][278]
An antitrust lawsuit[279] and a class-action suit relating to cold calling employees of other companies have been settled.[280]
In 2005, the local Fair Trade Commission found that Intel violated the Japanese Antimonopoly Act. The commission ordered Intel to eliminate discounts that had discriminated against AMD. To avoid a trial, Intel agreed to comply with the order.[281][282][283][284]
In July 2007, the European Commission accused Intel of anti-competitive practices, mostly against AMD.[285] The allegations, going back to 2003, include giving preferential prices to computer makers buying most or all of their chips from Intel, paying computer makers to delay or cancel the launch of products using AMD chips, and providing chips at below standard cost to governments and educational institutions.[286] Intel responded that the allegations were unfounded and instead qualified its market behavior as consumer-friendly.[286] General counsel Bruce Sewell responded that the Commission had misunderstood some factual assumptions as to pricing and manufacturing costs.[287]
In February 2008, Intel stated that its office in Munich had been raided by European Union regulators. Intel reported that it was cooperating with investigators.[288] Intel faced a fine of up to 10% of its annual revenue, if found guilty of stifling competition.[289] AMD subsequently launched a website promoting these allegations.[290][291] In June 2008, the EU filed new charges against Intel.[292] In May 2009, the EU found that Intel had engaged in anti-competitive practices and subsequently fined Intel €1.06 billion (US$1.44 billion), a record amount. Intel was found to have paid companies, including Acer, Dell, HP, Lenovo and NEC,[293] to exclusively use Intel chips in their products, and therefore harmed other companies including AMD.[293][294][295] The European Commission said that Intel had deliberately acted to keep competitors out of the computer chip market and in doing so had made a "serious and sustained violation of the EU's antitrust rules".[293] In addition to the fine, Intel was ordered by the Commission to immediately cease all illegal practices.[293] Intel has stated that they will appeal against the Commission's verdict. In June 2014, the General Court, which sits below the European Court of Justice, rejected the appeal.[293]
In September 2007, South Korean regulators accused Intel of breaking antitrust law. The investigation began in February 2006, when officials raided Intel's South Korean offices. The company risked a penalty of up to 3% of its annual sales, if found guilty.[296] In June 2008, the Fair Trade Commission ordered Intel to pay a fine of US$25.5 million for taking advantage of its dominant position to offer incentives to major Korean PC manufacturers on the condition of not buying products from AMD.[297]
New York started an investigation of Intel in January 2008 on whether the company violated antitrust laws in pricing and sales of its microprocessors.[298] In June 2008, the Federal Trade Commission also began an antitrust investigation of the case.[299] In December 2009, the FTC announced it would initiate an administrative proceeding against Intel in September 2010.[300][301][302][303]
In November 2009, following a two-year investigation, New York Attorney General Andrew Cuomo sued Intel, accusing them of bribery and coercion, claiming that Intel bribed computer makers to buy more of their chips than those of their rivals, and threatened to withdraw these payments if the computer makers were perceived as working too closely with its competitors. Intel has denied these claims.[304]
On July 22, 2010, Dell agreed to a settlement with the U.S. Securities and Exchange Commission (SEC) to pay $100M in penalties resulting from charges that Dell did not accurately disclose accounting information to investors. In particular, the SEC charged that from 2002 to 2006, Dell had an agreement with Intel to receive rebates in exchange for not using chips manufactured by AMD. These substantial rebates were not disclosed to investors, but were used to help meet investor expectations regarding the company's financial performance; "These exclusivity payments grew from 10 percent of Dell's operating income in FY 2003 to 38 percent in FY 2006, and peaked at 76 percent in the first quarter of FY 2007."[305] Dell eventually did adopt AMD as a secondary supplier in 2006, and Intel subsequently stopped their rebates, causing Dell's financial performance to fall.[306][307][308]
Intel has been accused by some residents of Rio Rancho, New Mexico of allowing VOCs to be released in excess of their pollution permit. One resident claimed that a release of 1.4 tons of carbon tetrachloride was measured from one acid scrubber during the fourth quarter of 2003 but an emission factor allowed Intel to report no carbon tetrachloride emissions for all of 2003.[309]
Another resident alleges that Intel was responsible for the release of other VOCs from their Rio Rancho site and that a necropsy of lung tissue from two deceased dogs in the area indicated trace amounts of toluene, hexane, ethylbenzene, and xylene isomers,[310] all of which are solvents used in industrial settings but also commonly found in gasoline, retail paint thinners and retail solvents. During a sub-committee meeting of the New Mexico Environment Improvement Board, a resident claimed that Intel's own reports documented more than 1,580 pounds (720 kg) of VOCs were released in June and July 2006.[311]
Intel's environmental performance is published annually in their corporate responsibility report.[312]
In its 2012 rankings on the progress of consumer electronics companies relating to conflict minerals, the Enough Project rated Intel the best of 24 companies, calling it a "Pioneer of progress".[313] In 2014, chief executive Brian Krzanich urged the rest of the industry to follow Intel's lead by also shunning conflict minerals.[314]
Intel has faced complaints of age discrimination in firing and layoffs. Intel was sued in 1993 by nine former employees, over allegations that they were laid off because they were over the age of 40.[315]
A group called FACE Intel (Former and Current Employees of Intel) claims that Intel weeds out older employees. FACE Intel claims that more than 90 percent of people who have been laid off or fired from Intel are over the age of 40. Upside magazine requested data from Intel breaking out its hiring and firing by age, but the company declined to provide any.[316] Intel has denied that age plays any role in its employment practices.[317] FACE Intel was founded by Ken Hamidi, who was fired from Intel in 1995 at the age of 47.[316] Hamidi was blocked in a 1999 court decision from using Intel's email system to distribute criticism of the company to employees;[318] that decision was overturned in 2003 in Intel Corp. v. Hamidi.
In August 2016, Indian officials of the Bruhat Bengaluru Mahanagara Palike (BBMP) parked garbage trucks on Intel's campus and threatened to dump them for evading payment of property taxes between 2007 and 2008, to the tune of 340 million Indian rupees (US$4.9 million). Intel had reportedly been paying taxes as a non-air-conditioned office, when the campus in fact had central air conditioning. Other factors, such as land acquisition and construction improvements, added to the tax burden. Previously, Intel had appealed the demand in the Karnataka high court in July, during which the court ordered Intel to pay BBMP half the owed amount (170 million rupees, or US$2.4 million) plus arrears by August 28 of that year.[319][320]
en/275.html.txt
ADDED
An antibiotic is a type of antimicrobial substance active against bacteria. It is the most important type of antibacterial agent for fighting bacterial infections, and antibiotic medications are widely used in the treatment and prevention of such infections.[1][2] They may either kill or inhibit the growth of bacteria. A limited number of antibiotics also possess antiprotozoal activity.[3][4] Antibiotics are not effective against viruses such as the common cold or influenza;[5] drugs which inhibit viruses are termed antiviral drugs or antivirals rather than antibiotics.
Sometimes, the term antibiotic—literally "opposing life", from the Greek roots ἀντι anti, "against" and βίος bios, "life"—is broadly used to refer to any substance used against microbes, but in the usual medical usage, antibiotics (such as penicillin) are those produced naturally (by one microorganism fighting another), whereas nonantibiotic antibacterials (such as sulfonamides and antiseptics) are fully synthetic. However, both classes have the same goal of killing or preventing the growth of microorganisms, and both are included in antimicrobial chemotherapy. "Antibacterials" include antiseptic drugs, antibacterial soaps, and chemical disinfectants, whereas antibiotics are an important class of antibacterials used more specifically in medicine[6] and sometimes in livestock feed.
Antibiotics have been used since ancient times. Many civilizations used topical application of mouldy bread, with many references to its beneficial effects arising from ancient Egypt, Nubia, China, Serbia, Greece, and Rome. The first person to directly document the use of moulds to treat infections was John Parkinson (1567–1650). Antibiotics revolutionized medicine in the 20th century. Alexander Fleming (1881–1955) discovered modern day penicillin in 1928, the widespread use of which proved significantly beneficial during wartime. However, the effectiveness and easy access to antibiotics have also led to their overuse[7] and some bacteria have evolved resistance to them.[1][8][9][10] The World Health Organization has classified antimicrobial resistance as a widespread "serious threat [that] is no longer a prediction for the future, it is happening right now in every region of the world and has the potential to affect anyone, of any age, in any country".[11]
Antibiotics are used to treat or prevent bacterial infections,[12] and sometimes protozoan infections. (Metronidazole is effective against a number of parasitic diseases). When an infection is suspected of being responsible for an illness but the responsible pathogen has not been identified, an empiric therapy is adopted.[13] This involves the administration of a broad-spectrum antibiotic based on the signs and symptoms presented and is initiated pending laboratory results that can take several days.[12][13]
When the responsible pathogenic microorganism is already known or has been identified, definitive therapy can be started. This will usually involve the use of a narrow-spectrum antibiotic. The choice of antibiotic given will also be based on its cost. Identification is critically important as it can reduce the cost and toxicity of the antibiotic therapy and also reduce the possibility of the emergence of antimicrobial resistance.[13] To avoid surgery, antibiotics may be given for non-complicated acute appendicitis.[14]
Antibiotics may be given as a preventive measure, and this is usually limited to at-risk populations such as those with a weakened immune system (particularly in HIV cases to prevent pneumonia), those taking immunosuppressive drugs, cancer patients, and those having surgery.[12] Their use in surgical procedures is to help prevent infection of incisions. They have an important role in dental antibiotic prophylaxis, where their use may prevent bacteremia and consequent infective endocarditis. Antibiotics are also used to prevent infection in cases of neutropenia, particularly cancer-related neutropenia.[15][16]
There are many different routes of administration for antibiotic treatment. Antibiotics are usually taken by mouth. In more severe cases, particularly deep-seated systemic infections, antibiotics can be given intravenously or by injection.[1][13] Where the site of infection is easily accessed, antibiotics may be given topically in the form of eye drops onto the conjunctiva for conjunctivitis or ear drops for ear infections and acute cases of swimmer's ear. Topical use is also one of the treatment options for some skin conditions including acne and cellulitis.[17] Advantages of topical application include achieving a high and sustained concentration of antibiotic at the site of infection, reducing the potential for systemic absorption and toxicity, and reducing the total volume of antibiotic required, thereby also reducing the risk of antibiotic misuse.[18] Topical antibiotics applied over certain types of surgical wounds have been reported to reduce the risk of surgical site infections.[19] However, there are certain general causes for concern with topical administration of antibiotics. Some systemic absorption of the antibiotic may occur; the quantity of antibiotic applied is difficult to accurately dose, and there is also the possibility of local hypersensitivity reactions or contact dermatitis occurring.[18] It is recommended to administer antibiotics as soon as possible, especially in life-threatening infections. Many emergency departments stock antibiotics for this purpose.[20]
Antibiotic consumption varies widely between countries. The WHO report on surveillance of antibiotic consumption, published in 2018, analysed 2015 data from 65 countries. As measured in defined daily doses per 1,000 inhabitants per day, Mongolia had the highest consumption, with a rate of 64.4, and Burundi had the lowest, at 4.4. Amoxicillin and amoxicillin/clavulanic acid were the most frequently consumed antibiotics.[21]
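To make the consumption metric above concrete, the following minimal sketch (in Python) shows how a rate in defined daily doses (DDD) per 1,000 inhabitants per day can be calculated; the drug, the DDD value, and the population and usage figures are illustrative assumptions, not data from the WHO report.

    # Illustrative sketch: antibiotic consumption expressed in defined daily
    # doses (DDD) per 1,000 inhabitants per day. All figures below are assumed.
    def ddd_per_1000_per_day(total_grams, ddd_grams, population, days=365):
        total_ddd = total_grams / ddd_grams          # doses consumed over the period
        return total_ddd / (population / 1000) / days

    # Assumed example: 9 tonnes of amoxicillin used in one year (assumed DDD of
    # 1.5 g) by a population of 5 million people.
    rate = ddd_per_1000_per_day(total_grams=9_000_000, ddd_grams=1.5,
                                population=5_000_000)
    print(f"{rate:.1f} DDD per 1,000 inhabitants per day")   # about 3.3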
Antibiotics are screened for any negative effects before their approval for clinical use, and are usually considered safe and well tolerated. However, some antibiotics have been associated with a wide range of adverse side effects, ranging from mild to very severe depending on the type of antibiotic used, the microbes targeted, and the individual patient.[22][23] Side effects may reflect the pharmacological or toxicological properties of the antibiotic or may involve hypersensitivity or allergic reactions.[4] Adverse effects range from fever and nausea to major allergic reactions, including photodermatitis and anaphylaxis.[24] Safety profiles of newer drugs are often not as well established as for those that have a long history of use.[22]
Common side-effects include diarrhea, which results from disruption of the species composition of the intestinal flora and can lead, for example, to overgrowth of pathogenic bacteria such as Clostridium difficile.[25] Antibacterials can also affect the vaginal flora, and may lead to overgrowth of yeast species of the genus Candida in the vulvo-vaginal area.[26] Additional side effects can result from interaction with other drugs, such as the possibility of tendon damage from the administration of a quinolone antibiotic with a systemic corticosteroid.[27]
Some antibiotics may also damage the mitochondrion, a bacteria-derived organelle found in eukaryotic cells, including human cells. Mitochondrial damage causes oxidative stress in cells and has been suggested as a mechanism for side effects from fluoroquinolones.[28] They are also known to affect chloroplasts.[29]
Exposure to antibiotics early in life is associated with increased body mass in humans and mouse models.[30] Early life is a critical period for the establishment of the intestinal microbiota and for metabolic development.[31] Mice exposed to subtherapeutic antibiotic treatment – with either penicillin, vancomycin, or chlortetracycline – had altered composition of the gut microbiota as well as altered metabolic capabilities.[32] One study has reported that mice given low-dose penicillin (1 μg/g body weight) around birth and throughout the weaning process had an increased body mass and fat mass, accelerated growth, and increased hepatic expression of genes involved in adipogenesis, compared to control mice.[33] In addition, penicillin in combination with a high-fat diet increased fasting insulin levels in mice.[33] However, it is unclear whether or not antibiotics cause obesity in humans. Studies have found a correlation between early exposure to antibiotics (<6 months) and increased body mass (at 10 and 20 months).[34] Another study found that the type of antibiotic exposure was also significant, with the highest risk of being overweight in those given macrolides compared to penicillin and cephalosporin.[35] Therefore, there is a correlation between antibiotic exposure in early life and obesity in humans, but whether or not there is a causal relationship remains unclear. Although there is a correlation between antibiotic use in early life and obesity, the effect of antibiotics on obesity in humans needs to be weighed against the beneficial effects of clinically indicated treatment with antibiotics in infancy.[31]
There are few well-controlled studies on whether antibiotic use increases the risk of oral contraceptive failure.[36] The majority of studies indicate antibiotics do not interfere with birth control pills,[37] such as clinical studies that suggest the failure rate of contraceptive pills caused by antibiotics is very low (about 1%).[38] Situations that may increase the risk of oral contraceptive failure include non-compliance (missed pills), vomiting, and diarrhea; gastrointestinal disorders and interpatient variability in oral contraceptive absorption can also affect ethinylestradiol serum levels in the blood.[36] Women with menstrual irregularities may be at higher risk of failure and should be advised to use backup contraception during antibiotic treatment and for one week after its completion. If patient-specific risk factors for reduced oral contraceptive efficacy are suspected, backup contraception is recommended.[36]
In cases where antibiotics have been suggested to affect the efficiency of birth control pills, such as for the broad-spectrum antibiotic rifampicin, these effects may be due to an increase in the activity of hepatic enzymes, causing increased breakdown of the pill's active ingredients.[37] Effects on the intestinal flora, which might result in reduced absorption of estrogens in the colon, have also been suggested, but such suggestions have been inconclusive and controversial.[39][40] Clinicians have recommended that extra contraceptive measures be applied during therapies using antibiotics that are suspected to interact with oral contraceptives.[37] More studies on the possible interactions between antibiotics and birth control pills (oral contraceptives) are required, as well as careful assessment of patient-specific risk factors for potential oral contraceptive pill failure, prior to dismissing the need for backup contraception.[36]
Interactions between alcohol and certain antibiotics may occur and may cause side effects and decreased effectiveness of antibiotic therapy.[41][42] While moderate alcohol consumption is unlikely to interfere with many common antibiotics, there are specific types of antibiotics, with which alcohol consumption may cause serious side effects.[43] Therefore, potential risks of side effects and effectiveness depend on the type of antibiotic administered.[44]
Antibiotics such as metronidazole, tinidazole, cephamandole, latamoxef, cefoperazone, cefmenoxime, and furazolidone, cause a disulfiram-like chemical reaction with alcohol by inhibiting its breakdown by acetaldehyde dehydrogenase, which may result in vomiting, nausea, and shortness of breath.[43] In addition, the efficacy of doxycycline and erythromycin succinate may be reduced by alcohol consumption.[45] Other effects of alcohol on antibiotic activity include altered activity of the liver enzymes that break down the antibiotic compound.[46]
The successful outcome of antimicrobial therapy with antibacterial compounds depends on several factors. These include host defense mechanisms, the location of infection, and the pharmacokinetic and pharmacodynamic properties of the antibacterial.[47] The bactericidal activity of antibacterials may depend on the bacterial growth phase, and it often requires ongoing metabolic activity and division of bacterial cells.[48] These findings are based on laboratory studies, and antibacterials have also been shown to eliminate bacterial infection in clinical settings.[47][49] Since the activity of antibacterials frequently depends on their concentration,[50] in vitro characterization of antibacterial activity commonly includes the determination of the minimum inhibitory concentration and minimum bactericidal concentration of an antibacterial.[47][51]
To predict clinical outcome, the antimicrobial activity of an antibacterial is usually combined with its pharmacokinetic profile, and several pharmacological parameters are used as markers of drug efficacy.[52]
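As a rough illustration of how such pharmacokinetic/pharmacodynamic markers are derived, the sketch below computes three commonly cited indices (fraction of time above the minimum inhibitory concentration, AUC/MIC, and Cmax/MIC) from an assumed serum concentration–time profile; all numbers are invented for illustration and do not describe any particular drug.

    # Minimal sketch: PK/PD indices from an assumed hourly serum concentration profile.
    concentrations = [8.0, 6.5, 5.2, 4.1, 3.3, 2.6, 2.1, 1.7, 1.3, 1.1, 0.9, 0.7]  # mg/L (assumed)
    mic = 1.0          # minimum inhibitory concentration, mg/L (assumed)
    interval_h = 1.0   # hours between samples

    time_above_mic = sum(interval_h for c in concentrations if c > mic)
    fraction_above_mic = time_above_mic / (len(concentrations) * interval_h)

    # Trapezoidal approximation of the area under the concentration-time curve (AUC).
    auc = sum((a + b) / 2 * interval_h for a, b in zip(concentrations, concentrations[1:]))

    print(f"fT>MIC  : {fraction_above_mic:.0%}")
    print(f"AUC/MIC : {auc / mic:.1f}")
    print(f"Cmax/MIC: {max(concentrations) / mic:.1f}")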
In important infectious diseases, including tuberculosis, combination therapy (i.e., the concurrent application of two or more antibiotics) has been used to delay or prevent the emergence of resistance. In acute bacterial infections, antibiotics as part of combination therapy are prescribed for their synergistic effects to improve treatment outcome, as the combined effect of both antibiotics is better than their individual effect.[53][54] Methicillin-resistant Staphylococcus aureus infections may be treated with a combination therapy of fusidic acid and rifampicin.[53] Antibiotics used in combination may also be antagonistic, and the combined effects of the two antibiotics may be less than if the individual antibiotic was given as part of a monotherapy.[53] For example, chloramphenicol and tetracyclines are antagonists to penicillins. However, this can vary depending on the species of bacteria.[55] In general, combinations of a bacteriostatic antibiotic and a bactericidal antibiotic are antagonistic.[53][54]
Antibiotics are commonly classified based on their mechanism of action, chemical structure, or spectrum of activity. Most target bacterial functions or growth processes.[56] Those that target the bacterial cell wall (penicillins and cephalosporins) or the cell membrane (polymyxins), or interfere with essential bacterial enzymes (rifamycins, lipiarmycins, quinolones, and sulfonamides) have bactericidal activities. Protein synthesis inhibitors (macrolides, lincosamides, and tetracyclines) are usually bacteriostatic (with the exception of bactericidal aminoglycosides).[57] Further categorization is based on their target specificity. "Narrow-spectrum" antibiotics target specific types of bacteria, such as gram-negative or gram-positive, whereas broad-spectrum antibiotics affect a wide range of bacteria. Following a 40-year break in discovering new classes of antibacterial compounds, four new classes of antibiotics have been brought into clinical use in the late 2000s and early 2010s: cyclic lipopeptides (such as daptomycin), glycylcyclines (such as tigecycline), oxazolidinones (such as linezolid), and lipiarmycins (such as fidaxomicin).[58][59]
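The classification described above can be restated as a simple lookup structure; the sketch below only rearranges the groupings named in the preceding paragraph and is not an exhaustive or authoritative taxonomy.

    # Mechanism-based grouping of the antibiotic classes named in the paragraph above.
    MECHANISM_GROUPS = {
        "cell wall synthesis": ["penicillins", "cephalosporins"],
        "cell membrane":       ["polymyxins"],
        "essential enzymes":   ["rifamycins", "lipiarmycins", "quinolones", "sulfonamides"],
        "protein synthesis":   ["macrolides", "lincosamides", "tetracyclines", "aminoglycosides"],
    }

    # Per the text above, the first three groups are described as bactericidal, while
    # protein-synthesis inhibitors are usually bacteriostatic (aminoglycosides excepted).
    for target, classes in MECHANISM_GROUPS.items():
        print(f"{target}: {', '.join(classes)}")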
With advances in medicinal chemistry, most modern antibacterials are semisynthetic modifications of various natural compounds.[60] These include, for example, the beta-lactam antibiotics, which include the penicillins (produced by fungi in the genus Penicillium), the cephalosporins, and the carbapenems. Compounds that are still isolated from living organisms are the aminoglycosides, whereas other antibacterials—for example, the sulfonamides, the quinolones, and the oxazolidinones—are produced solely by chemical synthesis.[60] Many antibacterial compounds are relatively small molecules with a molecular weight of less than 1000 daltons.[61]
Since the first pioneering efforts of Howard Florey and Chain in 1939, the importance of antibiotics, including antibacterials, to medicine has led to intense research into producing antibacterials at large scales. Following screening of antibacterials against a wide range of bacteria, production of the active compounds is carried out using fermentation, usually in strongly aerobic conditions.[citation needed]
The emergence of resistance of bacteria to antibiotics is a common phenomenon. Emergence of resistance often reflects evolutionary processes that take place during antibiotic therapy. The antibiotic treatment may select for bacterial strains with physiologically or genetically enhanced capacity to survive high doses of antibiotics. Under certain conditions, it may result in preferential growth of resistant bacteria, while growth of susceptible bacteria is inhibited by the drug.[62] For example, antibacterial selection for strains having previously acquired antibacterial-resistance genes was demonstrated in 1943 by the Luria–Delbrück experiment.[63] Antibiotics such as penicillin and erythromycin, which used to have a high efficacy against many bacterial species and strains, have become less effective, due to the increased resistance of many bacterial strains.[64]
Resistance may take the form of biodegradation of pharmaceuticals, such as sulfamethazine-degrading soil bacteria introduced to sulfamethazine through medicated pig feces.[65]
The survival of bacteria often results from an inheritable resistance,[66] but the growth of resistance to antibacterials also occurs through horizontal gene transfer. Horizontal transfer is more likely to happen in locations of frequent antibiotic use.[67]
Antibacterial resistance may impose a biological cost, thereby reducing fitness of resistant strains, which can limit the spread of antibacterial-resistant bacteria, for example, in the absence of antibacterial compounds. Additional mutations, however, may compensate for this fitness cost and can aid the survival of these bacteria.[68]
Paleontological data show that both antibiotics and the mechanisms of antibiotic resistance are ancient.[69] Useful antibiotic targets are those for which mutations negatively impact bacterial reproduction or viability.[70]
Several molecular mechanisms of antibacterial resistance exist. Intrinsic antibacterial resistance may be part of the genetic makeup of bacterial strains.[71][72] For example, an antibiotic target may be absent from the bacterial genome. Acquired resistance results from a mutation in the bacterial chromosome or the acquisition of extra-chromosomal DNA.[71] Antibacterial-producing bacteria have evolved resistance mechanisms that have been shown to be similar to, and may have been transferred to, antibacterial-resistant strains.[73][74] The spread of antibacterial resistance often occurs through vertical transmission of mutations during growth and by genetic recombination of DNA by horizontal genetic exchange.[66] For instance, antibacterial resistance genes can be exchanged between different bacterial strains or species via plasmids that carry these resistance genes.[66][75] Plasmids that carry several different resistance genes can confer resistance to multiple antibacterials.[75] Cross-resistance to several antibacterials may also occur when a resistance mechanism encoded by a single gene conveys resistance to more than one antibacterial compound.[75]
Antibacterial-resistant strains and species, sometimes referred to as "superbugs", now contribute to the emergence of diseases that had previously been well controlled. For example, emergent bacterial strains causing tuberculosis that are resistant to previously effective antibacterial treatments pose many therapeutic challenges. Every year, nearly half a million new cases of multidrug-resistant tuberculosis (MDR-TB) are estimated to occur worldwide.[76] For example, NDM-1 is a newly identified enzyme conveying bacterial resistance to a broad range of beta-lactam antibacterials.[77] The United Kingdom's Health Protection Agency has stated that "most isolates with NDM-1 enzyme are resistant to all standard intravenous antibiotics for treatment of severe infections."[78] On 26 May 2016, an E. coli "superbug" resistant to colistin, "the last line of defence" antibiotic, was identified in the United States.[79][80]
Per The ICU Book, "The first rule of antibiotics is to try not to use them, and the second rule is to try not to use too many of them."[81] Inappropriate antibiotic treatment and overuse of antibiotics have contributed to the emergence of antibiotic-resistant bacteria. Self-prescribing of antibiotics is an example of misuse.[82] Many antibiotics are frequently prescribed to treat symptoms or diseases that do not respond to antibiotics or that are likely to resolve without treatment. Also, incorrect or suboptimal antibiotics are prescribed for certain bacterial infections.[22][82] The overuse of antibiotics, like penicillin and erythromycin, has been associated with emerging antibiotic resistance since the 1950s.[64][83] Widespread usage of antibiotics in hospitals has also been associated with increases in bacterial strains and species that no longer respond to treatment with the most common antibiotics.[83]
Common forms of antibiotic misuse include excessive use of prophylactic antibiotics in travelers and failure of medical professionals to prescribe the correct dosage of antibiotics on the basis of the patient's weight and history of prior use. Other forms of misuse include failure to take the entire prescribed course of the antibiotic, incorrect dosage and administration, or failure to rest for sufficient recovery. An example of inappropriate antibiotic treatment is the prescription of antibiotics to treat viral infections such as the common cold. One study on respiratory tract infections found "physicians were more likely to prescribe antibiotics to patients who appeared to expect them".[84] Multifactorial interventions aimed at both physicians and patients can reduce inappropriate prescription of antibiotics.[85][86] The lack of rapid point-of-care diagnostic tests, particularly in resource-limited settings, is considered one of the drivers of antibiotic misuse.[87]
Several organizations concerned with antimicrobial resistance are lobbying to eliminate the unnecessary use of antibiotics.[82] The issues of misuse and overuse of antibiotics have been addressed by the formation of the US Interagency Task Force on Antimicrobial Resistance. This task force aims to actively address antimicrobial resistance, and is coordinated by the US Centers for Disease Control and Prevention, the Food and Drug Administration (FDA), and the National Institutes of Health, as well as other US agencies.[88] A non-governmental organization campaign group is Keep Antibiotics Working.[89] In France, an "Antibiotics are not automatic" government campaign started in 2002 and led to a marked reduction of unnecessary antibiotic prescriptions, especially in children.[90]
The emergence of antibiotic resistance has prompted restrictions on their use in the UK in 1970 (Swann report 1969), and the European Union has banned the use of antibiotics as growth-promotional agents since 2003.[91] Moreover, several organizations (including the World Health Organization, the National Academy of Sciences, and the U.S. Food and Drug Administration) have advocated restricting the amount of antibiotic use in food animal production.[92] However, commonly there are delays in regulatory and legislative actions to limit the use of antibiotics, attributable partly to resistance against such regulation by industries using or selling antibiotics, and to the time required for research to test causal links between their use and resistance to them. Two federal bills (S.742[93] and H.R. 2562[94]) aimed at phasing out nontherapeutic use of antibiotics in US food animals were proposed, but have not passed.[93][94] These bills were endorsed by public health and medical organizations, including the American Holistic Nurses' Association, the American Medical Association, and the American Public Health Association.[95]
Despite pledges by food companies and restaurants to reduce or eliminate meat that comes from animals treated with antibiotics, the purchase of antibiotics for use on farm animals has been increasing every year.[96]
There has been extensive use of antibiotics in animal husbandry. In the United States, the question of emergence of antibiotic-resistant bacterial strains due to use of antibiotics in livestock was raised by the US Food and Drug Administration (FDA) in 1977. In March 2012, the United States District Court for the Southern District of New York, ruling in an action brought by the Natural Resources Defense Council and others, ordered the FDA to revoke approvals for the use of antibiotics in livestock, which violated FDA regulations.[97]
Before the early 20th century, treatments for infections were based primarily on medicinal folklore. Mixtures with antimicrobial properties that were used in treatments of infections were described over 2,000 years ago.[98] Many ancient cultures, including the ancient Egyptians and ancient Greeks, used specially selected mold and plant materials to treat infections.[99][100] Nubian mummies studied in the 1990s were found to contain significant levels of tetracycline. The beer brewed at that time was conjectured to have been the source.[101]
The use of antibiotics in modern medicine began with the discovery of synthetic antibiotics derived from dyes.[56][102][103][104][105]
Synthetic antibiotic chemotherapy as a science and development of antibacterials began in Germany with Paul Ehrlich in the late 1880s.[56] Ehrlich noted certain dyes would color human, animal, or bacterial cells, whereas others did not. He then proposed the idea that it might be possible to create chemicals that would act as a selective drug that would bind to and kill bacteria without harming the human host. After screening hundreds of dyes against various organisms, in 1907, he discovered a medicinally useful drug, the first synthetic antibacterial organoarsenic compound salvarsan,[56][102][103] now called arsphenamine.
This heralded the era of antibacterial treatment that began with the discovery of a series of arsenic-derived synthetic antibiotics by both Alfred Bertheim and Ehrlich in 1907.[104][105] Ehrlich and Bertheim had experimented with various chemicals derived from dyes to treat trypanosomiasis in mice and spirochaeta infection in rabbits. While their early compounds were too toxic, Ehrlich and Sahachiro Hata, a Japanese bacteriologist working with Ehrlich in the quest for a drug to treat syphilis, achieved success with the 606th compound in their series of experiments. In 1910 Ehrlich and Hata announced their discovery, which they called drug "606", at the Congress for Internal Medicine at Wiesbaden.[106] The Hoechst company began to market the compound toward the end of 1910 under the name Salvarsan, now known as arsphenamine.[106] The drug was used to treat syphilis in the first half of the 20th century. In 1908, Ehrlich received the Nobel Prize in Physiology or Medicine for his contributions to immunology.[107] Hata was nominated for the Nobel Prize in Chemistry in 1911 and for the Nobel Prize in Physiology or Medicine in 1912 and 1913.[108]
The first sulfonamide and the first systemically active antibacterial drug, Prontosil, was developed by a research team led by Gerhard Domagk in 1932 or 1933 at the Bayer Laboratories of the IG Farben conglomerate in Germany,[105][109][103] for which Domagk received the 1939 Nobel Prize in Physiology or Medicine.[110] Sulfanilamide, the active drug of Prontosil, was not patentable as it had already been in use in the dye industry for some years.[109] Prontosil had a relatively broad effect against Gram-positive cocci, but not against enterobacteria. Research was stimulated apace by its success. The discovery and development of this sulfonamide drug opened the era of antibacterials.[111][112]
Observations about the growth of some microorganisms inhibiting the growth of other microorganisms have been reported since the late 19th century. These observations of antibiosis between microorganisms led to the discovery of natural antibacterials. Louis Pasteur observed, "if we could intervene in the antagonism observed between some bacteria, it would offer perhaps the greatest hopes for therapeutics".[113]
In 1874, physician Sir William Roberts noted that cultures of the mold Penicillium glaucum that is used in the making of some types of blue cheese did not display bacterial contamination.[114] In 1876, physicist John Tyndall also contributed to this field.[115] Pasteur conducted research showing that Bacillus anthracis would not grow in the presence of the related mold Penicillium notatum.
In 1895, the Italian physician Vincenzo Tiberio published a paper on the antibacterial power of some extracts of mold.[116]
In 1897, doctoral student Ernest Duchesne submitted a dissertation, "Contribution à l'étude de la concurrence vitale chez les micro-organismes: antagonisme entre les moisissures et les microbes" (Contribution to the study of vital competition in micro-organisms: antagonism between molds and microbes),[117] the first known scholarly work to consider the therapeutic capabilities of molds resulting from their anti-microbial activity. In his thesis, Duchesne proposed that bacteria and molds engage in a perpetual battle for survival. Duchesne observed that E. coli was eliminated by Penicillium glaucum when they were both grown in the same culture. He also observed that when he inoculated laboratory animals with lethal doses of typhoid bacilli together with Penicillium glaucum, the animals did not contract typhoid. Unfortunately Duchesne's army service after getting his degree prevented him from doing any further research.[118] Duchesne died of tuberculosis, a disease now treated by antibiotics.[118]
In 1928, Sir Alexander Fleming postulated the existence of penicillin, a molecule produced by certain molds that kills or stops the growth of certain kinds of bacteria. Fleming was working on a culture of disease-causing bacteria when he noticed the spores of a green mold, Penicillium chrysogenum, in one of his culture plates. He observed that the presence of the mold killed or prevented the growth of the bacteria.[119] Fleming postulated that the mold must secrete an antibacterial substance, which he named penicillin in 1928. Fleming believed that its antibacterial properties could be exploited for chemotherapy. He initially characterized some of its biological properties, and attempted to use a crude preparation to treat some infections, but he was unable to pursue its further development without the aid of trained chemists.[120][121]
Ernst Chain, Howard Florey and Edward Abraham succeeded in purifying the first penicillin, penicillin G, in 1942, but it did not become widely available outside the Allied military before 1945. Later, Norman Heatley developed the back extraction technique for efficiently purifying penicillin in bulk. The chemical structure of penicillin was first proposed by Abraham in 1942[122] and then later confirmed by Dorothy Crowfoot Hodgkin in 1945. Purified penicillin displayed potent antibacterial activity against a wide range of bacteria and had low toxicity in humans. Furthermore, its activity was not inhibited by biological constituents such as pus, unlike the synthetic sulfonamides. (see below) The development of penicillin led to renewed interest in the search for antibiotic compounds with similar efficacy and safety.[123] For their successful development of penicillin, which Fleming had accidentally discovered but could not develop himself, as a therapeutic drug, Chain and Florey shared the 1945 Nobel Prize in Medicine with Fleming.[124]
Florey credited Rene Dubos with pioneering the approach of deliberately and systematically searching for antibacterial compounds, which had led to the discovery of gramicidin and had revived Florey's research in penicillin.[125] In 1939, coinciding with the start of World War II, Dubos had reported the discovery of the first naturally derived antibiotic, tyrothricin, a compound of 20% gramicidin and 80% tyrocidine, from Bacillus brevis. It was one of the first commercially manufactured antibiotics and was very effective in treating wounds and ulcers during World War II.[125] Gramicidin, however, could not be used systemically because of toxicity. Tyrocidine also proved too toxic for systemic usage. Research results obtained during that period were not shared between the Axis and the Allied powers during World War II, and access remained limited during the Cold War.[126]
During the mid-20th century, the number of new antibiotic substances introduced for medical use increased significantly. From 1935 to 1968, 12 new classes were launched. However, after this, the number of new classes dropped markedly, with only two new classes introduced between 1969 and 2003.[127]
The term 'antibiosis', meaning "against life", was introduced by the French bacteriologist Jean Paul Vuillemin as a descriptive name of the phenomenon exhibited by these early antibacterial drugs.[56][128][129] Antibiosis was first described in 1877 in bacteria when Louis Pasteur and Robert Koch observed that an airborne bacillus could inhibit the growth of Bacillus anthracis.[128][130] These drugs were later renamed antibiotics by Selman Waksman, an American microbiologist, in 1942.[56][128][131]
The term antibiotic was first used in 1942 by Selman Waksman and his collaborators in journal articles to describe any substance produced by a microorganism that is antagonistic to the growth of other microorganisms in high dilution.[128][131] This definition excluded substances that kill bacteria but that are not produced by microorganisms (such as gastric juices and hydrogen peroxide). It also excluded synthetic antibacterial compounds such as the sulfonamides. In current usage, the term "antibiotic" is applied to any medication that kills bacteria or inhibits their growth, regardless of whether that medication is produced by a microorganism or not.[132][133]
The term "antibiotic" derives from anti + βιωτικός (biōtikos), "fit for life, lively",[134] which comes from βίωσις (biōsis), "way of life",[135] and that from βίος (bios), "life".[46][136] The term "antibacterial" derives from Greek ἀντί (anti), "against"[137] + βακτήριον (baktērion), diminutive of βακτηρία (baktēria), "staff, cane",[138] because the first bacteria to be discovered were rod-shaped.[139]
The increase in bacterial strains that are resistant to conventional antibacterial therapies, together with the decreasing number of new antibiotics currently being developed in the drug pipeline, has prompted the development of bacterial disease treatment strategies that are alternatives to conventional antibacterials.[140][141] Non-compound approaches (that is, products other than classical antibacterial agents) that target bacteria, or approaches that target the host, including phage therapy and vaccines, are also being investigated to combat the problem.[142]
One strategy to address bacterial drug resistance is the discovery and application of compounds that modify resistance to common antibacterials. Resistance modifying agents are capable of partly or completely suppressing bacterial resistance mechanisms.[143] For example, some resistance-modifying agents may inhibit multidrug resistance mechanisms, such as drug efflux from the cell, thus increasing the susceptibility of bacteria to an antibacterial.[143][144] Targets include:
Metabolic stimuli such as sugar can help eradicate a certain type of antibiotic-tolerant bacteria by keeping their metabolism active.[146]
Vaccines rely on immune modulation or augmentation. Vaccination either excites or reinforces the immune competence of a host to ward off infection, leading to the activation of macrophages, the production of antibodies, inflammation, and other classic immune reactions. Antibacterial vaccines have been responsible for a drastic reduction in global bacterial diseases.[147] Vaccines made from attenuated whole cells or lysates have been replaced largely by less reactogenic, cell-free vaccines consisting of purified components, including capsular polysaccharides and their conjugates to protein carriers, as well as inactivated toxins (toxoids) and proteins.[148]
Phage therapy is another method for treating antibiotic-resistant strains of bacteria. Phage therapy infects pathogenic bacteria with their own viruses. Bacteriophages and their host ranges are extremely specific for certain bacteria, thus, unlike antibiotics, they do not disturb the host organism and intestinal microflora.[149] Bacteriophages, also known simply as phages, infect and can kill bacteria and affect bacterial growth primarily during lytic cycles.[149][150] Phages insert their DNA into the bacterium, where it is transcribed and used to make new phages, after which the cell will lyse, releasing new phage that are able to infect and destroy further bacteria of the same strain.[150] The high specificity of phage protects "good" bacteria from destruction.
Some disadvantages to the use of bacteriophages also exist, however. Bacteriophages may harbour virulence factors or toxic genes in their genomes and, prior to use, it may be prudent to identify genes with similarity to known virulence factors or toxins by genomic sequencing. In addition, the oral and IV administration of phages for the eradication of bacterial infections poses a much higher safety risk than topical application. Also, there is the additional concern of uncertain immune responses to these large antigenic cocktails.
There are considerable regulatory hurdles that must be cleared for such therapies.[149] Despite numerous challenges, the use of bacteriophages as a replacement for antimicrobial agents against MDR pathogens that no longer respond to conventional antibiotics, remains an attractive option.[149][151]
Plants are an important source of antimicrobial compounds and traditional healers have long used plants to prevent or cure infectious diseases.[152][153] There is a recent renewed interest into the use of natural products for the identification of new members of the 'antibiotic-ome' (defined as natural products with antibiotic activity), and their application in antibacterial drug discovery in the genomics era.[140][154] Phytochemicals are the active biological component of plants and some phytochemicals including tannins, alkaloids, terpenoids, and flavonoids possess antimicrobial activity.[152][155][156] Some antioxidant dietary supplements also contain phytochemicals (polyphenols), such as grape seed extract, and demonstrate in vitro anti-bacterial properties.[157][158][159] Phytochemicals are able to inhibit peptidoglycan synthesis, damage microbial membrane structures, modify bacterial membrane surface hydrophobicity and also modulate quorum sensing.[155] With increasing antibiotic resistance in recent years, the potential of new plant-derived antibiotics is under investigation.[154]
Both the WHO and the Infectious Diseases Society of America have reported that the weak antibiotic pipeline does not match bacteria's increasing ability to develop resistance.[160][161] The Infectious Diseases Society of America report noted that the number of new antibiotics approved for marketing per year had been declining and identified seven antibiotics against Gram-negative bacilli then in phase 2 or phase 3 clinical trials. However, these drugs did not address the entire spectrum of resistance of Gram-negative bacilli.[162][163] According to the WHO, fifty-one new therapeutic entities (antibiotics, including combinations) were in phase 1–3 clinical trials as of May 2017.[160] Recent entries in the clinical pipeline targeting multidrug-resistant Gram-positive pathogens have improved treatment options through the marketing approval of new antibiotic classes, the oxazolidinones and cyclic lipopeptides. However, because resistance to these antibiotics is likely to occur, the development of new antibiotics against those pathogens remains a high priority.[164][160] Recent drugs in development that target Gram-negative bacteria have focused on re-working existing drugs to target specific microorganisms or specific types of resistance.[160]
A few antibiotics have received marketing authorization in the last seven years. The cephalosporin ceftaroline and the lipoglycopeptides oritavancin and telavancin were approved for the treatment of acute bacterial skin and skin structure infection and community-acquired bacterial pneumonia.[165] The lipoglycopeptide dalbavancin and the oxazolidinone tedizolid have also been approved for the treatment of acute bacterial skin and skin structure infection. The first in a new class of narrow-spectrum macrocyclic antibiotics, fidaxomicin, has been approved for the treatment of C. difficile colitis.[165] New cephalosporin–beta-lactamase inhibitor combinations have also been approved, including ceftazidime-avibactam and ceftolozane-tazobactam for complicated urinary tract infection and intra-abdominal infection.[165]
Streptomyces research is expected to provide new antibiotics,[166][167] including treatment against MRSA and infections resistant to commonly used medication. Efforts of John Innes Centre and universities in the UK, supported by BBSRC, resulted in the creation of spin-out companies, for example Novacta Biosystems, which has designed the type-b lantibiotic-based compound NVB302 (in phase 1) to treat Clostridium difficile infections.[168]
Possible improvements include clarification of clinical trial regulations by FDA. Furthermore, appropriate economic incentives could persuade pharmaceutical companies to invest in this endeavor.[163] In the US, the Antibiotic Development to Advance Patient Treatment (ADAPT) Act was introduced with the aim of fast tracking the drug development of antibiotics to combat the growing threat of 'superbugs'. Under this Act, FDA can approve antibiotics and antifungals treating life-threatening infections based on smaller clinical trials. The CDC will monitor the use of antibiotics and the emerging resistance, and publish the data. The FDA antibiotics labeling process, 'Susceptibility Test Interpretive Criteria for Microbial Organisms' or 'breakpoints', will provide accurate data to healthcare professionals.[169] According to Allan Coukell, senior director for health programs at The Pew Charitable Trusts, "By allowing drug developers to rely on smaller datasets, and clarifying FDA's authority to tolerate a higher level of uncertainty for these drugs when making a risk/benefit calculation, ADAPT would make the clinical trials more feasible."[170]
en/2750.html.txt
ADDED
@@ -0,0 +1,198 @@
The Internet is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP) to communicate between networks and devices. It is a network of networks that consists of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries a vast range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web (WWW), electronic mail, telephony, and file sharing.
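As a concrete illustration of one such service, the sketch below issues a minimal Web (HTTP) request over a plain TCP socket, showing an application-layer protocol riding on the TCP/IP suite described above; the host name example.com is only a conventional placeholder.

    # Minimal sketch: an HTTP request for one web page, carried over TCP/IP.
    import socket

    host = "example.com"   # placeholder host
    request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"

    with socket.create_connection((host, 80), timeout=10) as sock:   # TCP connection
        sock.sendall(request.encode("ascii"))                        # application-layer data
        response = b""
        while chunk := sock.recv(4096):                              # read until the server closes
            response += chunk

    print(response.split(b"\r\n", 1)[0].decode())                    # e.g. "HTTP/1.1 200 OK"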
The origins of the Internet date back to the development of packet switching and research commissioned by the United States Department of Defense in the 1960s to enable time-sharing of computers.[1] The primary precursor network, the ARPANET, initially served as a backbone for interconnection of regional academic and military networks in the 1970s. The funding of the National Science Foundation Network as a new backbone in the 1980s, as well as private funding for other commercial extensions, led to worldwide participation in the development of new networking technologies, and the merger of many networks.[2] The linking of commercial networks and enterprises by the early 1990s marked the beginning of the transition to the modern Internet,[3] and generated a sustained exponential growth as generations of institutional, personal, and mobile computers were connected to the network. Although the Internet was widely used by academia in the 1980s, commercialization incorporated its services and technologies into virtually every aspect of modern life.
Most traditional communication media, including telephony, radio, television, paper mail and newspapers are reshaped, redefined, or even bypassed by the Internet, giving birth to new services such as email, Internet telephony, Internet television, online music, digital newspapers, and video streaming websites. Newspaper, book, and other print publishing are adapting to website technology, or are reshaped into blogging, web feeds and online news aggregators. The Internet has enabled and accelerated new forms of personal interactions through instant messaging, Internet forums, and social networking. Online shopping has grown exponentially both for major retailers and small businesses and entrepreneurs, as it enables firms to extend their "brick and mortar" presence to serve a larger market or even sell goods and services entirely online. Business-to-business and financial services on the Internet affect supply chains across entire industries.
The Internet has no single centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own policies.[4] The overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.[5] In November 2006, the Internet was included on USA Today's list of New Seven Wonders.[6]
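To illustrate how the two name spaces mentioned above relate in practice, the short sketch below asks the local DNS resolver to map a human-readable domain name onto IP addresses; the domain shown is only a placeholder and the output depends on the resolver used.

    # Minimal sketch: resolving a domain name (DNS name space) to IP addresses
    # (IP address name space) through the operating system's resolver.
    import socket

    domain = "example.org"   # placeholder domain
    addresses = {info[4][0] for info in socket.getaddrinfo(domain, None)}
    for ip in sorted(addresses):
        print(f"{domain} -> {ip}")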
When the term Internet is used to refer to the specific global system of interconnected Internet Protocol (IP) networks, the word is a proper noun according to the Chicago Manual of Style[7] that should be written with an initial capital letter. In common use and the media, it is often not capitalized, viz. the internet. Some guides specify that the word should be capitalized when used as a noun, but not capitalized when used as an adjective.[8] The Internet is also often referred to as the Net, as a short form of network. Historically, as early as 1849, the word internetted was used uncapitalized as an adjective, meaning interconnected or interwoven.[9] The designers of early computer networks used internet both as a noun and as a verb in shorthand form of internetwork or internetworking, meaning interconnecting computer networks.[10][11]
The terms Internet and World Wide Web are often used interchangeably in everyday speech; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web or the Web is only one of a large number of Internet services.[12][13] The Web is a collection of interconnected documents (web pages) and other web resources, linked by hyperlinks and URLs.[14] The term Interweb is a portmanteau of Internet and World Wide Web typically used sarcastically to parody a technically unsavvy user.
The Advanced Research Projects Agency (ARPA) of the United States Department of Defense funded research into time-sharing of computers in the 1960s.[15][16][17] Meanwhile, research into packet switching, one of the fundamental Internet technologies, started in the work of Paul Baran in the early 1960s and, independently, Donald Davies in 1965.[1][18] Packet switching was incorporated into the proposed design for the ARPANET in 1967 and other networks such as the NPL network, the Merit Network, and CYCLADES, which were developed in the late 1960s and early 1970s.[19]
ARPANET development began with two network nodes which were interconnected between the Network Measurement Center at the University of California, Los Angeles (UCLA) Henry Samueli School of Engineering and Applied Science directed by Leonard Kleinrock, and the NLS system at SRI International (SRI) by Douglas Engelbart in Menlo Park, California, on 29 October 1969.[20] The third site was the Culler-Fried Interactive Mathematics Center at the University of California, Santa Barbara, followed by the University of Utah Graphics Department. In a sign of future growth, fifteen sites were connected to the young ARPANET by the end of 1971.[21][22] These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.
Early international collaborations for the ARPANET were rare. Connections were made in 1973 to the Norwegian Seismic Array (NORSAR) via a satellite station in Tanum, Sweden, and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks.[23][24] The ARPANET project and international working groups led to the development of various protocols and standards by which multiple separate networks could become a single network or "a network of networks".[25] In 1974, Vint Cerf and Bob Kahn used the term internet as a shorthand for internetwork in RFC 675,[11] and later RFCs repeated this use.[26] Cerf and Kahn credit Louis Pouzin with important influences on TCP/IP design.[27] Commercial PTT providers were concerned with developing X.25 public data networks.[28]
Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which permitted worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s.[29] The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89.[30][31][32][33] Although other network protocols such as UUCP had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers (ISPs) emerged in 1989 in the United States and Australia.[34] The ARPANET was decommissioned in 1990.
Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet.[35] Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were possible with satellites.[36] Six months later Tim Berners-Lee would begin writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9,[37] the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server,[38] and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994.[39] In 1996 OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe.[40] By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic.[41]
As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to exhibit growth characteristics similar to those of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser lightwave systems, and noise performance.[44]
Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web[45] with its discussion forums, blogs, social networking, and online shopping sites. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1-Gbit/s, 10-Gbit/s, or more. The Internet continues to grow, driven by ever greater amounts of online information and knowledge, commerce, entertainment and social networking.[46] During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%.[47] This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.[48] As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30.2% of world population).[49] It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication, by 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet.[50]
The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. ICANN coordinates the assignment of unique identifiers for use on the Internet, including domain names, Internet Protocol (IP) addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet.[51]
Regional Internet registries (RIRs) were established for five regions of the world. The African Network Information Center (AfriNIC) for Africa, the American Registry for Internet Numbers (ARIN) for North America, the Asia-Pacific Network Information Centre (APNIC) for Asia and the Pacific region, the Latin American and Caribbean Internet Addresses Registry (LACNIC) for Latin America and the Caribbean region, and the Réseaux IP Européens – Network Coordination Centre (RIPE NCC) for Europe, the Middle East, and Central Asia were delegated to assign Internet Protocol address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region.
The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016.[52][53][54][55] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world".[56] Its members include individuals (anyone may join) as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.
The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, modems, etc. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se; the Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard running across heterogeneous hardware, and with the packets guided to their destinations by IP routers.
Internet service providers (ISPs) establish the worldwide connectivity between individual networks at various levels of scope. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fibre optic cables, governed by peering agreements. Tier 2 and lower level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET. Both the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks.[57][disputed (for: unclear whether citation supports claim empirically) – discuss] Computers and routers use routing tables in their operating system to direct IP packets to the next-hop router or destination. Routing tables are maintained by manual configuration or automatically by routing protocols. End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.
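To make the role of routing tables concrete, the following is a minimal sketch of the longest-prefix-match rule that routers conceptually apply when choosing a next hop, written in Python with the standard ipaddress module; it is not any router's actual implementation, and the table entries and next-hop names are hypothetical.

# Minimal sketch of longest-prefix matching over a hypothetical routing table.
import ipaddress

routing_table = {
    ipaddress.ip_network("0.0.0.0/0"): "upstream ISP",        # default route
    ipaddress.ip_network("198.51.100.0/24"): "router A",
    ipaddress.ip_network("198.51.100.128/25"): "router B",    # more specific prefix
}

def next_hop(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    # Of all prefixes containing the address, pick the longest (most specific) one.
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("198.51.100.200"))   # -> "router B", the most specific match
print(next_hop("203.0.113.7"))      # -> "upstream ISP", via the default route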
An estimated 70 percent of the world's Internet traffic passes through Ashburn, Virginia.[58][59][60][61]
Common methods of Internet access by users include dial-up with a computer modem via telephone circuits, broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology (e.g. 3G, 4G). The Internet may often be accessed from computers in libraries and Internet cafes. Internet access points exist in many public places such as airport halls and coffee shops. Various terms are used, such as public Internet kiosk, public access terminal, and Web payphone. Many hotels also have public terminals that are usually fee-based. These terminals are widely accessed for various usages, such as ticket booking, bank deposit, or online payment. Wi-Fi provides wireless access to the Internet via local computer networks. Hotspots providing such access include Wi-Fi cafes, where users need to bring their own wireless devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based.
Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh, where the Internet can then be accessed from places such as a park bench.[62] Experiments have also been conducted with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular networks, and fixed wireless services. Modern smartphones can also access the Internet through the cellular carrier network. For Web browsing, these devices provide applications such as Google Chrome, Safari, and Firefox, and a wide variety of other Internet software may be installed from app stores. Internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016.[63]
The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012.[64] Mobile Internet connectivity has played an important role in expanding access in recent years especially in Asia and the Pacific and in Africa.[65] The number of unique mobile cellular subscriptions increased from 3.89 billion in 2012 to 4.83 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The number of subscriptions is predicted to rise to 5.69 billion users in 2020.[66] As of 2016[update], almost 60% of the world's population had access to a 4G broadband cellular network, up from almost 50% in 2015 and 11% in 2012[disputed – discuss].[66] The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect poorest users the most.[65]
Zero-rating, the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost, has offered opportunities to surmount economic hurdles, but has also been accused by its critics of creating a two-tiered Internet. To address the issues with zero-rating, an alternative model has emerged in the concept of 'equal rating' and is being tested in experiments by Mozilla and Orange in Africa. Equal rating prevents prioritization of one type of content and zero-rates all content up to a specified data cap. According to a study published by Chatham House, 15 out of 19 countries researched in Latin America had some kind of hybrid or zero-rated product offered. Some countries in the region had a handful of plans to choose from (across all mobile network operators) while others, such as Colombia, offered as many as 30 pre-paid and 34 post-paid plans.[67]
A study of eight countries in the Global South found that zero-rated data plans exist in every country, although there is a great range in the frequency with which they are offered and actually used in each.[68] The study looked at the top three to five carriers by market share in Bangladesh, Colombia, Ghana, India, Kenya, Nigeria, Peru and Philippines. Across the 181 plans examined, 13 per cent were offering zero-rated services. Another study, covering Ghana, Kenya, Nigeria and South Africa, found Facebook's Free Basics and Wikipedia Zero to be the most commonly zero-rated content.[69]
The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on its first two components). This is a model architecture that divides methods into a layered system of protocols, originally documented in RFC 1122 and RFC 1123.
The software layers correspond to the environment or scope in which their services operate. At the top is the application layer, space for the application-specific networking methods used in software applications. For example, a web browser program uses the client-server application model and a specific protocol of interaction between servers and clients, while many file-sharing systems use a peer-to-peer paradigm.
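As a hedged illustration of the client-server model at the application layer, the short Python sketch below fetches a page over HTTPS from example.com, a host name reserved for documentation; it is a sketch of the request-response pattern, not of any particular browser's internals, and it assumes network access.

# Minimal client-server example: an HTTP client asking a web server for a page.
import http.client

conn = http.client.HTTPSConnection("example.com", timeout=10)
conn.request("GET", "/")              # the application-layer request
response = conn.getresponse()         # the server's application-layer reply
print(response.status, response.reason)
print(response.read(200))             # first bytes of the returned HTML document
conn.close()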
Below this top layer, the transport layer connects applications on different hosts with a logical channel through the network with appropriate data exchange methods. It provides several services including ordered, reliable delivery (TCP), and an unreliable datagram service (UDP).
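The difference between those two services can be shown with a small Python sketch (assuming network access; example.com and the 192.0.2.1 documentation address are placeholders): a TCP socket establishes a connection and delivers bytes in order, while a UDP socket sends independent datagrams with no delivery guarantee.

# TCP: a connection is established before data flows; delivery is ordered and reliable.
import socket

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))
tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(tcp.recv(100))    # part of the server's reply, arriving in order
tcp.close()

# UDP: each datagram is sent on its own, with no connection and no delivery guarantee.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("192.0.2.1", 9999))   # 192.0.2.1 is a documentation-only address
udp.close()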
Underlying these layers are the networking technologies that interconnect networks at their borders and exchange traffic across them. The Internet layer implements the Internet Protocol which enables computers to identify and locate each other by Internet Protocol (IP) addresses, and route their traffic via intermediate (transit) networks.[70] The internet protocol layer code is independent of the type of network that it is physically running over.
At the bottom of the architecture is the link layer, which provides connectivity between hosts on the same network link. The link layer code is usually the only software part customized to the type of physical networking link protocol. Many link layers have been implemented, each operating over a particular type of network link, such as within a local area network (LAN) or a wide area network (WAN); common examples include Wi-Fi, Ethernet, dial-up connections and ATM.
The most prominent component of the Internet model is the Internet Protocol (IP). IP enables internetworking and, in essence, establishes the Internet itself. Two versions of the Internet Protocol exist: IPv4 and IPv6.
For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via DHCP or configured manually.
However, the network also supports other addressing systems. Users generally enter domain names (e.g. "en.wikipedia.org") instead of IP addresses because they are easier to remember; these names are converted by the Domain Name System (DNS) into IP addresses, which are more efficient for routing purposes.
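A minimal sketch of that name-to-address translation, using Python's standard library to ask the local resolver (the printed addresses depend on the network the script runs from):

# Ask DNS (via the system resolver) for the addresses behind a domain name.
import socket

infos = socket.getaddrinfo("en.wikipedia.org", 443, proto=socket.IPPROTO_TCP)
for family, _, _, _, sockaddr in infos:
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])   # the numeric address the Internet actually routes on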
Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number.[71] Internet Protocol Version 4 (IPv4) is the initial version used on the first generation of the Internet and is still in dominant use. It was designed to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011,[72] when the global IPv4 address allocation pool was exhausted.
Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998.[73][74][75] IPv6 deployment has been ongoing since the mid-2000s. IPv6 is currently in growing deployment around the world, since Internet address registries (RIRs) began to urge all resource managers to plan rapid adoption and conversion.[76]
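The difference in address-space size between the two versions is simple arithmetic; a quick check in Python:

# Worked arithmetic for the address-space figures above.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128
print(f"IPv4: {ipv4_addresses:,}")      # 4,294,967,296  (about 4.3 billion)
print(f"IPv6: {ipv6_addresses:.3e}")    # roughly 3.4e+38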
IPv6 is not directly interoperable by design with IPv4. In essence, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities must exist for internetworking or nodes must have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol. Network infrastructure, however, has been lagging in this development. Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts, e.g., peering agreements, and by technical specifications or protocols that describe the exchange of data over the network. Indeed, the Internet is defined by its interconnections and routing policies.
A subnetwork or subnet is a logical subdivision of an IP network.[77]:1,16 The practice of dividing a network into two or more networks is called subnetting.
Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.
The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.
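The same example prefixes can be checked with Python's standard ipaddress module, which makes the split between network bits and host bits explicit:

# Verifying the CIDR examples above with the standard library.
import ipaddress

net = ipaddress.ip_network("198.51.100.0/24")
print(net.network_address)      # 198.51.100.0
print(net.broadcast_address)    # 198.51.100.255
print(net.num_addresses)        # 256, because 8 host bits give 2**8 addresses
print(ipaddress.ip_address("198.51.100.42") in net)   # True

v6 = ipaddress.ip_network("2001:db8::/32")
print(v6.num_addresses == 2 ** 96)   # True: 128 - 32 = 96 host bits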
For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.
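A short sketch of that bitwise-AND relationship, using the example prefix above and an arbitrary host address (198.51.100.42) chosen only for illustration:

# Applying the subnet mask to an address recovers the routing prefix.
import ipaddress

address = int(ipaddress.ip_address("198.51.100.42"))
netmask = int(ipaddress.ip_address("255.255.255.0"))

prefix = address & netmask                 # bitwise AND keeps only the network bits
print(ipaddress.ip_address(prefix))        # 198.51.100.0, the routing prefix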
Traffic is exchanged between subnetworks through routers when the routing prefixes of the source address and the destination address differ. A router serves as a logical or physical boundary between the subnets.
The benefits of subnetting an existing network vary with each deployment scenario. In the address allocation architecture of the Internet using CIDR and in large organizations, it is necessary to allocate address space efficiently. Subnetting may also enhance routing efficiency, or have advantages in network management when subnetworks are administratively controlled by different entities in a larger organization. Subnets may be arranged logically in a hierarchical architecture, partitioning an organization's network address space into a tree-like routing structure.
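As a small illustration of such hierarchical partitioning, the sketch below splits the documentation /24 block into four /26 subnets that could, hypothetically, be delegated to different parts of an organization:

# Partitioning a /24 block into four /26 subnets.
import ipaddress

block = ipaddress.ip_network("198.51.100.0/24")
for subnet in block.subnets(new_prefix=26):
    print(subnet, "-", subnet.num_addresses, "addresses")
# 198.51.100.0/26, 198.51.100.64/26, 198.51.100.128/26, 198.51.100.192/26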
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the Internet Engineering Task Force (IETF).[78] The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. Resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices (BCP) when implementing Internet technologies.
The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services.
Most servers that provide these services are today hosted in data centers, and content is often accessed through high-performance content delivery networks.
The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. Hypertext Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems for information transfer, sharing and exchanging business data and logistics; it is one of many languages or protocols that can be used for communication on the Internet.[79]
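A brief sketch of how a URI decomposes into the parts a browser and a web server act on, using Python's urllib.parse on a hypothetical Wikipedia URL:

# Splitting a URI into its components.
from urllib.parse import urlparse

uri = urlparse("https://en.wikipedia.org/wiki/Internet?action=view#History")
print(uri.scheme)     # https - the access protocol
print(uri.netloc)     # en.wikipedia.org - the web server, resolved via DNS
print(uri.path)       # /wiki/Internet - the resource on that server
print(uri.query)      # action=view
print(uri.fragment)   # History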
World Wide Web browser software, such as Microsoft's Internet Explorer/Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, lets users navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain any combination of computer data, including graphics, sounds, text, video, multimedia and interactive content that runs while the user is interacting with the page. Client-side software can include animations, games, office applications and scientific demonstrations. Through keyword-driven Internet research using search engines like Yahoo!, Bing and Google, users worldwide have easy, instant access to a vast and diverse amount of online information. Compared to printed media, books, encyclopedias and traditional libraries, the World Wide Web has enabled the decentralization of information on a large scale.
The Web has enabled individuals and organizations to publish ideas and information to a potentially large audience online at greatly reduced expense and time delay. Publishing a web page, a blog, or building a website involves little initial cost and many cost-free services are available. However, publishing and maintaining large, professional web sites with attractive, diverse and up-to-date information is still a difficult and expensive proposition. Many individuals and some companies and groups use web logs or blogs, which are largely used as easily updatable online diaries. Some commercial organizations encourage staff to communicate advice in their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result.
Advertising on popular web pages can be lucrative, and e-commerce, which is the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television.[80]:19 Many common online advertising practices are controversial and increasingly subject to regulation.
When the Web developed in the 1990s, a typical web page was stored in completed form on a web server, formatted in HTML, complete for transmission to a web browser in response to a request. Over time, the process of creating and serving web pages has become dynamic, creating a flexible design, layout, and content. Websites are often created using content management software with, initially, very little content. Contributors to these systems, who may be paid staff, members of an organization or the public, fill underlying databases with content using editing pages designed for that purpose while casual visitors view and read this content in HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.
Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet.[81][82] Pictures, documents, and other files are sent as email attachments. Email messages can be cc-ed to multiple email addresses.
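A minimal sketch of composing such a message with a Cc recipient and an attachment, using Python's standard email library; the addresses, file bytes, and mail server name are placeholders, and the actual hand-off to a server is left commented out:

# Composing an email with a Cc recipient and a file attachment.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.org"
msg["Cc"] = "carol@example.org"           # additional recipient, as described above
msg["Subject"] = "Meeting notes"
msg.set_content("Notes attached.")
msg.add_attachment(b"fake file bytes", maintype="application",
                   subtype="octet-stream", filename="notes.pdf")
print(msg)   # the fully formed message, headers plus MIME parts

# Handing it to a mail server would then look like:
#     import smtplib
#     with smtplib.SMTP("smtp.example.org") as server:
#         server.send_message(msg)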
Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP). The idea began in the early 1990s with walkie-talkie-like voice applications for personal computers. VoIP systems now dominate many markets, and are as easy to use and as convenient as a traditional telephone. The benefit has been in substantial cost savings over traditional telephone calls, especially over long distances. Cable, ADSL, and mobile data networks provide Internet access in customer premises[83] and inexpensive VoIP network adapters provide the connection for traditional analog telephone sets. The voice quality of VoIP often exceeds that of traditional calls. Remaining problems for VoIP include the situation that emergency services may not be universally available, and that devices rely on a local power supply, while older traditional phones are powered from the local loop, and typically operate during a power failure.
File sharing is an example of transferring large amounts of data across the Internet. A computer file can be emailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or File Transfer Protocol (FTP) server for easy download by others. It can be put into a "shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. In any of these cases, access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by digital signatures or by MD5 or other message digests. These simple features of the Internet, over a worldwide basis, are changing the production, sale, and distribution of anything that can be reduced to a computer file for transmission. This includes all manner of print publications, software products, news, music, film, video, photography, graphics and the other arts. This in turn has caused seismic shifts in each of the existing industries that previously controlled the production and distribution of these products.
Streaming media is the real-time delivery of digital media for the immediate consumption or enjoyment by end users. Many radio and television broadcasters provide Internet feeds of their live audio and video productions. They may also allow time-shift viewing or listening such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access on-line media in much the same way as was previously possible only with a television or radio receiver. The range of available types of content is much wider, from specialized technical webcasts to on-demand popular multimedia services. Podcasting is a variation on this theme, where—usually audio—material is downloaded and played back on a computer or shifted to a portable media player to be listened to on the move. These techniques using simple equipment allow anybody, with little censorship or licensing control, to broadcast audio-visual material worldwide.
Digital media streaming increases the demand for network bandwidth. For example, standard image quality needs 1 Mbit/s link speed for SD 480p, HD 720p quality requires 2.5 Mbit/s, and the top-of-the-line HDX quality needs 4.5 Mbit/s for 1080p.[84]
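Those bit rates translate directly into data volume, since data transferred equals bit rate times time; a quick worked example in Python:

# Data volume implied by the streaming bit rates above.
rates_mbit_s = {"480p": 1.0, "720p": 2.5, "1080p": 4.5}

for quality, rate in rates_mbit_s.items():
    gigabytes_per_hour = rate * 3600 / 8 / 1000   # Mbit/s -> GB per hour
    print(f"{quality}: about {gigabytes_per_hour:.2f} GB per hour")
# roughly 0.45, 1.1 and 2.0 GB per hour respectively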
Webcams are a low-cost extension of this phenomenon. While some webcams can give full-frame-rate video, the picture is usually either small or slow to update. Internet users can watch animals around an African waterhole, ships in the Panama Canal, traffic at a local roundabout or monitor their own premises, live and in real time. Video chat rooms and video conferencing are also popular with many uses being found for personal webcams, with and without two-way sound. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with a vast number of users. It uses an HTML5 based web player by default to stream and show video files.[85] Registered users may upload an unlimited amount of video and build their own personal profile. YouTube claims that its users watch hundreds of millions, and upload hundreds of thousands of videos daily.
The Internet has enabled new forms of social interaction, activities, and social associations. This phenomenon has given rise to the scholarly study of the sociology of the Internet.
From 2000 to 2009, the number of Internet users globally rose from 394 million to 1.858 billion.[89] By 2010, 22 percent of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube.[90] In 2014 the world's Internet users surpassed 3 billion, or 43.6 percent of the world population, but two-thirds of the users came from the richest countries, with 78.0 percent of Europe's population using the Internet, followed by 57.4 percent of the Americas.[91] However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world coming from that region. The number of China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million Internet users.[92] By 2019, China was the world's leading country in terms of Internet users, with more than 800 million users, followed closely by India, with some 700 million users, with the United States a distant third with 275 million users. However, in terms of penetration, China has[when?] a 38.4% penetration rate compared to India's 40% and the United States's 80%.[93] As of 2020, it was estimated that 4.5 billion people use the Internet.[94]
The prevalent language for communication via the Internet has always been English. This may be a result of the origin of the Internet, as well as the language's role as a lingua franca and as a world language. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet.
After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%).[95] By region, 42% of the world's Internet users are based in Asia, 24% in Europe, 14% in North America, 10% in Latin America and the Caribbean taken together, 6% in Africa, 3% in the Middle East and 1% in Australia/Oceania.[96] The Internet's technologies have developed enough in recent years, especially in the use of Unicode, that good facilities are available for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain.
In an American study in 2005, the percentage of men using the Internet was very slightly ahead of the percentage of women, although this difference reversed in those under 30. Men logged on more often, spent more time online, and were more likely to be broadband users, whereas women tended to make more use of opportunities to communicate (such as email). Men were more likely to use the Internet to pay bills, participate in auctions, and for recreation such as downloading music and videos. Men and women were equally likely to use the Internet for shopping and banking.[97]
More recent studies indicate that in 2008, women significantly outnumbered men on most social networking sites, such as Facebook and Myspace, although the ratios varied with age.[98] In addition, women watched more streaming content, whereas men downloaded more.[99] In terms of blogs, men were more likely to blog in the first place; among those who blog, men were more likely to have a professional blog, whereas women were more likely to have a personal blog.[100]
Splitting by country, in 2012 Iceland, Norway, Sweden, the Netherlands, and Denmark had the highest Internet penetration by the number of users, with 93% or more of the population with access.[101]
Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net")[102] refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech;[103][104] Internaut refers to operators or technically highly capable users of the Internet;[105][106] digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation.[107]
The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly. Within the limitations imposed by small screens and other limited facilities of such pocket-sized devices, the services of the Internet, including email and the web, may be available. Service providers may restrict the services offered and mobile data charges may be significantly higher than other access methods.
Educational material at all levels from pre-school to post-doctoral is available from websites. Examples range from CBeebies, through school and high-school revision guides and virtual universities, to access to top-end scholarly literature through the likes of Google Scholar. For distance education, help with homework and other assignments, self-guided learning, whiling away spare time, or just looking up more detail on an interesting fact, it has never been easier for people to access educational information at any level from anywhere. The Internet in general and the World Wide Web in particular are important enablers of both formal and informal education. Further, the Internet allows universities, in particular, researchers from the social and behavioral sciences, to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results.[111]
The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software. Not only can a group cheaply communicate and share ideas but the wide reach of the Internet allows such groups more easily to form. An example of this is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice). Internet chat, whether using an IRC chat room, an instant messaging system, or a social networking website, allows colleagues to stay in touch in a very convenient way while working at their computers during the day. Messages can be exchanged even more quickly and conveniently than via email. These systems may allow files to be exchanged, drawings and images to be shared, or voice and video contact between team members.
Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work. Business and project teams can share calendars as well as documents and other information. Such collaboration occurs in a wide variety of areas including scientific research, software development, conference planning, political activism and creative writing. Social and political collaboration is also becoming more widespread as both Internet access and computer literacy spread.
The Internet allows computer users to remotely access other computers and information stores easily from any access point. Access may be with computer security, i.e. authentication and encryption technologies, depending on the requirements. This is encouraging new ways of working from home, collaboration and information sharing in many industries. An accountant sitting at home can audit the books of a company based in another country, on a server situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts could have been created by home-working bookkeepers, in other remote locations, based on information emailed to them from offices all over the world. Some of these things were possible before the widespread use of the Internet, but the cost of private leased lines would have made many of them infeasible in practice. An office worker away from their desk, perhaps on the other side of the world on a business trip or a holiday, can access their emails, access their data using cloud computing, or open a remote desktop session into their office PC using a secure virtual private network (VPN) connection on the Internet. This can give the worker complete access to all of their normal files and data, including email and other applications, while away from the office. It has been referred to among system administrators as the Virtual Private Nightmare,[112] because it extends the secure perimeter of a corporate network into remote locations and its employees' homes.
Many people use the World Wide Web to access news, weather and sports reports, to plan and book vacations and to pursue their personal interests. People use chat, messaging and email to make and stay in touch with friends worldwide, sometimes in the same way as some previously had pen pals. Social networking websites such as Facebook, Twitter, and Myspace have created new ways to socialize and interact. Users of these sites are able to add a wide variety of information to pages, to pursue common interests, and to connect with others. It is also possible to find existing acquaintances, to allow communication among existing groups of people. Sites like LinkedIn foster commercial and business connections. YouTube and Flickr specialize in users' videos and photographs. While social networking sites were initially for individuals only, today they are widely used by businesses and other organizations to promote their brands, to market to their customers and to encourage posts to "go viral". "Black hat" social media techniques are also employed by some organizations, such as spam accounts and astroturfing.
A risk for both individuals and organizations writing posts (especially public posts) on social networking websites is that especially foolish or controversial posts occasionally lead to an unexpected and possibly large-scale backlash on social media from other Internet users. This is also a risk in relation to controversial offline behavior, if it is widely made known. The nature of this backlash can range widely from counter-arguments and public mockery, through insults and hate speech, to, in extreme cases, rape and death threats. The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment in response to posts they have made on social media, and Twitter in particular has been criticised in the past for not doing enough to aid victims of online abuse.[113]
For organizations, such a backlash can cause overall brand damage, especially if reported by the media. However, this is not always the case, as any brand damage in the eyes of people with an opposing opinion to that presented by the organization could sometimes be outweighed by strengthening the brand in the eyes of others. Furthermore, if an organization or individual gives in to demands that others perceive as wrong-headed, that can then provoke a counter-backlash.
Some websites, such as Reddit, have rules forbidding the posting of personal information of individuals (also known as doxxing), due to concerns about such postings leading to mobs of large numbers of Internet users directing harassment at the specific individuals thereby identified. In particular, the Reddit rule forbidding the posting of personal information is widely understood to imply that all identifying photos and names must be censored in Facebook screenshots posted to Reddit. However, the interpretation of this rule in relation to public Twitter posts is less clear, and in any case, like-minded people online have many other ways they can use to direct each other's attention to public social media posts they disagree with.
Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Children may also encounter material which they may find upsetting, or material which their parents consider to be not age-appropriate. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering, and/or supervise their children's online activities, in an attempt to protect their children from inappropriate material on the Internet. The most popular social networking websites, such as Facebook and Twitter, commonly forbid users under the age of 13. However, these policies are typically trivial to circumvent by registering an account with a false birth date, and a significant number of children aged under 13 join such sites anyway. Social networking sites for younger children, which claim to provide better levels of protection for children, also exist.[114]
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic.[citation needed] Many Internet forums have sections devoted to games and funny videos.[citation needed] The Internet pornography and online gambling industries have taken advantage of the World Wide Web. Although many governments have attempted to restrict both industries' use of the Internet, in general, this has failed to stop their widespread popularity.[115]
Another area of leisure activity on the Internet is multiplayer gaming.[116] This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer.[117] Non-subscribers were limited to certain types of game play or certain games. Many people use the Internet to access and download music, movies and other works for their enjoyment and relaxation. Free and fee-based services exist for all of these activities, using centralized servers and distributed peer-to-peer technologies. Some of these sources exercise more care with respect to the original artists' copyrights than others.
Internet usage has been correlated to users' loneliness.[118] Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.
A 2017 book claimed that the Internet consolidates most aspects of human endeavor into singular arenas of which all of humanity are potential members and competitors, with fundamentally negative impacts on mental health as a result. While successes in each field of activity are pervasively visible and trumpeted, they are reserved for an extremely thin sliver of the world's most exceptional, leaving everyone else behind. Whereas, before the Internet, expectations of success in any field were supported by reasonable probabilities of achievement at the village, suburb, city or even state level, the same expectations in the Internet world are virtually certain to bring disappointment today: there is always someone else, somewhere on the planet, who can do better and take the now one-and-only top spot.[119]
Cybersectarianism is a new organizational form which involves: "highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in the collective study via email, on-line chat rooms, and web-based message boards."[120] In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.
Cyberslacking can become a drain on corporate resources; the average UK employee spent 57 minutes a day surfing the Web while at work, according to a 2003 study by Peninsula Business Services.[121] Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity.[122]
Electronic business (e-business) encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equates to $16 trillion for 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales.[123]
While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide.[124] Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses resulting in increases in income inequality.[125][126][127]
Author Andrew Keen, a long-time critic of the social transformations caused by the Internet, has focused on the economic effects of consolidation from Internet businesses. Keen cites a 2013 Institute for Local Self-Reliance report saying brick-and-mortar retailers employ 47 people for every $10 million in sales while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, transportation network company Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people.[128]
Telecommuting is the performance of work within a traditional worker and employer relationship when it is facilitated by tools such as groupware, virtual private networks, conference calling, videoconferencing, and voice over IP (VoIP), so that work may be performed from any location, most conveniently the worker's home. It can be efficient and useful for companies as it allows workers to communicate over long distances, saving significant amounts of travel time and cost. As broadband Internet connections become commonplace, more workers have adequate bandwidth at home to use these tools to link their home to their corporate intranet and internal communication networks.
Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries.[129] In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work.[130] The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park.[131] The English Wikipedia has the largest user base among wikis on the World Wide Web[132] and ranks in the top 10 among all Web sites in terms of traffic.[133]
The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing to carry out their mission, giving rise to Internet activism, most notably practiced by rebels in the Arab Spring.[134][135] The New York Times suggested that social media websites, such as Facebook and Twitter, helped people organize the political revolutions in Egypt, by helping activists organize protests, communicate grievances, and disseminate information.[136]
Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies.[citation needed]
The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. Kiva raises funds for local intermediary microfinance organizations which post stories and updates on behalf of the borrowers. Lenders can contribute as little as $25 to loans of their choice, and receive their money back as borrowers repay. Kiva falls short of being a pure peer-to-peer charity, in that loans are disbursed before being funded by lenders and borrowers do not communicate with lenders themselves.[137][138]
Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information.
Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users. Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibilities of cyber warfare using similar methods on a large scale.[citation needed]
The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet.[139] In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies.[140][141][142] Packet capture is the monitoring of data traffic on a computer network. Computers communicate over the Internet by breaking up messages (emails, images, videos, web pages, files, etc.) into small chunks called "packets", which are routed through a network of computers, until they reach their destination, where they are assembled back into a complete "message" again. A packet capture appliance intercepts these packets as they are traveling through the network, in order to examine their contents using other programs. A packet capture is an information gathering tool, but not an analysis tool. That is, it gathers "messages" but it does not analyze them and figure out what they mean. Other programs are needed to perform traffic analysis and sift through intercepted data looking for important/useful information. Under the Communications Assistance For Law Enforcement Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and voice over Internet protocol (VoIP) traffic.[143]
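To illustrate what examining a packet's contents involves, the sketch below decodes the fixed 20-byte IPv4 header of a packet; the header bytes are constructed inside the script purely for illustration rather than taken from a real capture, and real capture tools work on live traffic instead.

# Build and then decode a sample 20-byte IPv4 header with the struct module.
import struct
import ipaddress

# Fields: version/IHL, DSCP, total length, ID, flags/fragment, TTL,
# protocol (6 = TCP), checksum, source address, destination address.
header = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 40, 0x1234, 0, 64, 6, 0,
                     ipaddress.ip_address("198.51.100.1").packed,
                     ipaddress.ip_address("203.0.113.9").packed)

fields = struct.unpack("!BBHHHBBH4s4s", header)
print("TTL:", fields[5])
print("protocol:", fields[6])                        # 6 means the payload is TCP
print("source:", ipaddress.ip_address(fields[8]))
print("destination:", ipaddress.ip_address(fields[9]))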
The large amount of data gathered from packet capturing requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access of certain types of web sites, or communicating via email or chat with certain parties.[144] Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data.[145] Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software was allegedly installed by German Siemens AG and Finnish Nokia.[146]
Some governments, such as those of Burma, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.[152]
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret.[153] Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet, but do not mandate filter software. Many free or commercially available software programs, called content-control software are available to users to block offensive websites on individual computers or networks, in order to limit access by children to pornographic material or depiction of violence.
As the Internet is a heterogeneous network, the physical characteristics, including for example the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization.[154]
The volume of Internet traffic is difficult to measure, because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.
An Internet blackout or outage can be caused by local signalling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to a small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia.[155] Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93%[156] of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests.[157]
Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB.[158] The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis.[158]
In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic.[159][160] According to a non-peer reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.[161] The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emission per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.[162]
en/2751.html.txt
Internet Explorer[a] (formerly Microsoft Internet Explorer[b] and Windows Internet Explorer,[c] commonly abbreviated IE or MSIE) is a series of graphical web browsers developed by Microsoft and included in the Microsoft Windows line of operating systems, starting in 1995. It was first released as part of the add-on package Plus! for Windows 95 that year. Later versions were available as free downloads or in service packs, and were included in the original equipment manufacturer (OEM) service releases of Windows 95 and later versions of Windows. The browser is discontinued, but still maintained.[4]
Internet Explorer was once the most widely used web browser, attaining a peak of about 95% usage share by 2003.[5] This came after Microsoft used bundling to win the first browser war against Netscape, which was the dominant browser in the 1990s. Its usage share has since declined with the launch of Firefox (2004) and Google Chrome (2008), and with the growing popularity of operating systems such as Android and iOS that do not support Internet Explorer.
Estimates for Internet Explorer's market share are about 1.4% across all platforms, or by StatCounter's numbers ranked 8th.[6] On traditional PCs (i.e. excluding mobile and Xbox), the only platform on which it has ever had significant share, it is ranked 5th at 3.26%, behind its successor Microsoft Edge, which first overtook Internet Explorer in market share in August 2019. Combined, IE and Edge rank fourth, after Firefox; they previously ranked second, after Chrome.
Microsoft spent over US$100 million per year on Internet Explorer in the late 1990s,[7] with over 1,000 people involved in the project by 1999.[8][9]
Versions of Internet Explorer for other operating systems have also been produced, including an Xbox 360 version called Internet Explorer for Xbox, versions for platforms Microsoft no longer supports (Internet Explorer for Mac and Internet Explorer for UNIX, covering Solaris and HP-UX), and an embedded OEM version called Pocket Internet Explorer, later rebranded Internet Explorer Mobile, made for Windows CE and Windows Phone; the Windows Phone 7 release was based on Internet Explorer 7.
On March 17, 2015, Microsoft announced that Microsoft Edge would replace Internet Explorer as the default browser on its Windows 10 devices. This effectively makes Internet Explorer 11 the last release. Internet Explorer, however, remains on Windows 10 and Windows Server 2019 primarily for enterprise purposes.[10][11] Since January 12, 2016, only Internet Explorer 11 has official support for consumers; extended support for Internet Explorer 10 ended on January 31, 2020.[12][13][14] Support varies based on the operating system's technical capabilities and its support life cycle.[15]
The browser has been scrutinized throughout its development for use of third-party technology (such as the source code of Spyglass Mosaic, used without royalty in early versions) and security and privacy vulnerabilities, and the United States and the European Union have alleged that integration of Internet Explorer with Windows has been to the detriment of fair browser competition.[16]
The Internet Explorer project was started in the summer of 1994 by Thomas Reardon, who, according to the Massachusetts Institute of Technology Review of 2003,[17] used source code from Spyglass, Inc. Mosaic, which was an early commercial web browser with formal ties to the pioneering National Center for Supercomputing Applications (NCSA) Mosaic browser.[18][19] In late 1994, Microsoft licensed Spyglass Mosaic for a quarterly fee plus a percentage of Microsoft's non-Windows revenues for the software.[19] Although bearing a name like NCSA Mosaic, Spyglass Mosaic had used the NCSA Mosaic source code sparingly.[20]
The first version, dubbed Microsoft Internet Explorer, was installed as part of the Internet Jumpstart Kit in Microsoft Plus! for Windows 95.[21] The Internet Explorer team began with about six people in early development.[20][22] Internet Explorer 1.5 was released several months later for Windows NT and added support for basic table rendering. Because Microsoft distributed the browser free of charge with its operating system, it paid no per-copy royalties to Spyglass Inc., which resulted in a lawsuit and a US$8 million settlement on January 22, 1997.[18][23]
Microsoft was sued by Synet Inc. in 1996 over trademark infringement.[24]
Internet Explorer 11 is featured in Windows 8.1, which was released on October 17, 2013. It includes an incomplete mechanism for syncing tabs, a major update to its developer tools,[25][26] enhanced scaling for high-DPI screens,[27] HTML5 prerender and prefetch,[28] hardware-accelerated JPEG decoding,[29] closed captioning, and HTML5 full screen,[30] and it is the first version of Internet Explorer to support WebGL[31][32][33] and Google's SPDY protocol (starting at v3).[34] This version of IE has features dedicated to Windows 8.1, including cryptography (WebCrypto),[25] adaptive bitrate streaming (Media Source Extensions)[35] and Encrypted Media Extensions.[30]
Internet Explorer 11 was made available for Windows 7 users to download on November 7, 2013, with Automatic Updates in the following weeks.[36]
Internet Explorer 11's user agent string now identifies the agent as "Trident" (the underlying layout engine) instead of "MSIE". It also announces compatibility with Gecko (the layout engine of Firefox).
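For illustration, a small TypeScript sketch of how a script might distinguish IE 11 from older versions by looking for the "Trident" and "rv:11.0" tokens rather than "MSIE"; the helper function is hypothetical, and the sample string follows the general form IE 11 reports on Windows 7.

```ts
// Hypothetical helper: classify a user-agent string as IE 11, legacy IE, or other.
// IE 11 reports a "Trident/7.0 ... rv:11.0" token instead of the older "MSIE" token.
function classifyUserAgent(ua: string): "IE11" | "legacy IE" | "other" {
  if (/Trident\/7\.0.*rv:11\.0/.test(ua)) return "IE11";
  if (/MSIE \d+/.test(ua)) return "legacy IE";
  return "other";
}

// Example string in the general shape IE 11 sends on Windows 7 (illustrative).
const ie11Ua = "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko";
console.log(classifyUserAgent(ie11Ua)); // "IE11"
```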
Microsoft claimed that Internet Explorer 11, running the WebKit SunSpider JavaScript Benchmark, was the fastest browser as of October 15, 2013.[37]
Internet Explorer 11 was made available for Windows Server 2012 and Windows Embedded 8 Standard in the spring of 2019.[38]
Microsoft Edge, officially unveiled on January 21, 2015, has replaced Internet Explorer as the default browser on Windows 10. Internet Explorer is still installed in Windows 10 in order to maintain compatibility with older websites and intranet sites that require ActiveX and other Microsoft legacy web technologies.[39][40][41]
According to Microsoft, development of new features for Internet Explorer has ceased. However, it will continue to be maintained as part of the support policy for the versions of Windows with which it is included.[4]
On June 1, 2020, the Internet Archive removed the latest version of Internet Explorer from its list of supported browsers, citing its dated infrastructure that makes it hard to work with, following the suggestion of Microsoft Chief of Security Chris Jackson that users not use it as their browser of choice.[42][43]
Internet Explorer has been designed to view a broad range of web pages and provide certain features within the operating system, including Microsoft Update. During the heyday of the browser wars, Internet Explorer superseded Netscape only when it caught up technologically to support the progressive features of the time.[44][better source needed]
Internet Explorer, using the Trident layout engine:
Internet Explorer uses DOCTYPE sniffing to choose between standards mode and a "quirks mode" in which it deliberately mimics nonstandard behaviours of old versions of MSIE for HTML and CSS rendering on screen (Internet Explorer always uses standards mode for printing). It also provides its own dialect of ECMAScript called JScript.
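A page script can observe which mode DOCTYPE sniffing selected through the standard document.compatMode property; a minimal TypeScript sketch:

```ts
// Runs in a browser context: document.compatMode reflects the result of
// DOCTYPE sniffing -- "CSS1Compat" for standards mode, "BackCompat" for quirks mode.
function renderingMode(doc: Document): "standards" | "quirks" {
  return doc.compatMode === "BackCompat" ? "quirks" : "standards";
}

// A page served with <!DOCTYPE html> is rendered in standards mode,
// while a page with no DOCTYPE at all falls back to quirks mode.
console.log(`This page is rendered in ${renderingMode(document)} mode`);
```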
Internet Explorer was criticised by Tim Berners-Lee for its limited support for SVG, which is promoted by W3C.[48]
Internet Explorer has introduced an array of proprietary extensions to many of the standards, including HTML, CSS, and the DOM. This has resulted in a number of web pages that appear broken in standards-compliant web browsers and has introduced the need for a "quirks mode" to allow for rendering improper elements meant for Internet Explorer in these other browsers.
Internet Explorer has introduced several extensions to the DOM that have been adopted by other browsers.
These include the innerHTML property, which provides access to the HTML string within an element and which appeared in IE 5 but was only standardized as part of HTML5 roughly 15 years later, after all other browsers had implemented it for compatibility;[49] the XMLHttpRequest object, which allows HTTP requests to be sent and HTTP responses to be received, and may be used to perform AJAX; and the designMode attribute of the contentDocument object, which enables rich-text editing of HTML documents.[citation needed] Some of these functionalities were not possible until the introduction of the W3C DOM methods. Its Ruby character extension to HTML is also accepted as a module in W3C XHTML 1.1, though it is not found in all versions of W3C HTML.
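A brief sketch in TypeScript combining two of these IE-originated extensions, XMLHttpRequest and innerHTML, in the asynchronous style commonly called AJAX; the URL and element id are placeholders, not values from the article.

```ts
// Fetch a fragment with XMLHttpRequest (originally an IE extension, later standardized)
// and inject it into the page with innerHTML (introduced in IE 5).
function loadFragment(url: string, targetId: string): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url, true); // true = asynchronous, the "A" in AJAX
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      const target = document.getElementById(targetId);
      if (target) target.innerHTML = xhr.responseText;
    }
  };
  xhr.send();
}

// Placeholder URL and element id for illustration.
loadFragment("/news/latest.html", "news-panel");
```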
Microsoft submitted several other features of IE for consideration by the W3C for standardization. These include the 'behaviour' CSS property, which connects the HTML elements with JScript behaviours (known as HTML Components, HTC); HTML+TIME profile, which adds timing and media synchronization support to HTML documents (similar to the W3C XHTML+SMIL), and the VML vector graphics file format. However, all were rejected, at least in their original forms; VML was subsequently combined with PGML (proposed by Adobe and Sun), resulting in the W3C-approved SVG format, one of the few vector image formats being used on the web, which IE did not support until version 9.[50]
Other non-standard behaviours include: support for vertical text, but in a syntax different from W3C CSS3 candidate recommendation, support for a variety of image effects[51] and page transitions, which are not found in W3C CSS, support for obfuscated script code, in particular JScript.Encode,[52] as well as support for embedding EOT fonts in web pages.[53]
Support for favicons was first added in Internet Explorer 5.[54] Internet Explorer supports favicons in PNG, static GIF and native Windows icon formats. In Windows Vista and later, Internet Explorer can display native Windows icons that have embedded PNG files.[55][56]
Internet Explorer makes use of the accessibility framework provided in Windows. Internet Explorer is also a user interface for FTP, with operations similar to that of Windows Explorer. Pop-up blocking and tabbed browsing were added respectively in Internet Explorer 6 and Internet Explorer 7. Tabbed browsing can also be added to older versions by installing MSN Search Toolbar or Yahoo Toolbar.
Internet Explorer caches visited content in the Temporary Internet Files folder to allow quicker access (or offline access) to previously visited pages. The content is indexed in a database file, known as Index.dat. Multiple Index.dat files exist which index different content—visited content, web feeds, visited URLs, cookies, etc.[57]
Prior to IE7, clearing the cache used to clear the index but the files themselves were not reliably removed, posing a potential security and privacy risk. In IE7 and later, when the cache is cleared, the cache files are more reliably removed, and the index.dat file is overwritten with null bytes.
Caching has been improved in IE9.[58]
Internet Explorer is fully configurable using Group Policy. Administrators of Windows Server domains (for domain-joined computers) or the local computer can apply and enforce a variety of settings on computers that affect the user interface (such as disabling menu items and individual configuration options), as well as underlying security features such as downloading of files, zone configuration, per-site settings, ActiveX control behaviour and others. Policy settings can be configured for each user and for each machine. Internet Explorer also supports Integrated Windows Authentication.
Internet Explorer uses a componentized architecture built on the Component Object Model (COM) technology. It consists of several major components, each of which is contained in a separate Dynamic-link library (DLL) and exposes a set of COM programming interfaces hosted by the Internet Explorer main executable, iexplore.exe:[59]
Internet Explorer does not include any native scripting functionality. Rather, MSHTML.dll exposes an API that permits a programmer to develop a scripting environment to be plugged-in and to access the DOM tree. Internet Explorer 8 includes the bindings for the Active Scripting engine, which is a part of Microsoft Windows and allows any language implemented as an Active Scripting module to be used for client-side scripting. By default, only the JScript and VBScript modules are provided; third party implementations like ScreamingMonkey (for ECMAScript 4 support) can also be used. Microsoft also makes available the Microsoft Silverlight runtime (not supported in Windows RT) that allows CLI languages, including DLR-based dynamic languages like IronPython and IronRuby, to be used for client-side scripting.
Internet Explorer 8 introduces some major architectural changes, called Loosely Coupled IE (LCIE). LCIE separates the main window process (frame process) from the processes hosting the different web applications in different tabs (tab processes). A frame process can create multiple tab processes, each of which can be of a different integrity level; each tab process can host multiple web sites. The processes use asynchronous Inter-Process Communication to synchronize themselves. Generally, there will be a single frame process for all web sites. In Windows Vista with Protected Mode turned on, however, opening privileged content (such as local HTML pages) will create a new tab process as it will not be constrained by Protected Mode.[61]
Internet Explorer exposes a set of Component Object Model (COM) interfaces that allow add-ons to extend the functionality of the browser.[59] Extensibility is divided into two types: browser extensibility and content extensibility. Browser extensibility involves adding context menu entries, toolbars, menu items or Browser Helper Objects (BHO). BHOs are used to extend the feature set of the browser, whereas the other extensibility options are used to expose that feature in the user interface. Content extensibility adds support for non-native content formats.[59] It allows Internet Explorer to handle new file formats and new protocols, e.g. WebM or SPDY.[59] In addition, web pages can integrate widgets known as ActiveX controls, which run on Windows only but have vast potential to extend content capabilities; Adobe Flash Player and Microsoft Silverlight are examples.[59] Add-ons can be installed either locally or directly by a web site.
Since malicious add-ons can compromise the security of a system, Internet Explorer implements several safeguards. Internet Explorer 6 with Service Pack 2 and later feature an Add-on Manager for enabling or disabling individual add-ons, complemented by a "No Add-Ons" mode. Starting with Windows Vista, Internet Explorer and its BHOs run with restricted privileges and are isolated from the rest of the system. Internet Explorer 9 introduced a new component – Add-on Performance Advisor. Add-on Performance Advisor shows a notification when one or more of installed add-ons exceed a pre-set performance threshold. The notification appears in the Notification Bar when the user launches the browser. Windows 8 and Windows RT introduce a Metro-style version of Internet Explorer that is entirely sandboxed and does not run add-ons at all.[62] In addition, Windows RT cannot download or install ActiveX controls at all; although existing ones bundled with Windows RT still run in the traditional version of Internet Explorer.[62]
Internet Explorer itself can be hosted by other applications via a set of COM interfaces. This can be used to embed the browser functionality inside a computer program or create Internet Explorer shells.[59]
Internet Explorer uses a zone-based security framework that groups sites based on certain conditions, including whether it is an Internet- or intranet-based site as well as a user-editable whitelist. Security restrictions are applied per zone; all the sites in a zone are subject to the restrictions.
Internet Explorer 6 SP2 onwards uses the Attachment Execution Service of Microsoft Windows to mark executable files downloaded from the Internet as being potentially unsafe. Accessing files marked as such will prompt the user to make an explicit trust decision to execute the file, as executables originating from the Internet can be potentially unsafe. This helps in preventing accidental installation of malware.
Internet Explorer 7 introduced the phishing filter, which restricts access to phishing sites unless the user overrides the decision. With version 8, it also blocks access to sites known to host malware. Downloads are also checked to see if they are known to be malware infected.
In Windows Vista, Internet Explorer by default runs in what is called Protected Mode, where the privileges of the browser itself are severely restricted—it cannot make any system-wide changes. One can optionally turn this mode off, but this is not recommended. This also effectively restricts the privileges of any add-ons. As a result, even if the browser or any add-on is compromised, the damage the security breach can cause is limited.
Patches and updates to the browser are released periodically and made available through the Windows Update service, as well as through Automatic Updates. Although security patches continue to be released for a range of platforms, most feature additions and security infrastructure improvements are only made available on operating systems which are in Microsoft's mainstream support phase.
On December 16, 2008, Trend Micro recommended users switch to rival browsers until an emergency patch was released to fix a potential security risk which "could allow outside users to take control of a person's computer and steal their passwords". Microsoft representatives countered this recommendation, claiming that "0.02% of internet sites" were affected by the flaw. A fix for the issue was released the following day with the Security Update for Internet Explorer KB960714, on Microsoft Windows Update.[63][64]
In 2010, Germany's Federal Office for Information Security, known by its German initials, BSI, advised "temporary use of alternative browsers" because of a "critical security hole" in Microsoft's software that could allow hackers to remotely plant and run malicious code on Windows PCs. [65]
In 2011, a report by Accuvant, funded by Google, rated the security (based on sandboxing) of Internet Explorer worse than Google Chrome but better than Mozilla Firefox.[66][67]
A 2017 browser security white paper comparing Google Chrome, Microsoft Edge, and Internet Explorer 11, published by X41 D-Sec, came to similar conclusions, also based on sandboxing and support of legacy web technologies.[68]
Internet Explorer has been subjected to many security vulnerabilities and concerns: much of the spyware, adware, and computer viruses across the Internet are made possible by exploitable bugs and flaws in the security architecture of Internet Explorer, sometimes requiring nothing more than viewing of a malicious web page in order to install themselves. This is known as a "drive-by install". There are also attempts to trick the user into installing malicious software by misrepresenting the software's true purpose in the description section of an ActiveX security alert.
A number of security flaws affecting IE originated not in the browser itself, but ActiveX-based add-ons used by it. Because the add-ons have the same privilege as IE, the flaws can be as critical as browser flaws. This has led to the ActiveX-based architecture being criticized for being fault-prone. By 2005, some experts maintained that the dangers of ActiveX have been overstated and there were safeguards in place.[69] In 2006, new techniques using automated testing found more than a hundred vulnerabilities in standard Microsoft ActiveX components.[70] Security features introduced in Internet Explorer 7 mitigated some of these vulnerabilities.
In 2008, Internet Explorer had a number of published security vulnerabilities. According to research done by security research firm Secunia, Microsoft did not respond as quickly as its competitors in fixing security holes and making patches available.[71] The firm also reported 366 vulnerabilities in ActiveX controls, an increase from the previous year.
According to an October 2010 report in The Register, researcher Chris Evans had detected a known security vulnerability which, dating back to 2008, had by then gone unfixed for at least six hundred days.[72] Microsoft says that it had known about this vulnerability, but that it was of exceptionally low severity, as the victim web site must be configured in a peculiar way for this attack to be feasible at all.[73]
In December 2010, researchers were able to bypass the "Protected Mode" feature in Internet Explorer.[74]
In an advisory on January 14, 2010, Microsoft said that attackers targeting Google and other U.S. companies used software that exploits a security hole, which had already been patched, in Internet Explorer. The vulnerability affected Internet Explorer 6 on Windows XP and Server 2003, IE6 SP1 on Windows 2000 SP4, IE7 on Windows Vista, XP, Server 2008 and Server 2003, and IE8 on Windows 7, Vista, XP, Server 2003, and Server 2008 (R2).[76]
The German government warned users against using Internet Explorer and recommended switching to an alternative web browser, due to the major security hole described above that was exploited in Internet Explorer.[77] The Australian and French governments issued similar warnings a few days later.[78][79][80][81]
On April 26, 2014, Microsoft issued a security advisory relating to CVE-2014-1776 (use-after-free vulnerability in Microsoft Internet Explorer 6 through 11[82]), a vulnerability that could allow "remote code execution" in Internet Explorer versions 6 to 11.[83] On April 28, 2014, the United States Department of Homeland Security's United States Computer Emergency Readiness Team (US-CERT) released an advisory stating that the vulnerability could result in "the complete compromise" of an affected system.[84] US-CERT recommended reviewing Microsoft's suggestions to mitigate an attack or using an alternate browser until the bug is fixed.[85][86] The UK National Computer Emergency Response Team (CERT-UK) published an advisory announcing similar concerns and for users to take the additional step of ensuring their antivirus software is up to date.[87] Symantec, a cyber security firm, confirmed that "the vulnerability crashes Internet Explorer on Windows XP".[88] The vulnerability was resolved on May 1, 2014, with a security update.[89]
The adoption rate of Internet Explorer seems to be closely related to that of Microsoft Windows, as it is the default web browser that comes with Windows. Since the integration of Internet Explorer 2.0 with Windows 95 OSR 1 in 1996, and especially after version 4.0's release in 1997, the adoption was greatly accelerated: from below 20% in 1996, to about 40% in 1998, and over 80% in 2000. This made Microsoft the winner in the infamous 'first browser war' against Netscape. Netscape Navigator was the dominant browser during 1995 and until 1997, but rapidly lost share to IE starting in 1998, and eventually slipped behind in 1999. The integration of IE with Windows led to a lawsuit by AOL, Netscape's owner, accusing Microsoft of unfair competition. The infamous case was eventually won by AOL but by then it was too late, as Internet Explorer had already become the dominant browser.
Internet Explorer peaked during 2002 and 2003, with about 95% share. Its first notable competitor after beating Netscape was Firefox from Mozilla, which itself was an offshoot from Netscape.
Firefox 1.0 surpassed Internet Explorer 5 in early 2005, reaching 8 percent market share.[90]
Approximate usage over time based on various usage share counters averaged for the year overall, or for the fourth quarter, or for the last month in the year depending on availability of reference.[91][92][93][94][95][96]
According to StatCounter, Internet Explorer's market share fell below 50% in September 2010.[97] In May 2012, Google Chrome overtook Internet Explorer as the most used browser worldwide, according to StatCounter.[98]
Browser Helper Objects are also used by many search engine companies and third parties for creating add-ons that access their services, such as search engine toolbars. Because of the use of COM, it is possible to embed web-browsing functionality in third-party applications. Hence, there are several Internet Explorer shells, and several content-centric applications like RealPlayer also use Internet Explorer's web browsing module for viewing web pages within the applications.
While a major upgrade of Internet Explorer can be uninstalled in a traditional way if the user has saved the original application files for installation, the matter of uninstalling the version of the browser that has shipped with an operating system remains a controversial one.
The idea of removing a stock install of Internet Explorer from a Windows system was proposed during the United States v. Microsoft Corp. case. One of Microsoft's arguments during the trial was that removing Internet Explorer from Windows may result in system instability. Indeed, programs that depend on libraries installed by IE, including Windows help and support system, fail to function without IE. Before Windows Vista, it was not possible to run Windows Update without IE because the service used ActiveX technology, which no other web browser supports.
The popularity of Internet Explorer has led to the appearance of malware abusing its name. On January 28, 2011, a fake Internet Explorer browser calling itself "Internet Explorer – Emergency Mode" appeared. It closely resembles the real Internet Explorer but has fewer buttons and no search bar. If a user attempts to launch any other browser such as Google Chrome, Mozilla Firefox, Opera, Safari or the real Internet Explorer, this browser will be loaded instead. It also displays a fake error message, claiming that the computer is infected with malware and Internet Explorer has entered "Emergency Mode". It blocks access to legitimate sites such as Google if the user tries to access them.[99][100]
en/2752.html.txt
ADDED
@@ -0,0 +1,198 @@
The Internet is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP) to communicate between networks and devices. It is a network of networks that consists of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries a vast range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web (WWW), electronic mail, telephony, and file sharing.
The origins of the Internet date back to the development of packet switching and research commissioned by the United States Department of Defense in the 1960s to enable time-sharing of computers.[1] The primary precursor network, the ARPANET, initially served as a backbone for interconnection of regional academic and military networks in the 1970s. The funding of the National Science Foundation Network as a new backbone in the 1980s, as well as private funding for other commercial extensions, led to worldwide participation in the development of new networking technologies, and the merger of many networks.[2] The linking of commercial networks and enterprises by the early 1990s marked the beginning of the transition to the modern Internet,[3] and generated a sustained exponential growth as generations of institutional, personal, and mobile computers were connected to the network. Although the Internet was widely used by academia in the 1980s, commercialization incorporated its services and technologies into virtually every aspect of modern life.
Most traditional communication media, including telephony, radio, television, paper mail and newspapers are reshaped, redefined, or even bypassed by the Internet, giving birth to new services such as email, Internet telephony, Internet television, online music, digital newspapers, and video streaming websites. Newspaper, book, and other print publishing are adapting to website technology, or are reshaped into blogging, web feeds and online news aggregators. The Internet has enabled and accelerated new forms of personal interactions through instant messaging, Internet forums, and social networking. Online shopping has grown exponentially both for major retailers and small businesses and entrepreneurs, as it enables firms to extend their "brick and mortar" presence to serve a larger market or even sell goods and services entirely online. Business-to-business and financial services on the Internet affect supply chains across entire industries.
The Internet has no single centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own policies.[4] The overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.[5] In November 2006, the Internet was included on USA Today's list of New Seven Wonders.[6]
When the term Internet is used to refer to the specific global system of interconnected Internet Protocol (IP) networks, the word is a proper noun according to the Chicago Manual of Style[7] that should be written with an initial capital letter. In common use and the media, it is often not capitalized, viz. the internet. Some guides specify that the word should be capitalized when used as a noun, but not capitalized when used as an adjective.[8] The Internet is also often referred to as the Net, as a short form of network. Historically, as early as 1849, the word internetted was used uncapitalized as an adjective, meaning interconnected or interwoven.[9] The designers of early computer networks used internet both as a noun and as a verb in shorthand form of internetwork or internetworking, meaning interconnecting computer networks.[10][11]
The terms Internet and World Wide Web are often used interchangeably in everyday speech; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web or the Web is only one of a large number of Internet services.[12][13] The Web is a collection of interconnected documents (web pages) and other web resources, linked by hyperlinks and URLs.[14] The term Interweb is a portmanteau of Internet and World Wide Web typically used sarcastically to parody a technically unsavvy user.
The Advanced Research Projects Agency (ARPA) of the United States Department of Defense funded research into time-sharing of computers in the 1960s.[15][16][17] Meanwhile, research into packet switching, one of the fundamental Internet technologies, started in the work of Paul Baran in the early 1960s and, independently, Donald Davies in 1965.[1][18] Packet switching was incorporated into the proposed design for the ARPANET in 1967 and other networks such as the NPL network, the Merit Network, and CYCLADES, which were developed in the late 1960s and early 1970s.[19]
ARPANET development began with two network nodes which were interconnected between the Network Measurement Center at the University of California, Los Angeles (UCLA) Henry Samueli School of Engineering and Applied Science directed by Leonard Kleinrock, and the NLS system at SRI International (SRI) by Douglas Engelbart in Menlo Park, California, on 29 October 1969.[20] The third site was the Culler-Fried Interactive Mathematics Center at the University of California, Santa Barbara, followed by the University of Utah Graphics Department. In a sign of future growth, fifteen sites were connected to the young ARPANET by the end of 1971.[21][22] These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.
Early international collaborations for the ARPANET were rare. Connections were made in 1973 to the Norwegian Seismic Array (NORSAR) via a satellite station in Tanum, Sweden, and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks.[23][24] The ARPANET project and international working groups led to the development of various protocols and standards by which multiple separate networks could become a single network or "a network of networks".[25] In 1974, Vint Cerf and Bob Kahn used the term internet as a shorthand for internetwork in RFC 675,[11] and later RFCs repeated this use.[26] Cerf and Kahn credit Louis Pouzin with important influences on TCP/IP design.[27] Commercial PTT providers were concerned with developing X.25 public data networks.[28]
Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which permitted worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s.[29] The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89.[30][31][32][33] Although other network protocols such as UUCP had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers (ISPs) emerged in 1989 in the United States and Australia.[34] The ARPANET was decommissioned in 1990.
Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet.[35] Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were capable with satellites.[36] Six months later Tim Berners-Lee would begin writing WorldWideWeb, the first web browser after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9,[37] the HyperText Markup Language (HTML), the first Web browser (which was also a HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server,[38] and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994.[39] In 1996 OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe.[40] By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic.[41]
As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to exhibit growth characteristics similar to those of the scaling of MOS transistors, exemplified by Moore's law: doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser lightwave systems, and noise performance.[44]
Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web[45] with its discussion forums, blogs, social networking, and online shopping sites. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1-Gbit/s, 10-Gbit/s, or more. The Internet continues to grow, driven by ever greater amounts of online information and knowledge, commerce, entertainment and social networking.[46] During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%.[47] This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.[48] As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30.2% of world population).[49] It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication, by 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet.[50]
The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. ICANN coordinates the assignment of unique identifiers for use on the Internet, including domain names, Internet Protocol (IP) addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet.[51]
Regional Internet registries (RIRs) were established for five regions of the world. The African Network Information Center (AfriNIC) for Africa, the American Registry for Internet Numbers (ARIN) for North America, the Asia-Pacific Network Information Centre (APNIC) for Asia and the Pacific region, the Latin American and Caribbean Internet Addresses Registry (LACNIC) for Latin America and the Caribbean region, and the Réseaux IP Européens – Network Coordination Centre (RIPE NCC) for Europe, the Middle East, and Central Asia were delegated to assign Internet Protocol address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region.
The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016.[52][53][54][55] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world".[56] Its members include individuals (anyone may join) as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.
The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, modems, etc. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se; Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard that runs across heterogeneous hardware and with packets guided to their destinations by IP routers.
Internet service providers (ISPs) establish the worldwide connectivity between individual networks at various levels of scope. End-users who only access the Internet when needed to perform a function or obtain information, represent the bottom of the routing hierarchy. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fibre optic cables and governed by peering agreements. Tier 2 and lower level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET. Both the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks.[57][disputed (for: unclear whether citation supports claim empirically) – discuss] Computers and routers use routing tables in their operating system to direct IP packets to the next-hop router or destination. Routing tables are maintained by manual configuration or automatically by routing protocols. End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.
An estimated 70 percent of the world's Internet traffic passes through Ashburn, Virginia.[58][59][60][61]
Common methods of Internet access by users include dial-up with a computer modem via telephone circuits, broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology (e.g. 3G, 4G). The Internet may often be accessed from computers in libraries and Internet cafes. Internet access points exist in many public places such as airport halls and coffee shops. Various terms are used, such as public Internet kiosk, public access terminal, and Web payphone. Many hotels also have public terminals that are usually fee-based. These terminals are widely accessed for various usages, such as ticket booking, bank deposit, or online payment. Wi-Fi provides wireless access to the Internet via local computer networks. Hotspots providing such access include Wi-Fi cafes, where users need to bring their own wireless devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based.
Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh, where the Internet can then be accessed from places such as a park bench.[62] Experiments have also been conducted with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular networks, and fixed wireless services. Modern smartphones can also access the Internet through the cellular carrier network. For Web browsing, these devices provide applications such as Google Chrome, Safari, and Firefox and a wide variety of other Internet software may be installed from app-stores. Internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016.[63]
The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012.[64] Mobile Internet connectivity has played an important role in expanding access in recent years especially in Asia and the Pacific and in Africa.[65] The number of unique mobile cellular subscriptions increased from 3.89 billion in 2012 to 4.83 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The number of subscriptions is predicted to rise to 5.69 billion users in 2020.[66] As of 2016[update], almost 60% of the world's population had access to a 4G broadband cellular network, up from almost 50% in 2015 and 11% in 2012[disputed – discuss].[66] The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect poorest users the most.[65]
Zero-rating, the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost, has offered opportunities to surmount economic hurdles, but has also been accused by its critics of creating a two-tiered Internet. To address the issues with zero-rating, an alternative model has emerged in the concept of 'equal rating' and is being tested in experiments by Mozilla and Orange in Africa. Equal rating prevents prioritization of one type of content and zero-rates all content up to a specified data cap. A study published by Chatham House found that 15 out of 19 countries researched in Latin America had some kind of hybrid or zero-rated product on offer. Some countries in the region had a handful of plans to choose from (across all mobile network operators) while others, such as Colombia, offered as many as 30 pre-paid and 34 post-paid plans.[67]
A study of eight countries in the Global South found that zero-rated data plans exist in every country, although there is a great range in the frequency with which they are offered and actually used in each.[68] The study looked at the top three to five carriers by market share in Bangladesh, Colombia, Ghana, India, Kenya, Nigeria, Peru and Philippines. Across the 181 plans examined, 13 per cent were offering zero-rated services. Another study, covering Ghana, Kenya, Nigeria and South Africa, found Facebook's Free Basics and Wikipedia Zero to be the most commonly zero-rated content.[69]
The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on the first two components.) This is a model architecture that divides methods into a layered system of protocols, originally documented in RFC 1122 and RFC 1123.
The software layers correspond to the environment or scope in which their services operate. At the top is the application layer, space for the application-specific networking methods used in software applications. For example, a web browser program uses the client-server application model and a specific protocol of interaction between servers and clients, while many file-sharing systems use a peer-to-peer paradigm.
Below this top layer, the transport layer connects applications on different hosts with a logical channel through the network with appropriate data exchange methods. It provides several services including ordered, reliable delivery (TCP), and an unreliable datagram service (UDP).
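To make the contrast concrete, here is a minimal sketch (assuming a Node.js runtime; the host and ports are placeholders) that opens an ordered, reliable TCP connection and sends a single best-effort UDP datagram:

```ts
import { createConnection } from "node:net";   // TCP: ordered, reliable byte stream
import { createSocket } from "node:dgram";     // UDP: connectionless datagrams

// Placeholder host and ports for illustration.
const host = "example.com";

// TCP: a connection is established before data flows, and delivery is ordered and acknowledged.
const tcp = createConnection({ host, port: 80 }, () => {
  tcp.write("HEAD / HTTP/1.0\r\n\r\n");
  tcp.end();
});

// UDP: a datagram is simply sent; there is no guarantee it arrives, or in what order.
const udp = createSocket("udp4");
udp.send(Buffer.from("hello"), 9, host, () => udp.close());
```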
Underlying these layers are the networking technologies that interconnect networks at their borders and exchange traffic across them. The Internet layer implements the Internet Protocol which enables computers to identify and locate each other by Internet Protocol (IP) addresses, and route their traffic via intermediate (transit) networks.[70] The internet protocol layer code is independent of the type of network that it is physically running over.
At the bottom of the architecture is the link layer, which provides logical connectivity between hosts. The link layer code is usually the only software part customized to the type of physical networking link protocol. Many link layers have been implemented and each operates over a type of network link, such as within a local area network (LAN) or wide area network (e.g. Wi-Fi or Ethernet or a dial-up connection, ATM etc.).
The most prominent component of the Internet model is the Internet Protocol (IP). IP enables internetworking and, in essence, establishes the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.
For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via DHCP, or are configured.
However, the network also supports other addressing systems. Users generally enter domain names (e.g. "en.wikipedia.org") instead of IP addresses because they are easier to remember; they are converted by the Domain Name System (DNS) into IP addresses, which are more efficient for routing purposes.
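A minimal sketch of that name-to-address step, assuming a Node.js runtime with network access; the domain is the example used above.

```ts
import { lookup } from "node:dns/promises";

// Resolve a human-readable domain name to the IP address used for routing.
async function resolveExample(): Promise<void> {
  const { address, family } = await lookup("en.wikipedia.org");
  console.log(`en.wikipedia.org resolves to ${address} (IPv${family})`);
}

resolveExample();
```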
Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number.[71] IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed to address up to ≈4.3 billion (10^9) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011,[72] when the global IPv4 address allocation pool was exhausted.
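The ≈4.3 billion figure follows directly from the 32-bit address length; a one-line check in TypeScript:

```ts
// A 32-bit address field yields 2^32 distinct values.
const ipv4AddressSpace = 2 ** 32;
console.log(ipv4AddressSpace.toLocaleString("en-US")); // "4,294,967,296", i.e. about 4.3 billion
```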
Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s; it provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998.[73][74][75] IPv6 deployment has been ongoing since the mid-2000s and is currently growing around the world, since Internet address registries (RIRs) began to urge all resource managers to plan rapid adoption and conversion.[76]
IPv6 is not directly interoperable by design with IPv4. In essence, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities must exist for internetworking or nodes must have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol. Network infrastructure, however, has been lagging in this development. Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts, e.g., peering agreements, and by technical specifications or protocols that describe the exchange of data over the network. Indeed, the Internet is defined by its interconnections and routing policies.
A subnetwork or subnet is a logical subdivision of an IP network.[77]:1,16 The practice of dividing a network into two or more networks is called subnetting.
Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.
The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2^96 addresses, having a 32-bit routing prefix.
For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.
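A compact TypeScript sketch of the bitwise AND described above, using the documentation prefix 198.51.100.0/24 from the earlier example; the host address 198.51.100.23 is an arbitrary illustration.

```ts
// Apply a subnet mask to an IPv4 address with a bitwise AND to recover the routing prefix.
function toUint32(ip: string): number {
  return ip.split(".").reduce((acc, octet) => (acc << 8) | parseInt(octet, 10), 0) >>> 0;
}

function toDotted(n: number): string {
  return [24, 16, 8, 0].map(shift => (n >>> shift) & 0xff).join(".");
}

const address = toUint32("198.51.100.23"); // a host inside the example network
const netmask = toUint32("255.255.255.0"); // equivalent to the /24 prefix length
const network = (address & netmask) >>> 0;

console.log(toDotted(network)); // "198.51.100.0" -- the routing prefix
```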
Traffic is exchanged between subnetworks through routers when the routing prefixes of the source address and the destination address differ. A router serves as a logical or physical boundary between the subnets.
The benefits of subnetting an existing network vary with each deployment scenario. In the address allocation architecture of the Internet using CIDR and in large organizations, it is necessary to allocate address space efficiently. Subnetting may also enhance routing efficiency, or have advantages in network management when subnetworks are administratively controlled by different entities in a larger organization. Subnets may be arranged logically in a hierarchical architecture, partitioning an organization's network address space into a tree-like routing structure.
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the Internet Engineering Task Force (IETF).[78] The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. Resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices (BCP) when implementing Internet technologies.
The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services.
Most servers that provide these services are today hosted in data centers, and content is often accessed through high-performance content delivery networks.
The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. Hypertext Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for transferring information and for sharing and exchanging business data and logistics; it is one of many protocols that can be used for communication on the Internet.[79]
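As a rough illustration of HTTP as the Web's access protocol, the sketch below requests a resource identified by a URI using Python's standard urllib; the URL is a generic placeholder rather than a site referenced in this article.

    from urllib.request import urlopen

    # Send an HTTP GET request for the resource named by the URI
    with urlopen("http://example.org/") as response:
        print(response.status)                                  # e.g. 200 on success
        body = response.read().decode("utf-8", errors="replace")
        print(body[:200])                                       # start of the HTML document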
World Wide Web browser software, such as Microsoft's Internet Explorer/Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, lets users navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain any combination of computer data, including graphics, sounds, text, video, multimedia and interactive content that runs while the user is interacting with the page. Client-side software can include animations, games, office applications and scientific demonstrations. Through keyword-driven Internet research using search engines like Yahoo!, Bing and Google, users worldwide have easy, instant access to a vast and diverse amount of online information. Compared to printed media, books, encyclopedias and traditional libraries, the World Wide Web has enabled the decentralization of information on a large scale.
The Web has enabled individuals and organizations to publish ideas and information to a potentially large audience online at greatly reduced expense and time delay. Publishing a web page, a blog, or building a website involves little initial cost and many cost-free services are available. However, publishing and maintaining large, professional web sites with attractive, diverse and up-to-date information is still a difficult and expensive proposition. Many individuals and some companies and groups use web logs or blogs, which are largely used as easily updatable online diaries. Some commercial organizations encourage staff to communicate advice in their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result.
Advertising on popular web pages can be lucrative, and e-commerce, which is the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television.[80]:19 Many common online advertising practices are controversial and increasingly subject to regulation.
When the Web developed in the 1990s, a typical web page was stored in completed form on a web server, formatted in HTML, complete for transmission to a web browser in response to a request. Over time, the process of creating and serving web pages has become dynamic, creating a flexible design, layout, and content. Websites are often created using content management software with, initially, very little content. Contributors to these systems, who may be paid staff, members of an organization or the public, fill underlying databases with content using editing pages designed for that purpose while casual visitors view and read this content in HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.
Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet.[81][82] Pictures, documents, and other files are sent as email attachments. Email messages can be cc-ed to multiple email addresses.
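A hedged sketch of composing a message with an attachment and multiple Cc recipients, using Python's standard email and smtplib modules; every address, the attached file name, and the SMTP host below are hypothetical placeholders.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.org"
    msg["Cc"] = "carol@example.org, dave@example.org"   # cc-ed to multiple addresses
    msg["Subject"] = "Meeting notes"
    msg.set_content("Notes attached.")

    with open("notes.pdf", "rb") as f:                   # send a file as an attachment
        msg.add_attachment(f.read(), maintype="application",
                           subtype="pdf", filename="notes.pdf")

    with smtplib.SMTP("mail.example.org") as smtp:       # hypothetical outgoing mail server
        smtp.send_message(msg)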
Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP). The idea began in the early 1990s with walkie-talkie-like voice applications for personal computers. VoIP systems now dominate many markets, and are as easy to use and as convenient as a traditional telephone. The benefit has been in substantial cost savings over traditional telephone calls, especially over long distances. Cable, ADSL, and mobile data networks provide Internet access in customer premises[83] and inexpensive VoIP network adapters provide the connection for traditional analog telephone sets. The voice quality of VoIP often exceeds that of traditional calls. Remaining problems for VoIP include the situation that emergency services may not be universally available, and that devices rely on a local power supply, while older traditional phones are powered from the local loop, and typically operate during a power failure.
File sharing is an example of transferring large amounts of data across the Internet. A computer file can be emailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or File Transfer Protocol (FTP) server for easy download by others. It can be put into a "shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. In any of these cases, access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by digital signatures or by MD5 or other message digests. These simple features of the Internet, over a worldwide basis, are changing the production, sale, and distribution of anything that can be reduced to a computer file for transmission. This includes all manner of print publications, software products, news, music, film, video, photography, graphics and the other arts. This in turn has caused seismic shifts in each of the existing industries that previously controlled the production and distribution of these products.
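As an illustration of checking a received file's integrity with message digests, the following Python sketch hashes a file with SHA-256 and MD5 so the result can be compared against a published value; the file name is a placeholder.

    import hashlib

    def file_digest(path, algorithm="sha256", chunk_size=65536):
        """Hash a file in chunks so large downloads need not fit in memory."""
        h = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    print(file_digest("download.iso"))           # compare against the publisher's digest
    print(file_digest("download.iso", "md5"))    # MD5 is mentioned above, though it is weaker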
Streaming media is the real-time delivery of digital media for the immediate consumption or enjoyment by end users. Many radio and television broadcasters provide Internet feeds of their live audio and video productions. They may also allow time-shift viewing or listening such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access on-line media in much the same way as was previously possible only with a television or radio receiver. The range of available types of content is much wider, from specialized technical webcasts to on-demand popular multimedia services. Podcasting is a variation on this theme, where—usually audio—material is downloaded and played back on a computer or shifted to a portable media player to be listened to on the move. These techniques using simple equipment allow anybody, with little censorship or licensing control, to broadcast audio-visual material worldwide.
Digital media streaming increases the demand for network bandwidth. For example, standard image quality needs 1 Mbit/s link speed for SD 480p, HD 720p quality requires 2.5 Mbit/s, and the top-of-the-line HDX quality needs 4.5 Mbit/s for 1080p.[84]
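A back-of-the-envelope calculation from the bit rates quoted above shows how much data an hour of continuous streaming transfers at each quality level, assuming sustained playback and 1 GB = 10^9 bytes.

    # megabits per second for each quality level cited above
    rates_mbit_s = {"SD 480p": 1.0, "HD 720p": 2.5, "HDX 1080p": 4.5}

    for name, rate in rates_mbit_s.items():
        gb_per_hour = rate * 1e6 * 3600 / 8 / 1e9   # bits/s -> bytes -> GB over one hour
        print(name, round(gb_per_hour, 2), "GB per hour")
    # roughly 0.45, 1.1, and 2.0 GB per hour respectively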
Webcams are a low-cost extension of this phenomenon. While some webcams can give full-frame-rate video, the picture either is usually small or updates slowly. Internet users can watch animals around an African waterhole, ships in the Panama Canal, traffic at a local roundabout or monitor their own premises, live and in real time. Video chat rooms and video conferencing are also popular with many uses being found for personal webcams, with and without two-way sound. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with a vast number of users. It uses an HTML5 based web player by default to stream and show video files.[85] Registered users may upload an unlimited amount of video and build their own personal profile. YouTube claims that its users watch hundreds of millions, and upload hundreds of thousands of videos daily.
The Internet has enabled new forms of social interaction, activities, and social associations. This phenomenon has given rise to the scholarly study of the sociology of the Internet.
From 2000 to 2009, the number of Internet users globally rose from 394 million to 1.858 billion.[89] By 2010, 22 percent of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube.[90] In 2014 the world's Internet users surpassed 3 billion or 43.6 percent of world population, but two-thirds of the users came from the richest countries, with 78.0 percent of Europe's population using the Internet, followed by 57.4 percent of the Americas.[91] However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world coming from that region. The number of China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million Internet users.[92] By 2019, China was the world's leading country in terms of Internet users, with more than 800 million users, followed closely by India, with some 700 million users, with the United States a distant third with 275 million users. However, in terms of penetration, China has[when?] a 38.4% penetration rate compared to India's 40% and the United States's 80%.[93] As of 2020, it was estimated that 4.5 billion people use the Internet.[94]
The prevalent language for communication via the Internet has always been English. This may be a result of the origin of the Internet, as well as the language's role as a lingua franca and as a world language. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet.
After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%).[95] By region, 42% of the world's Internet users are based in Asia, 24% in Europe, 14% in North America, 10% in Latin America and the Caribbean taken together, 6% in Africa, 3% in the Middle East and 1% in Australia/Oceania.[96] The Internet's technologies have developed enough in recent years, especially in the use of Unicode, that good facilities are available for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain.
In an American study in 2005, the percentage of men using the Internet was very slightly ahead of the percentage of women, although this difference reversed in those under 30. Men logged on more often, spent more time online, and were more likely to be broadband users, whereas women tended to make more use of opportunities to communicate (such as email). Men were more likely to use the Internet to pay bills, participate in auctions, and for recreation such as downloading music and videos. Men and women were equally likely to use the Internet for shopping and banking.[97]
More recent studies indicate that in 2008, women significantly outnumbered men on most social networking sites, such as Facebook and Myspace, although the ratios varied with age.[98] In addition, women watched more streaming content, whereas men downloaded more.[99] In terms of blogs, men were more likely to blog in the first place; among those who blog, men were more likely to have a professional blog, whereas women were more likely to have a personal blog.[100]
By country, in 2012 Iceland, Norway, Sweden, the Netherlands, and Denmark had the highest Internet penetration by the number of users, with 93% or more of the population having access.[101]
Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net")[102] refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech;[103][104] Internaut refers to operators or technically highly capable users of the Internet;[105][106] and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation.[107]
The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly. Within the limitations imposed by small screens and other limited facilities of such pocket-sized devices, the services of the Internet, including email and the web, may be available. Service providers may restrict the services offered and mobile data charges may be significantly higher than other access methods.
Educational material at all levels from pre-school to post-doctoral is available from websites. Examples range from CBeebies, through school and high-school revision guides and virtual universities, to access to top-end scholarly literature through the likes of Google Scholar. For distance education, help with homework and other assignments, self-guided learning, whiling away spare time, or just looking up more detail on an interesting fact, it has never been easier for people to access educational information at any level from anywhere. The Internet in general and the World Wide Web in particular are important enablers of both formal and informal education. Further, the Internet allows universities, in particular, researchers from the social and behavioral sciences, to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results.[111]
The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software. Not only can a group cheaply communicate and share ideas but the wide reach of the Internet allows such groups more easily to form. An example of this is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice). Internet chat, whether using an IRC chat room, an instant messaging system, or a social networking website, allows colleagues to stay in touch in a very convenient way while working at their computers during the day. Messages can be exchanged even more quickly and conveniently than via email. These systems may allow files to be exchanged, drawings and images to be shared, or voice and video contact between team members.
Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work. Business and project teams can share calendars as well as documents and other information. Such collaboration occurs in a wide variety of areas including scientific research, software development, conference planning, political activism and creative writing. Social and political collaboration is also becoming more widespread as both Internet access and computer literacy spread.
The Internet allows computer users to remotely access other computers and information stores easily from any access point. Access may be with computer security, i.e. authentication and encryption technologies, depending on the requirements. This is encouraging new ways of working from home, collaboration and information sharing in many industries. An accountant sitting at home can audit the books of a company based in another country, on a server situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts could have been created by home-working bookkeepers, in other remote locations, based on information emailed to them from offices all over the world. Some of these things were possible before the widespread use of the Internet, but the cost of private leased lines would have made many of them infeasible in practice. An office worker away from their desk, perhaps on the other side of the world on a business trip or a holiday, can access their emails, access their data using cloud computing, or open a remote desktop session into their office PC using a secure virtual private network (VPN) connection on the Internet. This can give the worker complete access to all of their normal files and data, including email and other applications, while away from the office. It has been referred to among system administrators as the Virtual Private Nightmare,[112] because it extends the secure perimeter of a corporate network into remote locations and its employees' homes.
Many people use the World Wide Web to access news, weather and sports reports, to plan and book vacations and to pursue their personal interests. People use chat, messaging and email to make and stay in touch with friends worldwide, sometimes in the same way as some previously had pen pals. Social networking websites such as Facebook, Twitter, and Myspace have created new ways to socialize and interact. Users of these sites are able to add a wide variety of information to pages, to pursue common interests, and to connect with others. It is also possible to find existing acquaintances, to allow communication among existing groups of people. Sites like LinkedIn foster commercial and business connections. YouTube and Flickr specialize in users' videos and photographs. While social networking sites were initially for individuals only, today they are widely used by businesses and other organizations to promote their brands, to market to their customers and to encourage posts to "go viral". "Black hat" social media techniques are also employed by some organizations, such as spam accounts and astroturfing.
A risk for both individuals and organizations writing posts (especially public posts) on social networking websites, is that especially foolish or controversial posts occasionally lead to an unexpected and possibly large-scale backlash on social media from other Internet users. This is also a risk in relation to controversial offline behavior, if it is widely made known. The nature of this backlash can range widely from counter-arguments and public mockery, through insults and hate speech, to, in extreme cases, rape and death threats. The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment in response to posts they have made on social media, and Twitter in particular has been criticised in the past for not doing enough to aid victims of online abuse.[113]
For organizations, such a backlash can cause overall brand damage, especially if reported by the media. However, this is not always the case, as any brand damage in the eyes of people with an opposing opinion to that presented by the organization could sometimes be outweighed by strengthening the brand in the eyes of others. Furthermore, if an organization or individual gives in to demands that others perceive as wrong-headed, that can then provoke a counter-backlash.
Some websites, such as Reddit, have rules forbidding the posting of personal information of individuals (also known as doxxing), due to concerns about such postings leading to mobs of large numbers of Internet users directing harassment at the specific individuals thereby identified. In particular, the Reddit rule forbidding the posting of personal information is widely understood to imply that all identifying photos and names must be censored in Facebook screenshots posted to Reddit. However, the interpretation of this rule in relation to public Twitter posts is less clear, and in any case, like-minded people online have many other ways they can use to direct each other's attention to public social media posts they disagree with.
Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Children may also encounter material which they may find upsetting, or material which their parents consider to be not age-appropriate. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering, and/or supervise their children's online activities, in an attempt to protect their children from inappropriate material on the Internet. The most popular social networking websites, such as Facebook and Twitter, commonly forbid users under the age of 13. However, these policies are typically trivial to circumvent by registering an account with a false birth date, and a significant number of children aged under 13 join such sites anyway. Social networking sites for younger children, which claim to provide better levels of protection for children, also exist.[114]
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic.[citation needed] Many Internet forums have sections devoted to games and funny videos.[citation needed] The Internet pornography and online gambling industries have taken advantage of the World Wide Web. Although many governments have attempted to restrict both industries' use of the Internet, in general, this has failed to stop their widespread popularity.[115]
Another area of leisure activity on the Internet is multiplayer gaming.[116] This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer.[117] Non-subscribers were limited to certain types of game play or certain games. Many people use the Internet to access and download music, movies and other works for their enjoyment and relaxation. Free and fee-based services exist for all of these activities, using centralized servers and distributed peer-to-peer technologies. Some of these sources exercise more care with respect to the original artists' copyrights than others.
Internet usage has been correlated to users' loneliness.[118] Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.
A 2017 book claimed that the Internet consolidates most aspects of human endeavor into singular arenas of which all of humanity are potential members and competitors, with fundamentally negative impacts on mental health as a result. While successes in each field of activity are pervasively visible and trumpeted, they are reserved for an extremely thin sliver of the world's most exceptional, leaving everyone else behind. Whereas, before the Internet, expectations of success in any field were supported by reasonable probabilities of achievement at the village, suburb, city or even state level, the same expectations in the Internet world are virtually certain to bring disappointment today: there is always someone else, somewhere on the planet, who can do better and take the now one-and-only top spot.[119]
Cybersectarianism is a new organizational form which involves: "highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in the collective study via email, on-line chat rooms, and web-based message boards."[120] In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.
Cyberslacking can become a drain on corporate resources; the average UK employee spent 57 minutes a day surfing the Web while at work, according to a 2003 study by Peninsula Business Services.[121] Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity.[122]
Electronic business (e-business) encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion for 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales.[123]
While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide.[124] Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses resulting in increases in income inequality.[125][126][127]
Author Andrew Keen, a long-time critic of the social transformations caused by the Internet, has focused on the economic effects of consolidation from Internet businesses. Keen cites a 2013 Institute for Local Self-Reliance report saying brick-and-mortar retailers employ 47 people for every $10 million in sales while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, transportation network company Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people.[128]
Telecommuting is the performance of work within a traditional worker and employer relationship, facilitated by tools such as groupware, virtual private networks, conference calling, videoconferencing, and voice over IP (VoIP), so that work may be performed from any location, most conveniently the worker's home. It can be efficient and useful for companies as it allows workers to communicate over long distances, saving significant amounts of travel time and cost. As broadband Internet connections become commonplace, more workers have adequate bandwidth at home to use these tools to link their home to their corporate intranet and internal communication networks.
Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries.[129] In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work.[130] The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park.[131] The English Wikipedia has the largest user base among wikis on the World Wide Web[132] and ranks in the top 10 among all Web sites in terms of traffic.[133]
The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, giving rise to Internet activism, most notably practiced by rebels in the Arab Spring.[134][135] The New York Times suggested that social media websites, such as Facebook and Twitter, helped people organize the political revolutions in Egypt, by helping activists organize protests, communicate grievances, and disseminate information.[136]
Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies.[citation needed]
The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. Kiva raises funds for local intermediary microfinance organizations which post stories and updates on behalf of the borrowers. Lenders can contribute as little as $25 to loans of their choice, and receive their money back as borrowers repay. Kiva falls short of being a pure peer-to-peer charity, in that loans are disbursed before being funded by lenders and borrowers do not communicate with lenders themselves.[137][138]
Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information.
Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users. Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibilities of cyber warfare using similar methods on a large scale.[citation needed]
The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet.[139] In the United States for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies.[140][141][142] Packet capture is the monitoring of data traffic on a computer network. Computers communicate over the Internet by breaking up messages (emails, images, videos, web pages, files, etc.) into small chunks called "packets", which are routed through a network of computers, until they reach their destination, where they are assembled back into a complete "message" again. A packet capture appliance intercepts these packets as they are traveling through the network, in order to examine their contents using other programs. A packet capture is an information gathering tool, but not an analysis tool. That is, it gathers "messages" but does not analyze them or figure out what they mean. Other programs are needed to perform traffic analysis and sift through intercepted data looking for important/useful information. Under the Communications Assistance For Law Enforcement Act all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and voice over Internet protocol (VoIP) traffic.[143]
The large amount of data gathered from packet capturing requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access of certain types of web sites, or communicating via email or chat with certain parties.[144] Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data.[145] Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia.[146]
Some governments, such as those of Burma, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.[152]
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret.[153] Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet, but do not mandate filter software. Many free or commercially available software programs, called content-control software are available to users to block offensive websites on individual computers or networks, in order to limit access by children to pornographic material or depiction of violence.
As the Internet is a heterogeneous network, the physical characteristics, including for example the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization.[154]
The volume of Internet traffic is difficult to measure, because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.
An Internet blackout or outage can be caused by local signalling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to a small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia.[155] Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93%[156] of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests.[157]
Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB.[158] The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis.[158]
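To illustrate how wide that range is, a trivial calculation applies the lowest and highest figures reported in that survey to a single gigabyte of transferred data.

    low_kwh_per_gb = 0.0064     # lowest published estimate cited above
    high_kwh_per_gb = 136.0     # highest published estimate cited above

    print(low_kwh_per_gb * 1.0)                 # 0.0064 kWh for 1 GB
    print(high_kwh_per_gb * 1.0)                # 136.0 kWh for 1 GB
    print(high_kwh_per_gb / low_kwh_per_gb)     # 21250.0, the factor-of-20,000 spread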
In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic.[159][160] According to a non-peer reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.[161] The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emission per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.[162]
en/2753.html.txt
ADDED
The small intestine or small bowel is an organ in the gastrointestinal tract where most of the end absorption of nutrients and minerals from food takes place. It lies between the stomach and large intestine, and receives bile and pancreatic juice through the pancreatic duct to aid in digestion.
The small intestine has three distinct regions – the duodenum, jejunum, and ileum. The duodenum, the shortest, is where preparation for absorption through small finger-like protrusions called villi begins.[2] The jejunum is specialized for the absorption through its lining by enterocytes of small nutrient particles which have been previously digested by enzymes in the duodenum. The main function of the ileum is to absorb vitamin B12, bile salts, and whatever products of digestion were not absorbed by the jejunum.
The length of the small intestine can vary greatly, from as short as 3.00 m (9.84 ft) to as long as 10.49 m (34.4 ft), depending in part on the measuring technique used.[3] The typical length in a living person is 3 m to 5 m.[4][5] The length depends both on how tall the person is and how the length is measured.[3]
It is approximately 1.5 cm in diameter in newborns after 35 weeks of gestational age,[7] and 2.5–3 cm (1 inch) in diameter in adults. On abdominal X-rays, the small intestine is considered to be abnormally dilated when the diameter exceeds 3 cm.[8][9] On CT scans, a diameter of over 2.5 cm is considered abnormally dilated.[8][10] The surface area of the human small intestinal mucosa, due to enlargement caused by folds, villi and microvilli, averages 30 square meters.[11]
The small intestine is divided into three structural parts.
The jejunum and ileum are suspended in the abdominal cavity by mesentery. The mesentery is part of the peritoneum. Arteries, veins, lymph vessels and nerves travel within the mesentery.[13]
The small intestine receives a blood supply from the celiac trunk and the superior mesenteric artery. These are both branches of the aorta. The duodenum receives blood from the coeliac trunk via the superior pancreaticoduodenal artery and from the superior mesenteric artery via the inferior pancreaticoduodenal artery. These two arteries both have anterior and posterior branches that meet in the midline and anastomose. The jejunum and ileum receive blood from the superior mesenteric artery.[14] Branches of the superior mesenteric artery form a series of arches within the mesentery known as arterial arcades, which may be several layers deep. Straight blood vessels known as vasa recta travel from the arcades closest to the ileum and jejunum to the organs themselves.[14]
The three sections of the small intestine look similar to each other at a microscopic level, but there are some important differences. The parts of the intestine are as follows:
About 20,000 protein coding genes are expressed in human cells and 70% of these genes are expressed in the normal duodenum.[15][16] Some 300 of these genes are more specifically expressed in the duodenum with very few genes expressed only in the small intestine. The corresponding specific proteins are expressed in glandular cells of the mucosa, such as fatty acid binding protein FABP6. Most of the more specifically expressed genes in the small intestine are also expressed in the duodenum, for example FABP2 and the DEFA6 protein expressed in secretory granules of Paneth cells.[17]
The small intestine develops from the midgut of the primitive gut tube.[18] By the fifth week of embryological life, the ileum begins to grow longer at a very fast rate, forming a U-shaped fold called the primary intestinal loop. The loop grows so fast in length that it outgrows the abdomen and protrudes through the umbilicus. By week 10, the loop retracts back into the abdomen. Between weeks six and ten the small intestine rotates anticlockwise, as viewed from the front of the embryo. It rotates a further 180 degrees after it has moved back into the abdomen. This process creates the twisted shape of the large intestine.[18]
Third state of the development of the intestinal canal and peritoneum, seen from in front (diagrammatic). The mode of preparation is the same as in Fig 400
Second stage of development of the intestinal canal and peritoneum, seen from in front (diagrammatic). The liver has been removed and the two layers of the ventral mesogastrium (lesser omentum) have been cut. The vessels are represented in black and the peritoneum in the reddish tint.
First stage of the development of the intestinal canal and the peritoneum, seen from the side (diagrammatic). From colon 1 the ascending and transverse colon will be formed and from colon 2 the descending and sigmoid colons and the rectum.
Food from the stomach is allowed into the duodenum through the pylorus by a muscle called the pyloric sphincter.
The small intestine is where most chemical digestion takes place. Many of the digestive enzymes that act in the small intestine are secreted by the pancreas and liver and enter the small intestine via the pancreatic duct. Pancreatic enzymes and bile from the gallbladder enter the small intestine in response to the hormone cholecystokinin, which is produced in the small intestine in response to the presence of nutrients. Secretin, another hormone produced in the small intestine, causes additional effects on the pancreas, where it promotes the release of bicarbonate into the duodenum in order to neutralize the potentially harmful acid coming from the stomach.
The three major classes of nutrients that undergo digestion are proteins, lipids (fats) and carbohydrates:
Digested food is now able to pass into the blood vessels in the wall of the intestine through either diffusion or active transport. The small intestine is the site where most of the nutrients from ingested food are absorbed. The inner wall, or mucosa, of the small intestine, is lined with simple columnar epithelial tissue. Structurally, the mucosa is covered in wrinkles or folds called plicae circulares, which are considered permanent features in the wall of the organ. They are distinct from rugae which are considered non-permanent or temporary allowing for distention and contraction. From the plicae circulares project microscopic finger-like pieces of tissue called villi (Latin for "shaggy hair"). The individual epithelial cells also have finger-like projections known as microvilli. The functions of the plicae circulares, the villi, and the microvilli are to increase the amount of surface area available for the absorption of nutrients, and to limit the loss of said nutrients to intestinal fauna.
Each villus has a network of capillaries and fine lymphatic vessels called lacteals close to its surface. The epithelial cells of the villi transport nutrients from the lumen of the intestine into these capillaries (amino acids and carbohydrates) and lacteals (lipids). The absorbed substances are transported via the blood vessels to different organs of the body where they are used to build complex substances such as the proteins required by our body. The material that remains undigested and unabsorbed passes into the large intestine.
Absorption of the majority of nutrients takes place in the jejunum, with the following notable exceptions:
The small intestine supports the body's immune system.[20] The presence of gut flora appears to contribute positively to the host's immune system.
Peyer's patches, located within the ileum of the small intestine, are an important part of the digestive tract's local immune system. They are part of the lymphatic system, and provide a site for antigens from potentially harmful bacteria or other microorganisms in the digestive tract to be sampled, and subsequently presented to the immune system.[21]
The small intestine is a complex organ, and as such, there are a very large number of possible conditions that may affect the function of the small bowel. A few of them are listed below, some of which are common, with up to 10% of people being affected at some time in their lives, while others are vanishingly rare.
The small intestine is found in all tetrapods and also in teleosts, although its form and length vary enormously between species. In teleosts, it is relatively short, typically around one and a half times the length of the fish's body. It commonly has a number of pyloric caeca, small pouch-like structures along its length that help to increase the overall surface area of the organ for digesting food. There is no ileocaecal valve in teleosts, with the boundary between the small intestine and the rectum being marked only by the end of the digestive epithelium.[22]
In tetrapods, the ileocaecal valve is always present, opening into the colon. The length of the small intestine is typically longer in tetrapods than in teleosts, but is especially so in herbivores, as well as in mammals and birds, which have a higher metabolic rate than amphibians or reptiles. The lining of the small intestine includes microscopic folds to increase its surface area in all vertebrates, but only in mammals do these develop into true villi.[22]
The boundaries between the duodenum, jejunum, and ileum are somewhat vague even in humans, and such distinctions are either ignored when discussing the anatomy of other animals, or are essentially arbitrary.[22]
There is no small intestine as such in non-teleost fish, such as sharks, sturgeons, and lungfish. Instead, the digestive part of the gut forms a spiral intestine, connecting the stomach to the rectum. In this type of gut, the intestine itself is relatively straight but has a long fold running along the inner surface in a spiral fashion, sometimes for dozens of turns. This valve greatly increases both the surface area and the effective length of the intestine. The lining of the spiral intestine is similar to that of the small intestine in teleosts and non-mammalian tetrapods.[22]
In lampreys, the spiral valve is extremely small, possibly because their diet requires little digestion. Hagfish have no spiral valve at all, with digestion occurring for almost the entire length of the intestine, which is not subdivided into different regions.[22]
In traditional Chinese medicine, the small intestine is a yang organ.[23]
Small intestine in situ, greater omentum folded upwards.