When does the newest season of Vikings start?
Vikings (TV series)

Vikings is a historical drama television series written and created by Michael Hirst for the History channel. Filmed in Ireland, it premiered on March 3, 2013 in Canada. Vikings is inspired by the sagas of Viking Ragnar Lothbrok, one of the best-known legendary Norse heroes and notorious as the scourge of England and France. The show portrays Ragnar as a farmer who rises to fame through successful raids into England and eventually becomes a Scandinavian king, with the support of his family and fellow warriors: his brother Rollo, his son Björn Ironside, and his wives, the shieldmaiden Lagertha and the princess Aslaug. On March 17, 2016, History renewed Vikings for a fifth season of 20 episodes, which premiered on November 29, 2017. On September 12, 2017, ahead of its fifth-season premiere, the series was renewed for a sixth season, which will consist of 20 episodes.

The series is inspired by the tales of the Norsemen of early medieval Scandinavia. It broadly follows the exploits of the legendary Viking chieftain Ragnar Lothbrok and his crew, family and descendants, as notably laid down in the 13th-century sagas Ragnars saga Loðbrókar and Ragnarssona þáttr, as well as in Saxo Grammaticus's 12th-century work Gesta Danorum. Norse legendary sagas were partially fictional tales based in the Norse oral tradition, written down about 200 to 400 years after the events they describe. Further inspiration is taken from historical sources of the period, such as records of the Viking raid on Lindisfarne depicted in the second episode, or Ahmad ibn Fadlan's 10th-century account of the Volga Vikings. The series begins at the start of the Viking Age, marked by the Lindisfarne raid in 793.

An Irish-Canadian co-production, Vikings was developed and produced by Octagon Films and Take 5 Productions. Michael Hirst, Morgan O'Sullivan, John Weber, Sherry Marsh, Alan Gasmer, James Flynn and Sheila Hockin are credited as executive producers. The first season's budget was reported as US$40 million. The series began filming in July 2012 at Ashford Studios, a newly built facility in Ireland, chosen as a location for its tax advantages. On August 16, 2012, longship scenes were filmed at Luggala, as well as on the Poulaphouca Reservoir in the Wicklow Mountains. Seventy percent of the first season was filmed outdoors, and some additional background shots were done in western Norway. Johan Renck, Ciarán Donnelly and Ken Girotti each directed three episodes. The production team included cinematographer John Bartley, costume designer Joan Bergin, production designer Tom Conroy, composer Trevor Morris, and the Irish choir Crux Vocal Ensemble, directed by Paul McGough.

On April 5, 2013, History renewed Vikings for a ten-episode second season. Jeff Woolnough and Kari Skogland joined Ken Girotti and Ciarán Donnelly as directors of the second season. Two new series regulars were announced on June 11, 2013: Alexander Ludwig, portraying the teenage Björn, and Linus Roache, playing King Ecbert of Wessex. The second season undergoes a jump in time, aging the young Björn (Nathan O'Toole) into an older swordsman portrayed by Ludwig. The older Björn has not seen his father, Ragnar, for "a long period of time". Lagertha remarries to a powerful jarl, a stepfather who provides harsh guidance to Björn. Edvin Endre, son of Swedish actress Lena Endre, and Anna Åström signed up for roles in the second season. Endre had the role of Erlendur, one of King Horik's sons.
Morgan O'Sullivan, Sheila Hockin, Sherry Marsh, Alan Gasmer, James Flynn, John Weber, and Michael Hirst are credited as executive producers. This season was produced by Steve Wakefield and Keith Thompson, with Bill Goddard and Séamus McInerney as co-producers. The production team for this season includes casting directors Frank and Nuala Moiselle, costume designer Joan Bergin, visual effects supervisors Julian Parry and Dominic Remane, stunt action designers Franklin Henson and Richard Ryan, composer Trevor Morris, production designer Mark Geraghty, editors Aaron Marshall (for the first, third, fifth, seventh and ninth episodes) and Tad Seaborn (for the second, fourth, sixth, eighth and tenth episodes), and cinematographer PJ Dillon. Norwegian music group Wardruna provided much of the background music for the series; Wardruna's founder Einar Kvitrafn Selvik also appeared as an actor in the show during the third season, playing a shaman.

Michael Hirst announced plans for the fourth season before the third season had begun airing. The fourth season began production around the Dublin area in April 2015. Finnish actors Peter Franzén and Jasper Pääkkönen, as well as Canadian actress Dianne Doan, joined the cast of the fourth season. Franzén played Norwegian King Harald Finehair, a potential rival to Ragnar. Pääkkönen was cast as Halfdan the Black, Finehair's brother. Doan portrays Yidu, a Chinese character who has a major role in the first half of the fourth season.

At the same time that the series was renewed for a fifth season, it was announced that Irish actor Jonathan Rhys Meyers would be joining the cast as Heahmund, a "warrior bishop". Vikings creator Michael Hirst explained: "I was looking at the history books, and I came across these warrior bishops. The antecedents of the Knights Templar: these are people who were absolutely religious, yet they put on armor and they fought. Don't let their priestly status fool you, either. They were crazy! They believed totally in Christianity and the message, and yet, on the battlefield, they were totally berserk." Former WWE star Adam Copeland was cast in a recurring role for the fifth season as Kjettil Flatnose, a violent and bold warrior chosen by Floki to lead an expedition to Iceland to set up a colony. Irish actor Darren Cahill will play the role of Aethelred in the fifth season. Nigerian actor Stanley Amuzie told local media he had landed a small role in the fifth season. The fifth season will also include Irish actor, musician and real-life police detective Kieran O'Reilly, who will play the role of "White Hair". In April 2017 it was announced that Danish actor Erik Madsen will join the cast for the fifth season as King Hemmig; he spent several months of 2016 on the set of The Last Kingdom, portraying a Viking. Russian actor Danila Kozlovsky is set to join the series for the sixth season as Oleg of Novgorod, the 10th-century Varangian (east European Viking) ruler of the Rus' people. Coincidentally, Kozlovsky headlined the big-budget 2016 Russian feature Viking, playing one of Oleg's successors, Vladimir the Great, Prince of Novgorod.

Vikings premiered on March 3, 2013 in Canada and the United States. Vikings was renewed for a fourth season in March 2015 with an extended order of 20 episodes, which premiered on February 18, 2016. On March 17, 2016, History renewed Vikings for a fifth season of 20 episodes, which premiered on November 29, 2017.
On September 12, 2017, ahead of its fifth-season premiere, the series was renewed for a sixth season of 20 episodes. In the UK, Vikings premiered on May 24, 2013, where it was exclusively available on the streaming video-on-demand service LoveFilm. The second season premiered on March 24, 2015. The third season began airing on February 20, 2015 on Amazon Video. In Australia, the series premiered on August 8, 2013 on SBS One. It was later moved to FX, which debuted the second season on February 4, 2015. Season three of Vikings began broadcasting in Australia on SBS One on March 19, 2015, and season four on February 24, 2016.

The first episode received favourable reviews, with an average rating of 71% according to Metacritic. Alan Sepinwall of HitFix praised the casting, notably of Fimmel as Ragnar, and observed that Vikings "isn't complicated. It... relies on the inherent appeal of the era and these characters to drive the story." Nancy DeWolf Smith of The Wall Street Journal noted the "natural and authentic" setting and costumes, and appreciated that Vikings was (unlike, e.g., Spartacus) not a celebration of sex and violence, but "a study of character, stamina, power and... of social, emotional and even intellectual awakening". Hank Stuever, writing for The Washington Post, said that the "compelling and robust new drama series... delivers all the expected gore and blood spatter", but that it successfully adapted the skills of cable television drama, with the care taken in acting, writing and sense of scope reminiscent of Rome, Sons of Anarchy and Game of Thrones. He also suggested that the way the series emphasized "a core pride and nobility in this tribe of thugs" reflected "just another iteration of Tony Soprano". Neil Genzlinger, in The New York Times, praised the "arresting" cinematography and the actors' performances, notably Fimmel's, and favorably contrasted Vikings with Game of Thrones and Spartacus for its absence of gratuitous nudity. In Time, James Poniewozik noted that the relatively simple generational conflict underlying Vikings "doesn't nearly have the narrative ambition of a Game of Thrones or the political subtleties of a Rome", nor those series' skill with dialogue, but that it held up pretty well compared to the "tabloid history" of The Tudors and The Borgias. He concluded that "Vikings' larger story arc is really more about historical forces" than about its not very complex characters. Clark Collis of Entertainment Weekly appreciated the performances, but considered Vikings to be "kind of a mess", lacking the intrigue of The Tudors and Game of Thrones. Brian Lowry criticized the series in Variety as an "unrelenting cheese-fest" and a "more simpleminded version of Game of Thrones", but considered that it had "a level of atmosphere and momentum that makes it work as a mild diversion". In the San Francisco Chronicle, David Wiegand was disappointed by the series's "glacial pace" and lack of action, as well as its "flabby direction and a gassy script", while appreciating the performances and characters. The second season received a Metacritic rating of 77% and a Rotten Tomatoes rating of 92% based on twelve professional critic reviews.

According to Nielsen, the series premiere drew six million viewers in the U.S., topping all broadcast networks among viewers aged 18 to 49. An earlier claim of over eighteen million viewers was later retracted by the channel with an apology.
In Canada, the premiere had 1.1 million viewers, and the first season averaged 942,000 viewers.

Some critics have pointed out historical inaccuracies in the series's depiction of Viking society. Lars Walker, in the magazine The American Spectator, criticized its portrayal of early Viking Age government (represented by Earl Haraldson) as autocratic rather than essentially democratic. Joel Robert Thompson criticized the depiction of the Scandinavians' supposed ignorance of the existence of Britain and Ireland, and of the death penalty rather than outlawry (skoggangr) as their most serious punishment. Monty Dobson, a historian at Central Michigan University, criticised the depiction of Viking Age clothing, but went on to say that fictional shows like Vikings could still be a useful teaching tool. The Norwegian newspaper Aftenposten reported that the series incorrectly depicted the temple at Uppsala as a stave church in the mountains, whereas the historical temple was situated on flat land and stave churches were a hallmark of later Christian architecture. On the other hand, the temple as depicted does have similarities with reconstructions of the Uppåkra hof.

Many characters are based on (or inspired by) real people from history or legend, and the major events portrayed are broadly drawn from history. However, events from over a hundred years have been condensed, so that people who could never have met are shown as of similar age, with the historical events amended for dramatic effect. For example, season one leads up to the attack on Lindisfarne Abbey of 793 (before the real Rollo was born), but in season three the same characters, at roughly the same ages, participate in the Siege of Paris of 845. By this time, Ecbert had been dead for over forty years, and King Alfred the Great was already king, yet he is still portrayed as a child in season four. Rollo is portrayed having his followers killed and fighting his fellow Vikings, whereas in history they were granted what became Normandy and continued to co-operate with their Norse kinsmen. Furthermore, most of the principal characters are portrayed as being from Norway, while according to primary sources they would most likely have been Danes.

Little is known about Viking religious practice, and so its depiction is largely creative. When Katheryn Winnick was asked why she licked the seer's hand, she answered: "It wasn't originally in the script and we just wanted to come up with something unique and different." Regarding the historical differences and accuracy issues of the show, showrunner Michael Hirst said: "I especially had to take liberties with Vikings because no one knows for sure what happened in the Dark Ages... we want people to watch it. A historical account of the Vikings would reach hundreds, occasionally thousands, of people. Here we've got to reach millions."

Zenescope partnered with the History Channel to create a free Vikings comic book based on the series. It was first distributed at Comic-Con 2013 and by comiXology in February 2014. The comic was written by Michael Hirst, features interior artwork by Dennis Calero (X-Men Noir), and is set before the events of season one. In addition to featuring Ragnar and Rollo battling alongside their father, the comic depicts the brothers' first encounter with Lagertha.
List the properties of dental materials and how they affect their application.
Dental material

This page is about types of dental restorative materials. For dental fillings, see dental restoration.

Dental materials are specially fabricated materials designed for use in dentistry. There are many different types of dental material, and their characteristics vary according to their intended purpose. Examples include temporary dressings, dental restorations (fillings, crowns, bridges), endodontic materials (used in root canal therapy), impression materials, prosthetic materials (dentures), dental implants, and many others.

A temporary dressing is a dental filling which is not intended to last in the long term. Temporary dressings are interim materials which may have therapeutic properties. A common use occurs when root canal therapy is carried out over more than one appointment: between each visit, the pulp canal system must be protected from contamination from the oral cavity, and a temporary filling is placed in the access cavity.

Dental cements are used most often to bond indirect restorations such as crowns to the natural tooth surface.

Dental impressions are negative imprints of teeth and oral soft tissues from which a positive representation can be cast. They are used in prosthodontics (to make dentures), orthodontics, restorative dentistry, dental implantology, and oral and maxillofacial surgery. Impression materials are designed to be liquid or semi-solid when first mixed, then to set hard in a few minutes, leaving imprints of oral structures. Common dental impression materials include sodium alginate, polyether, and silicones; historically, plaster of Paris, zinc oxide eugenol, and agar have been used.

Dental restorative materials are used to replace tooth structure lost usually to dental caries (dental cavities), but also to tooth wear and dental trauma. On other occasions, such materials may be used for cosmetic purposes to alter the appearance of an individual's teeth. There are many challenges for the physical properties of the ideal dental restorative material, and the goal of research and development in restorative materials is to develop the ideal restorative material: one identical to natural tooth structure in strength, adherence, and appearance. The properties of an ideal filling material can be divided into four categories: physical properties, biocompatibility, aesthetics, and application.

Direct restorations are ones which are placed directly into a cavity on a tooth and shaped to fit. The chemistry of the setting reaction for direct restorative materials is designed to be more biologically compatible: heat and byproducts generated cannot be allowed to damage the tooth or patient, since the reaction needs to take place while in contact with the tooth during restoration. This ultimately limits the strength of the materials, since harder materials need more energy to manipulate.

The type of filling (restorative) material used has a minor effect on how long fillings last. The majority of clinical studies indicate that annual failure rates (AFRs) are between 1% and 3% for tooth-coloured fillings on back teeth; root-canal (endodontically) treated teeth have AFRs between 2% and 12%. The main reasons for failure are cavities that occur around the filling and fracture of the natural tooth; these are related to personal cavity risk and factors like tooth grinding (bruxism).
Amalgam is a metallic filling material composed of a mixture of mercury (from 43% to 54%) and a powdered alloy made mostly of silver, tin, zinc and copper, commonly called the amalgam alloy. Amalgam does not adhere to tooth structure without the aid of cements or the use of techniques which lock in the filling, using the same principles as a dovetail joint. Amalgam is still used extensively in many parts of the world because of its cost effectiveness, superior strength and longevity. However, the metallic colour is not aesthetically pleasing, and tooth-coloured alternatives are continually emerging with increasingly comparable properties. Due to the known toxicity of the element mercury, there is some controversy about the use of amalgams; the Swedish government banned the use of mercury amalgam in June 2009. Research has shown that, while amalgam use is controversial and may increase mercury levels in the human body, these levels are below the safety thresholds established by the WHO and the EPA. However, certain subpopulations, due to inherited genetic variabilities, exhibit sensitivity to mercury levels lower than these thresholds. These individuals may experience adverse effects from amalgam restorations, including a range of neural defects, mainly caused by impaired neurotransmitter processing.

Composite resin fillings (also called white fillings) are a mixture of powdered glass and plastic resin, and can be made to resemble the appearance of the natural tooth. Although cosmetically superior to amalgam fillings, composite resin fillings are usually more expensive. Bis-GMA based resins contain bisphenol A, a known endocrine-disrupting chemical, and may contribute to the development of breast cancer. However, it has been demonstrated that the extremely low levels of bis-GMA released by composite restorations do not cause a significant increase in markers of renal injury when compared to amalgam restorations; that is, there is no added risk of renal or endocrine injury in choosing composite restorations over amalgams. PEX-based materials do not contain bisphenol A and are the least cytotoxic materials available.

Most modern composite resins are light-cured photopolymers, meaning that they harden with light exposure. They can then be polished to achieve maximum aesthetic results. Composite resins experience a very small amount of shrinkage upon curing, causing the material to pull away from the walls of the cavity preparation. This makes the tooth slightly more vulnerable to microleakage and recurrent decay; microleakage can be minimized or eliminated by utilizing proper handling techniques and appropriate material selection. In some circumstances, less tooth structure can be removed compared to preparation for other dental materials such as amalgam and many of the indirect methods of restoration, because composite resins bind to enamel (and to dentin, although not as well) via a micromechanical bond. As conservation of tooth structure is key to tooth preservation, many dentists prefer placing materials like composite instead of amalgam fillings whenever possible. Generally, composite fillings are used to fill carious lesions in highly visible areas (such as the central incisors or any other teeth visible when smiling) or when conservation of tooth structure is a top priority. The bond of composite resin to tooth is especially affected by moisture contamination and the cleanliness of the prepared surface.
Other materials can be selected when restoring teeth where moisture control techniques are not effective. The concept of using "smart" materials in dentistry has attracted a lot of attention in recent years.

Conventional glass-ionomer (GI) cements have a large number of applications in dentistry. They are biocompatible with the dental pulp to some extent; clinically, this material was initially used as a biomaterial to replace lost osseous tissue in the human body. These fillings are a mixture of glass and an organic acid. Although they are tooth-coloured, glass ionomers vary in translucency, and while they can be used to achieve an aesthetic result, their aesthetic potential does not measure up to that provided by composite resins. The cavity preparation for a glass-ionomer filling is the same as for a composite resin; however, one of the advantages of GI compared to other restorative materials is that it can be placed in cavities without any need for bonding agents. Conventional glass ionomers are chemically set via an acid-base reaction; upon mixing of the material components, no light cure is needed to harden the material once placed in the cavity preparation. After the initial set, glass ionomers still need time to fully set and harden.

Advantages:
1. Glass ionomer can be placed in cavities without any need for bonding agents.
2. Glass ionomers are not subject to shrinkage and microleakage, as the bonding mechanism is an acid-base reaction and not a polymerization reaction. GICs do not undergo great dimensional changes in a moist environment in response to heat or cold, and heating appears to result only in water movement within the structure of the material. They exhibit shrinkage in a dry environment at temperatures higher than 50 °C, which is similar to the behaviour of dentin.
3. Glass ionomers contain and release fluoride, which is important for preventing carious lesions. Furthermore, as glass ionomers release their fluoride, they can be "recharged" by the use of fluoride-containing toothpaste; hence, they can be used as a treatment modality for patients at high risk for caries. Newer formulations of glass ionomers that contain light-cured resins can achieve a greater aesthetic result, but do not release fluoride as well as conventional glass ionomers.

Disadvantages: the most important disadvantage is a lack of adequate strength and toughness. In an attempt to improve the mechanical properties of conventional GI, resin-modified ionomers have been marketed. GICs are usually weak after setting and are not stable in water; however, they become stronger as the reactions progress and become more resistant to moisture.

New generations aim at tissue regeneration: biomaterial in the form of a powder or solution is used to induce local tissue repair. These bioactive materials release chemical agents in the form of dissolved ions or growth factors, such as bone morphogenic protein, which stimulate and activate cells.

Glass ionomers are about as expensive as composite resin, and the fillings do not wear as well as composite resin fillings. Still, they are generally considered good materials to use for root caries and for sealants.

Resin-modified glass ionomers combine glass-ionomer and composite resin technology: these fillings are a mixture of glass, an organic acid, and resin polymer that hardens when light-cured (the light activates a catalyst in the cement that causes it to cure in seconds). The cost is similar to composite resin.
This material holds up better than glass ionomer, but not as well as composite resin, and is not recommended for biting surfaces of adult teeth. Generally, resin-modified glass-ionomer cements can achieve a better aesthetic result than conventional glass ionomers, but not as good as pure composites. It has its own setting reaction.

Compomers are another combination of composite resin and glass-ionomer technology, with the focus lying towards the composite resin end of the spectrum. Although compomers have better mechanical and aesthetic properties than RMGIC, they have worse wear properties and require bonding materials. Although compomers release fluoride, they do so at such a low level that it is not deemed effective, and unlike glass ionomer and RMGIC, they cannot act as a fluoride reservoir.

Indirect restorations are ones where the tooth or teeth to receive the restoration are first prepared, then a dental impression is taken and sent to a dental technician who fabricates the restoration according to the dentist's prescription.

Porcelain fillings are hard, but can cause wear on opposing teeth. They are brittle and are not always recommended for molar fillings.

Tooth-coloured dental composite materials are used either as direct fillings or as the construction material of an indirect inlay, and are usually cured by light.

Restorations made of nano-ceramic particles embedded in a resin matrix are less brittle, and therefore less likely to crack or chip, than all-ceramic indirect fillings; they absorb the shock of chewing more like natural teeth (and more like resin or gold fillings) than ceramic fillings do, while at the same time being more resistant to wear than all-resin indirect fillings. These are available in blocks for use with CAD/CAM systems.

Gold fillings have excellent durability, wear well, and do not cause excessive wear to the opposing teeth, but they do conduct heat and cold, which can be irritating. There are two categories of gold fillings: cast gold fillings (gold inlays and onlays) made with 14 or 18 kt gold, and gold foil made with pure 24 kt gold that is burnished layer by layer. For years they have been considered the benchmark of restorative dental materials. Recent advances in dental porcelains and consumer focus on aesthetic results have caused demand for gold fillings to drop in favour of advanced composites and porcelain veneers and crowns. Gold fillings are sometimes quite expensive, yet they last a very long time, which can mean gold restorations are less costly and painful in the long run; it is not uncommon for a gold crown to last 30 years.

Lead fillings were used in the 18th century, but became unpopular in the 19th century because of their softness; this was before lead poisoning was understood. According to American Civil War-era dental handbooks from the mid-19th century, metallic fillings had been used since the early 19th century, made of lead, gold, tin, platinum, silver, aluminum, or amalgam. A pellet was rolled slightly larger than the cavity, condensed into place with instruments, then shaped and polished in the patient's mouth. The filling was usually left "high", with final condensation ("tamping down") occurring while the patient chewed food. Gold foil was the most popular and preferred filling material during the Civil War. Tin and amalgam were also popular due to lower cost, but were held in lower regard. One survey of dental practices in the mid-19th century catalogued the dental fillings found in the remains of seven Confederate soldiers from the U.S.
Civil War.

Fillings have a finite lifespan: an average of 12.8 years for amalgam and 7.8 years for composite resins. However, the lifespan of a restoration also depends upon how the patient takes care of the damaged tooth.

The Nordic Institute of Dental Materials (NIOM) evaluates dental materials in the Nordic countries. This research and testing institution is accredited to perform several test procedures for dental products. In Europe, dental materials are classified as medical devices according to the Medical Devices Directive. In the USA, the U.S. Food and Drug Administration is the regulatory body for dental products.
Where did the last name Butler come from?
Butler (surname)

Butler is a surname that has been associated with many different places and people. The surname "Butler" originated in the 12th century with Theobald le Botiller FitzWalter (Lord of Preston). Lord FitzWalter accompanied King John to Ireland to help secure Norman areas. When men whom Walter led killed Dermot MacCarthy, prince of Desmond, Walter was granted the land holdings of Baggotrath, County Dublin, and the Stein River lands around what is now Trinity College Dublin. He was also given an important fief, on which Walter both founded an abbey and established his Irish seat. Upon returning to England, King John endowed Walter with the hereditary office of "Butler to the Lord of Ireland" in 1177; some evidence indicates that he was also dubbed "Butler of Ireland". As such, he had the right to pour the King's wine. The title can be likened to that of a governor by today's standards. His son, Theobalde Butler, was the first to hold the name and pass it to his descendants. Walter's grandson was James Butler, 1st Earl of Ormonde. Kilkenny Castle was the main seat of the Butler family.
When did Windows XP reach end of life?
Windows XP

Windows XP (codenamed Whistler) is a personal computer operating system that was produced by Microsoft as part of the Windows NT family of operating systems. It was released to manufacturing on August 24, 2001, and broadly released for retail sale on October 25, 2001.

Development of Windows XP began in the late 1990s as "Neptune", an operating system built on the Windows NT kernel which was intended specifically for mainstream consumer use. An updated version of Windows 2000 was also originally planned for the business market; however, in January 2000, both projects were shelved in favor of a single OS codenamed "Whistler", which would serve as a single OS platform for both consumer and business markets. Windows XP was a major advance from the MS-DOS based versions of Windows in security, stability and efficiency due to its use of Windows NT underpinnings. It introduced a significantly redesigned graphical user interface and was the first version of Windows to use product activation in an effort to reduce copyright infringement.

Upon its release, Windows XP received generally positive reviews, with critics noting increased performance and overall stability (especially in comparison to Windows ME), a more intuitive user interface, improved hardware support, and expanded multimedia capabilities. Despite some initial concerns over the new licensing model and product activation system, Windows XP eventually proved to be popular and widely used. It is estimated that at least 400 million copies of Windows XP were sold globally within its first five years of availability, and at least one billion copies were sold by April 2014. Sales of Windows XP licenses to original equipment manufacturers (OEMs) ceased on June 30, 2008, but continued for netbooks until October 2010. Extended support for Windows XP ended on April 8, 2014, after which the operating system ceased receiving further support or security updates for most users. As of January 2018, Windows XP held 3.36% of the Windows market share, which works out to 2.8% of the overall desktop operating system market (implying Windows as a whole accounted for roughly 83% of desktops, since 0.028 / 0.0336 ≈ 0.83). Other estimates are as high as 4.05%. Its market share is no longer in double digits in the vast majority of countries.

In the late 1990s, initial development of what would become Windows XP was focused on two individual products: "Odyssey", which was reportedly intended to succeed the future Windows 2000, and "Neptune", which was reportedly a consumer-oriented operating system using the Windows NT architecture, succeeding the MS-DOS-based Windows 98. Based on the NT 5.0 kernel in Windows 2000, Neptune primarily focused on offering a simplified, task-based interface based on a concept known internally as "activity centers", originally planned to be implemented in Windows 98. A number of activity centers were planned, serving as hubs for email communications, playing music, managing or viewing photos, searching the Internet, and viewing recently used content. A single build of Neptune, 5111 (which still carried the branding of Windows 2000 in places), revealed early work on the activity center concept, with an updated user account interface and graphical login screen, and common functions (such as recently used programs) accessible from a customizable "Starting Places" page (which could be used as either a separate window or a full-screen desktop replacement). However, the project proved to be too ambitious.
Microsoft discussed a plan to delay Neptune in favor of an interim OS known as "Asteroid", which would have been an update to Windows 2000 (Windows NT 5.0) with a consumer-oriented version. At the WinHEC conference on April 7, 1999, Steve Ballmer announced an updated version of Windows 98 known as Windows Millennium, breaking a promise made by Microsoft CEO Bill Gates in 1998 that Windows 98 would be the final consumer-oriented version of Windows to use the MS-DOS architecture. Concepts introduced by Neptune would influence future Windows products: in Windows ME, the activity center concept was used for System Restore and Help and Support Center (which both combined Win32 code with an interface rendered using Internet Explorer's layout engine); the hub concept would be expanded on Windows Phone; and Windows 8 would similarly use a simplified user interface running atop the existing Windows shell.

In January 2000, shortly prior to the official release of Windows 2000, technology writer Paul Thurrott reported that Microsoft had shelved both Neptune and Odyssey in favor of a new product codenamed Whistler, after Whistler, British Columbia, as many Microsoft employees skied at the Whistler-Blackcomb ski resort. The goal of Whistler was to unify the consumer- and business-oriented Windows lines under a single Windows NT platform: Thurrott stated that Neptune had become "a black hole when all the features that were cut from (Windows ME) were simply re-tagged as Neptune features. And since Neptune and Odyssey would be based on the same code-base anyway, it made sense to combine them into a single project".

At WinHEC in April 2000, Microsoft officially announced and presented an early build of Whistler, focusing on a new modularized architecture, built-in CD burning, fast user switching, and updated versions of the digital media features introduced by ME. Windows general manager Carl Stork stated that Whistler would be released in both consumer- and business-oriented versions built atop the same architecture, and that there were plans to update the Windows interface to make it "warmer and more friendly". In June 2000, Microsoft began the technical beta testing process; Whistler was expected to be made available in "Personal", "Professional", "Server", "Advanced Server", and "Datacenter" editions. At PDC on July 13, 2000, Microsoft announced that Whistler would be released during the second half of 2001, and also released the first preview build, 2250. The build notably introduced an early version of a new visual-styles system along with an interim theme known as "Professional" (later renamed "Watercolor"), and contained a hidden "Start page" (a full-screen page similar to Neptune's "Starting Places") and a hidden, early version of a two-column Start menu design. Build 2257 featured further refinements to the Watercolor theme, along with the official introduction of the two-column Start menu and the addition of an early version of Windows Firewall. Microsoft released Whistler Beta 1, build 2296, on October 31, 2000. In January 2001, build 2410 introduced Internet Explorer 6.0 (previously branded as 5.6) and the Microsoft Product Activation system.
Bill Gates dedicated a portion of his keynote at the Consumer Electronics Show to discussing Whistler, explaining that the OS would bring "(the) dependability of our highest end corporate desktop, and total dependability, to the home", and also "move it in the direction of making it very consumer-oriented. Making it very friendly for the home user to use." Alongside Beta 1, it was also announced that Microsoft would prioritize the release of the consumer-oriented versions of Whistler over the server-oriented versions in order to gauge reaction, but that both would be generally available during the second half of 2001 (Whistler Server would ultimately be delayed into 2003). Builds 2416 and 2419 added the Files and Settings Transfer Wizard and began to introduce elements of the operating system's final appearance (such as its near-final Windows Setup design, and the addition of new default wallpapers, such as Bliss).

On February 5, 2001, Microsoft officially announced that Whistler would be known as Windows XP, where XP stands for "experience". As a complement, the next version of Microsoft Office was also announced as Office XP. Microsoft stated that the name "(symbolizes) the rich and extended user experiences Windows and Office can offer by embracing Web services that span a broad range of devices". In a press event at the EMP Museum in Seattle on February 13, 2001, Microsoft publicly unveiled the new "Luna" user interface of Windows XP.

Windows XP Beta 2, build 2462a (which, among other improvements, introduced the Luna style), was launched at WinHEC on March 25, 2001. In April 2001, Microsoft controversially announced that XP would not integrate support for Bluetooth or USB 2.0 at launch, requiring the use of third-party drivers. Critics felt that in the case of the latter, Microsoft's decision had delivered a potential blow to the adoption of USB 2.0, as XP was to provide support for the competing, Apple-developed FireWire standard instead. A representative stated that the company had "(recognized) the importance of USB 2.0 as a newly emerging standard and is evaluating the best mechanism for making it available to Windows XP users after the initial release". The builds prior to and following Release Candidate 1 (build 2505, released on July 5, 2001) and Release Candidate 2 (build 2526, released on July 27, 2001) focused on fixing bugs, acknowledging user feedback, and other final tweaks before the RTM build.

In June 2001, Microsoft indicated that it was planning, in conjunction with Intel and other PC makers, to spend at least US$1 billion on marketing and promoting Windows XP. The theme of the campaign, "Yes You Can", was designed to emphasize the platform's overall capabilities. Microsoft had originally planned to use the slogan "Prepare to Fly", but it was replaced due to sensitivity issues in the wake of the September 11 attacks. A prominent aspect of Microsoft's campaign was a U.S. television commercial featuring Madonna's song "Ray of Light"; a Microsoft spokesperson stated that the song was chosen due to its optimistic tone and how it complemented the overall theme of the campaign.

On August 24, 2001, Windows XP build 2600 was released to manufacturing. During a ceremonial media event at the Microsoft Redmond campus, copies of the RTM build were given to representatives of several major PC manufacturers in briefcases, who then flew off in decorated helicopters.
While PC manufacturers would be able to release devices running XP beginning on September 24, 2001, XP was expected to reach general retail availability on October 25, 2001. On the same day, Microsoft also announced the final retail pricing of XP's two main editions, "Home" and "Professional".

While retaining some similarities to previous versions, Windows XP's interface was overhauled with a new visual appearance, with an increased use of alpha-compositing effects, drop shadows, and "visual styles", which completely change the appearance of the operating system. The number of effects enabled is determined by the operating system based on the computer's processing power, and they can be enabled or disabled on a case-by-case basis. XP also added ClearType, a new subpixel rendering system designed to improve the appearance of fonts on liquid-crystal displays. A new set of system icons was also introduced. The default wallpaper, Bliss, is a photo of a landscape in the Napa Valley outside Napa, California, with rolling green hills and a blue sky with stratocumulus and cirrus clouds.

The Start menu received its first major overhaul in XP, switching to a two-column layout with the ability to list, pin, and display frequently used applications, recently opened documents, and the traditional cascading "All Programs" menu. The taskbar can now group windows opened by a single application into one taskbar button, with a popup menu listing the individual windows. The notification area also hides "inactive" icons by default, and the taskbar can be "locked" to prevent accidental moving or other changes. A "common tasks" list was added, and Windows Explorer's sidebar was updated to use a new task-based design with lists of common actions; the tasks displayed are contextually relevant to the type of content in a folder (e.g., a folder containing music offers options to play all the files in the folder or burn them to a CD).

Fast user switching allows additional users to log into a Windows XP machine without existing users having to close their programs and log out. Although only one user at a time can use the console (i.e., monitor, keyboard and mouse), previous users can resume their sessions once they regain control of the console. Windows XP uses the prefetcher to improve startup and application launch times. It also became possible to revert the installation of an updated device driver, should the updated driver produce undesirable results. Numerous improvements were also made to system administration tools such as Windows Installer, Windows Script Host, Disk Defragmenter, Windows Task Manager, Group Policy, CHKDSK, NTBackup, Microsoft Management Console, Shadow Copy, Registry Editor, Sysprep and WMI.

Windows XP introduced a copy-protection system known as Windows Product Activation. Each Windows license must be tied to a unique ID generated using information from the computer hardware, transmitted either via the internet or a telephone hotline. If Windows is not activated within 30 days of installation, the OS will cease to function until it is activated. Windows also periodically verifies the hardware to check for changes; if significant hardware changes are detected, the activation is voided, and Windows must be re-activated (a simplified sketch of this logic appears at the end of this passage).

Windows XP was originally bundled with Internet Explorer 6, Outlook Express 6, Windows Messenger, and MSN Explorer.
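The activation flow described above can be made concrete with a minimal, hypothetical Python sketch. This is not Microsoft's actual algorithm: the hardware fields, the hashing scheme, and the threshold for a "significant" change are all illustrative assumptions; only the 30-day grace period and the void-on-hardware-change behavior are taken from the text.

```python
import hashlib
from datetime import date, timedelta

GRACE_PERIOD = timedelta(days=30)  # from the text: 30 days to activate

def hardware_id(components: dict[str, str]) -> str:
    """Derive an installation ID from hardware details (illustrative only)."""
    fingerprint = "|".join(f"{k}={v}" for k, v in sorted(components.items()))
    return hashlib.sha256(fingerprint.encode()).hexdigest()

def hardware_changed_significantly(old: dict[str, str], new: dict[str, str]) -> bool:
    """Treat two or more changed components as 'significant' (an assumed threshold)."""
    changed = sum(1 for key in old if new.get(key) != old[key])
    return changed >= 2

def activation_state(installed_on: date, activated: bool,
                     old_hw: dict[str, str], current_hw: dict[str, str]) -> str:
    """Report the machine's activation status under the rules described above."""
    if activated and hardware_changed_significantly(old_hw, current_hw):
        return "activation voided: re-activation required"
    if activated:
        return "activated"
    if date.today() - installed_on > GRACE_PERIOD:
        return "grace period expired: OS ceases to function until activated"
    return "running in grace period"

# Example usage with made-up hardware values:
hw = {"cpu": "x86 family 6", "disk": "WD400BB", "nic": "00:0c:29:aa:bb:cc"}
print(hardware_id(hw)[:16])
print(activation_state(date(2001, 10, 25), False, hw, hw))
```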
New networking features were also added, including Internet Connection Firewall, Internet Connection Sharing integration with UPnP, NAT traversal APIs, Quality of Service features, IPv6 and Teredo tunneling, Background Intelligent Transfer Service, extended fax features, network bridging, peer-to-peer networking, support for most DSL modems, IEEE 802.11 (Wi-Fi) connections with auto-configuration and roaming, TAPI 3.1, and networking over FireWire. Remote Assistance and Remote Desktop were also added, which allow users to connect to a computer running Windows XP from across a network or the Internet and access their applications, files, printers, and devices, or request help. Improvements were also made to IntelliMirror features such as Offline Files, roaming user profiles and folder redirection.

Some of the programs and features that were part of previous versions of Windows did not make it to Windows XP. CD Player, DVD Player, and Imaging for Windows were replaced with Windows Picture and Fax Viewer, Windows Media Player, and the Windows shell. NetBEUI and NetDDE were deprecated and are not installed by default, and the DLC and AppleTalk network protocols were removed. Plug-and-play-incompatible communication devices (like modems and network interface cards) are no longer supported. Service Pack 2 and Service Pack 3 also removed features from Windows XP, but to a less noticeable extent; for instance, Program Manager and support for TCP half-open connections were removed in Service Pack 2, and the Energy Star logo and the address bar on the taskbar were removed in Service Pack 3.

Windows XP was released in two major editions at launch: Home Edition and Professional Edition. Both editions were made available at retail as pre-loaded software on new computers and as boxed copies. Boxed copies were sold as "Upgrade" or "Full" licenses; the "Upgrade" versions were slightly cheaper but require an existing version of Windows to install, while the "Full" version can be installed on systems without an operating system or existing version of Windows. The two editions were aimed at different markets: Home Edition is explicitly intended for consumer use, and disables or removes certain advanced and enterprise-oriented features present in Professional, such as the ability to join a Windows domain, Internet Information Services, and the Multilingual User Interface. Windows 98 or ME can be upgraded to either version, but Windows NT 4.0 and Windows 2000 can only be upgraded to Professional. Windows' software license agreement for pre-loaded licenses allows the software to be "returned" to the OEM for a refund if the user does not wish to use it; despite the refusal of some manufacturers to honor this entitlement, it has been enforced by courts in some countries.

Two specialized variants of XP were introduced in 2002 for certain types of hardware, exclusively through OEM channels as pre-loaded software. Windows XP Media Center Edition was initially designed for high-end home theater PCs with TV tuners (marketed under the term "Media Center PC"), offering expanded multimedia functionality, an electronic program guide, and digital video recorder (DVR) support through the Windows Media Center application. Microsoft also unveiled Windows XP Tablet PC Edition, which contains additional pen-input features and is optimized for mobile devices meeting its Tablet PC specifications.
Two different 64-bit editions of XP were made available. The first, Windows XP 64-Bit Edition, was intended for IA-64 (Itanium) systems; as IA-64 usage declined on workstations in favor of AMD's x86-64 architecture (which was supported by the later Windows XP Professional x64 Edition), the Itanium version was discontinued in 2005.

Microsoft also targeted emerging markets with the 2004 introduction of Windows XP Starter Edition, a special variant of Home Edition intended for low-cost PCs. The OS is primarily aimed at first-time computer owners, containing heavy localization (including wallpapers and screen savers incorporating images of local landmarks) and a "My Support" area which contains video tutorials on basic computing tasks. It also removes certain "complex" features and does not allow users to run more than three applications at a time. After a pilot program in India and Thailand, Starter was released in other emerging markets throughout 2005. In 2006, Microsoft also unveiled the FlexGo initiative, which would also target emerging markets with subsidized PCs on a pre-paid, subscription basis.

As the result of unfair-competition lawsuits in Europe and South Korea, which both alleged that Microsoft had improperly leveraged its status in the PC market to favor its own bundled software, Microsoft was ordered to release special versions of XP in these markets that excluded certain applications. In March 2004, after the European Commission fined Microsoft €497 million (US$603 million), Microsoft was ordered to release "N" versions of XP that excluded Windows Media Player, encouraging users to pick and download their own media player software. As it was sold at the same price as the version with Windows Media Player included, certain OEMs (such as Dell, who offered it for a short period, along with Hewlett-Packard, Lenovo and Fujitsu Siemens) chose not to offer it. Consumer interest was minuscule, with roughly 1,500 units shipped to OEMs and no reported sales to consumers. In December 2005, the Korean Fair Trade Commission ordered Microsoft to make available editions of Windows XP and Windows Server 2003 that do not contain Windows Media Player or Windows Messenger. The "K" and "KN" editions of Windows XP were released in August 2006, are only available in English and Korean, and contain links to third-party instant messenger and media player software.

A service pack is a cumulative update package that is a superset of all updates, and even service packs, that have been released before it. Three service packs have been released for Windows XP. Service Pack 3 is slightly different, in that it needs at least Service Pack 1 to have been installed in order to update a live OS. However, Service Pack 3 can still be embedded into a Windows installation disc; SP1 is not reported as a prerequisite for doing so.

Service Pack 1 (SP1) for Windows XP was released on September 9, 2002. It contained over 300 minor, post-RTM bug fixes, along with all security patches released since the original release of XP. SP1 also added USB 2.0 support, the Microsoft Java Virtual Machine, .NET Framework support, and support for technologies used by the then-upcoming Media Center and Tablet PC editions of XP.
The most significant change in SP1 was the addition of Set Program Access and Defaults, a settings page which allows programs to be set as the default for certain types of activities (such as media players or web browsers) and allows access to bundled Microsoft programs (such as Internet Explorer or Windows Media Player) to be disabled. This feature was added to comply with the settlement of United States v. Microsoft Corp., which required Microsoft to offer OEMs the ability to bundle third-party competitors to the software it bundles with Windows (such as Internet Explorer and Windows Media Player), and to give them the same level of prominence as the programs normally bundled with the OS. On February 3, 2003, Microsoft released Service Pack 1a (SP1a); it is the same as SP1, except that the Microsoft Java Virtual Machine is removed.

Service Pack 2 (SP2) was released on August 25, 2004. SP2 added new functionality to Windows XP, such as WPA encryption compatibility and improved Wi-Fi support (with a wizard utility), a pop-up ad blocker for Internet Explorer 6, and partial Bluetooth support. Service Pack 2 also added new security enhancements (codenamed "Springboard"), which included a major revision to the included firewall (renamed Windows Firewall, and now enabled by default), and Data Execution Prevention, which gained hardware support via the NX bit and can stop some forms of buffer-overflow attacks. Raw socket support was removed (which supposedly limits the damage done by zombie machines), and the Windows Messenger service (which had been abused to cause pop-up advertisements to be displayed as system messages without a web browser or any additional software) became disabled by default. Additionally, security-related improvements were made to e-mail and web browsing. Service Pack 2 also added Security Center, an interface which provides a general overview of the system's security status, including the state of the firewall and automatic updates; third-party firewall and antivirus software can also be monitored from Security Center.

In August 2006, Microsoft released updated installation media for Windows XP and Windows Server 2003 SP2 (SP2b), in order to incorporate a patch requiring ActiveX controls in Internet Explorer to be manually activated before a user may interact with them. This was done so that the browser would not violate a patent owned by Eolas. Microsoft has since licensed the patent, and released a patch reverting the change in April 2008. In September 2007, another minor revision, known as SP2c, was released for XP Professional, extending the number of available product keys for the operating system to "support the continued availability of Windows XP Professional through the scheduled system builder channel end-of-life (EOL) date of January 31, 2009".

Windows XP Service Pack 3 (SP3) was released to manufacturing on April 21, 2008, and to the public via both the Microsoft Download Center and Windows Update on May 6, 2008. It began being automatically pushed out to Automatic Updates users on July 10, 2008. A feature-set overview detailing the new features available separately as stand-alone updates to Windows XP, as well as the features backported from Windows Vista, has been posted by Microsoft. A total of 1,174 fixes are included in SP3. Service Pack 3 can be installed on systems with Internet Explorer version 6, 7, or 8; Internet Explorer 7 is not included as part of SP3.
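As a toy model of the SP3 installation rules just described (and of the 64-bit exclusion noted below), here is a hedged Python sketch. The function and argument names are invented for illustration; the rules themselves (SP1 or later required to update a live OS, no prerequisite when slipstreaming into installation media, no SP3 for the 64-bit version) come from the text.

```python
# A toy model of the documented SP3 installation prerequisites.

def can_install_sp3(architecture: str, installed_sp: int, slipstreamed: bool) -> bool:
    """Return True if SP3 can be applied, per the rules described in the text."""
    if architecture == "x64":
        # SP3 was never released for the 64-bit version of Windows XP,
        # which is based on the Windows Server 2003 kernel.
        return False
    if slipstreamed:
        # Embedding SP3 into an installation disc has no service-pack prerequisite.
        return True
    # Updating a live OS requires at least Service Pack 1.
    return installed_sp >= 1

assert can_install_sp3("x86", installed_sp=0, slipstreamed=False) is False  # RTM, live update
assert can_install_sp3("x86", installed_sp=0, slipstreamed=True) is True    # slipstream disc
assert can_install_sp3("x86", installed_sp=2, slipstreamed=False) is True   # SP2 system
assert can_install_sp3("x64", installed_sp=2, slipstreamed=False) is False  # no x64 SP3
```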
Service Pack 3 is not available for the 64-bit version of Windows XP, which is based on the Windows Server 2003 kernel. Service Pack 3 also incorporated several previously released key updates for Windows XP which were not included up to SP2. Service Pack 3 contains updates to the operating-system components of Windows XP Media Center Edition (MCE) and Windows XP Tablet PC Edition, and security updates for .NET Framework version 1.0, which is included in these editions. However, it does not include update rollups for the Windows Media Center application in Windows XP MCE 2005, and it omits security updates for Windows Media Player 10, although the player is included in Windows XP MCE 2005. The Address Bar DeskBand on the taskbar is no longer included, due to antitrust-violation concerns.

The maximum amount of RAM that Windows XP can support varies depending on the product edition and the processor architecture. Windows XP Professional supports up to two physical processors (CPU sockets); Windows XP Home Edition is limited to one. Windows XP supports a greater number of logical processors. A logical processor is either (1) one of the two handlers of threads of instructions in one of the cores of a physical processor with support for hyper-threading present and enabled, or (2) one of the cores of one of the physical processors without hyper-threading enabled. For example, a system with two hyper-threaded dual-core processors exposes 2 × 2 × 2 = 8 logical processors. Windows XP 32-bit editions support up to 32 logical processors; 64-bit editions support up to 64 logical processors.

Support for Windows XP without a service pack ended on September 30, 2005. Windows XP Service Packs 1 and 1a were retired on October 10, 2006, and Windows XP Service Pack 2 reached end of support on July 13, 2010, almost six years after its general availability. The company stopped general licensing of Windows XP to OEMs and terminated retail sales of the operating system on June 30, 2008, 17 months after the release of Windows Vista. However, an exception was announced on April 3, 2008, for OEMs producing what it defined as "ultra low-cost personal computers", particularly netbooks, until one year after the availability of Windows 7, on October 22, 2010. Analysts felt that the move was primarily intended to compete against Linux-based netbooks, although Microsoft's Kevin Hutz stated that the decision was due to apparent market demand for low-end computers with Windows.

Variants of Windows XP for embedded systems have different support policies: Windows XP Embedded SP3 and Windows Embedded for Point of Service SP3 were supported until January and April 2016, respectively, while Windows Embedded Standard 2009 and Windows Embedded POSReady 2009 continue to receive extended support through January and April 2019, respectively.

On April 14, 2009, Windows XP exited mainstream support and entered the extended support phase. Microsoft continued to provide security updates every month for Windows XP; however, free technical support, warranty claims, and design changes were no longer offered. Extended support ended on April 8, 2014, over 12 years after the release of XP; normally Microsoft products have a support life cycle of only 10 years.
Beyond the final security updates released on April 8, no more security patches or support information are provided for XP free of charge; "critical patches '' will still be created, but made available only to customers subscribing to a paid "Custom Support '' plan. As Internet Explorer is a Windows component, all versions of it for Windows XP also became unsupported. In January 2014, it was estimated that more than 95 % of the 3 million automated teller machines in the world were still running Windows XP (which largely replaced IBM 's OS / 2 as the predominant operating system on ATMs); ATMs have an average lifecycle of between seven and ten years, but some have had lifecycles as long as 15. Plans were being made by several ATM vendors and their customers to migrate to Windows 7 - based systems over the course of 2014, while vendors also considered the possibility of using Linux - based platforms in the future to give them more flexibility for support lifecycles, and the ATM Industry Association (ATMIA) has since endorsed Windows 10 as a further replacement. However, ATMs typically run the embedded variant of Windows XP, which was supported through January 2016. As of May 2017, around 60 % of the 220,000 ATMs in India still ran Windows XP. As of January 2014, at least 49 % of all computers in China still ran XP. These holdouts have been influenced by several factors: prices of genuine copies of Windows in the country are high; Ni Guangnan of the Chinese Academy of Sciences warned that Windows 8 could expose users to surveillance by the United States government; and in May 2014 the Chinese government banned the purchase of Windows 8 products for government use, in protest of Microsoft 's inability to provide "guaranteed '' support. The government also had concerns that the impending end of support could affect its anti-piracy initiatives with Microsoft, as users would simply pirate newer versions rather than purchasing them legally. As such, government officials formally requested that Microsoft extend the support period for XP for these reasons. While Microsoft did not comply with their requests, a number of major Chinese software developers, such as Lenovo, Kingsoft and Tencent, announced that they would provide free support and resources for Chinese users migrating from XP. Several governments, in particular those of the Netherlands and the United Kingdom, elected to negotiate "Custom Support '' plans with Microsoft for their continued, internal use of Windows XP; the British government 's deal lasted for a year, cost £ 5.5 million, and also covered support for Office 2003 (which reached end - of - life the same day). On March 8, 2014, Microsoft deployed an update for XP that, on the 8th of each month, displays a pop - up notification to remind users about the end of support; however, these notifications may be disabled by the user. Microsoft also partnered with Laplink to provide a special "express '' version of its PCmover software to help users migrate files and settings from XP to a computer with a newer version of Windows.
Despite the approaching end of support, there were still notable holdouts that had not migrated past XP. Many users elected to remain on XP for several reasons: the poor reception of Windows Vista; declining sales of newer PCs with newer versions of Windows during the Great Recession; and the fact that deployments of new versions of Windows in enterprise environments require a large amount of planning, including testing applications for compatibility (especially those dependent on Internet Explorer 6, which is not compatible with newer versions of Windows). Major security software vendors (including Microsoft itself) planned to continue offering support and definitions for Windows XP past the end of support to varying extents, as did the developers of the Google Chrome, Mozilla Firefox, and Opera web browsers; despite these measures, critics argued that users should eventually migrate from XP to a supported platform. Microsoft continued to provide Security Essentials virus definitions and updates for its Malicious Software Removal Tool (MSRT) for XP until July 14, 2015. As the end of extended support approached, Microsoft began to increasingly urge XP customers to migrate to newer versions such as Windows 7 or 8 in the interest of security, suggesting that attackers could reverse engineer security patches for newer versions of Windows and use them to target equivalent vulnerabilities in XP. Windows XP is remotely exploitable through numerous security holes that were discovered after Microsoft stopped supporting it. Similarly, specialized devices that run XP, particularly medical devices, must have any revisions to their software -- even security updates for the underlying operating system -- approved by relevant regulators before they can be released. For the same reason, manufacturers of medical devices had historically refused to provide, or even allow the installation of, any Windows updates for these devices, leaving them open to security exploits and malware. Despite the end of support for Windows XP, Microsoft has released two emergency security patches for the operating system to patch major security vulnerabilities. On release, Windows XP received mostly positive reviews. CNET described the operating system as being "worth the hype '', considering the new interface to be "spiffier '' and more intuitive than previous versions, but feeling that it may "annoy '' experienced users with its "hand - holding ''. XP 's expanded multimedia support and CD burning functionality were also noted, along with its streamlined networking tools. The performance improvements of XP in comparison to 2000 and ME were also praised, along with its increased number of built - in device drivers in comparison to 2000. The software compatibility tools were also praised, although it was noted that some programs, particularly older MS - DOS software, may not work correctly on XP due to its differing architecture. CNET panned Windows XP 's new licensing model and product activation system, considering it to be a "slightly annoying roadblock '', but acknowledged Microsoft 's intent for the changes. PC Magazine provided similar praise, although noting that a number of its online features were designed to promote Microsoft - owned services, and that aside from quicker boot times, XP 's overall performance showed little difference over Windows 2000.
According to web analytics data generated by Net Applications, Windows XP was the most widely used operating system until August 2012, when Windows 7 overtook it. In January 2014, Net Applications reported a market share of 29.23 % of "desktop operating systems '' for XP (when XP was introduced, there was not a separate mobile category to track), while W3Schools reported a share of 11.0 %. According to web analytics data generated by StatOwl, Windows XP had a 27.82 % market share as of November 2012, having dropped to second place in October 2011. According to web analytics data generated by W3Schools, from September 2003 to July 2011, Windows XP was the most widely used operating system for accessing the w3schools website, which they claim is consistent with statistics from other websites. As of August 2015, Windows XP market share was at 3.6 %, after having peaked at 76.1 % in January 2007. As of November 2016, Windows XP desktop market share, according to web analysis (as a proxy for all use), showed significant variation in different parts of the world, but was on average 4.66 % (8.45 % according to NetMarketShare), ranked after three other versions of Windows by StatCounter 's numbers (2.31 % across all platforms); for example, in North America usage of Windows XP had dropped to 2.06 %. As of 2018, Windows XP market share has fallen below 4 % in most regions, while, years after discontinuation, it stubbornly retains a double - digit share in a few countries, such as China at 10.39 % and North Korea at 15.1 %.
who wrote the i had a dream speech
I Have a Dream - wikipedia "I Have a Dream '' is a public speech delivered by American civil rights activist Martin Luther King Jr. during the March on Washington for Jobs and Freedom on August 28, 1963, in which he called for an end to racism in the United States and for civil and economic rights. Delivered to over 250,000 civil rights supporters from the steps of the Lincoln Memorial in Washington, D.C., the speech was a defining moment of the Civil Rights Movement. Beginning with a reference to the Emancipation Proclamation, which freed millions of slaves in 1863, King observes that "one hundred years later, the Negro still is not free ''. Toward the end of the speech, King departed from his prepared text for a partly improvised peroration on the theme "I have a dream '', prompted by Mahalia Jackson 's cry: "Tell them about the dream, Martin! '' In this part of the speech, which most excited the listeners and has now become its most famous, King described his dreams of freedom and equality arising from a land of slavery and hatred. Jon Meacham writes that, "With a single phrase, Martin Luther King Jr. joined Jefferson and Lincoln in the ranks of men who 've shaped modern America ''. The speech was ranked the top American speech of the 20th century in a 1999 poll of scholars of public address. The March on Washington for Jobs and Freedom was partly intended to demonstrate mass support for the civil rights legislation proposed by President Kennedy in June. Martin Luther King and other leaders therefore agreed to keep their speeches calm and to avoid provoking the civil disobedience which had become the hallmark of the Civil Rights Movement. King originally designed his speech as a homage to Abraham Lincoln 's Gettysburg Address, timed to correspond with the centennial of the Emancipation Proclamation. King had been preaching about dreams since 1960, when he gave a speech to the National Association for the Advancement of Colored People (NAACP) called "The Negro and the American Dream ''. This speech discusses the gap between the American dream and reality, saying that overt white supremacists have violated the dream, and that "our federal government has also scarred the dream through its apathy and hypocrisy, its betrayal of the cause of justice ''. King suggests that "It may well be that the Negro is God 's instrument to save the soul of America. '' In 1961, he spoke of the Civil Rights Movement and student activists ' "dream '' of equality -- "the American Dream... a dream as yet unfulfilled '' -- in several national speeches and statements, and took "the dream '' as the centerpiece for these speeches. On November 27, 1962, King gave a speech at Booker T. Washington High School in Rocky Mount, North Carolina. That speech was longer than the version which he would eventually deliver from the Lincoln Memorial, and while parts of the text had been moved around, large portions were identical, including the "I have a dream '' refrain. After being rediscovered, the restored and digitized recording of the 1962 speech was presented to the public by the English department of North Carolina State University. Martin Luther King had also delivered a "dream '' speech in Detroit, in June 1963, when he marched on Woodward Avenue with Walter Reuther and the Reverend C.L. Franklin, and had rehearsed other parts. The March on Washington speech, known as the "I Have a Dream '' speech, has been shown to have had several versions, written at several different times.
It has no single draft version, but is an amalgamation of several drafts, and was originally called "Normalcy, Never Again ''. Little of this, and another "Normalcy Speech '', ended up in the final draft. A draft of "Normalcy, Never Again '' is housed in the Morehouse College Martin Luther King Jr. Collection of the Robert W. Woodruff Library, Atlanta University Center and Morehouse College. The focus on "I have a dream '' comes through the speech 's delivery. Toward the end of its delivery, noted African American gospel singer Mahalia Jackson shouted to King from the crowd, "Tell them about the dream, Martin. '' King departed from his prepared remarks and started "preaching '' improvisationally, punctuating his points with "I have a dream. '' The speech was drafted with the assistance of Stanley Levison and Clarence Benjamin Jones in Riverdale, New York City. Jones has said that "the logistical preparations for the march were so burdensome that the speech was not a priority for us '' and that, "on the evening of Tuesday, Aug. 27, (12 hours before the march) Martin still did n't know what he was going to say ''. Leading up to the speech 's rendition at the Great March on Washington, King had delivered its "I have a dream '' refrains in his speech before 25,000 people in Detroit 's Cobo Hall immediately after the 125,000 - strong Great Walk to Freedom in Detroit on June 23, 1963. After the Washington, D.C. march, a recording of King 's Cobo Hall speech was released by Detroit 's Gordy Records as an LP entitled "The Great March To Freedom ''. Widely hailed as a masterpiece of rhetoric, King 's speech invokes pivotal documents in American history, including the Declaration of Independence, the Emancipation Proclamation, and the United States Constitution. Early in his speech, King alludes to Abraham Lincoln 's Gettysburg Address by saying "Five score years ago... '' In reference to the abolition of slavery articulated in the Emancipation Proclamation, King says: "It came as a joyous daybreak to end the long night of their captivity. '' Anaphora (i.e., the repetition of a phrase at the beginning of sentences) is employed throughout the speech. Early in his speech, King urges his audience to seize the moment; "Now is the time '' is repeated three times in the sixth paragraph. The most widely cited example of anaphora is found in the often quoted phrase "I have a dream '', which is repeated eight times as King paints a picture of an integrated and unified America for his audience. Other occasions include "One hundred years later '', "We can never be satisfied '', "With this faith '', "Let freedom ring '', and "free at last ''. King was the sixteenth out of eighteen people to speak that day, according to the official program. The most quoted lines of the speech include "I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin, but by the content of their character. I have a dream today! '' According to U.S. Representative John Lewis, who also spoke that day as the president of the Student Nonviolent Coordinating Committee, "Dr. King had the power, the ability, and the capacity to transform those steps on the Lincoln Memorial into a monumental area that will forever be recognized. By speaking the way he did, he educated, he inspired, he informed not just the people there, but people throughout America and unborn generations.
'' The ideas in the speech reflect King 's social experiences of ethnocentric abuse, and the mistreatment and exploitation of blacks. The speech draws upon appeals to America 's myths as a nation founded to provide freedom and justice to all people, and then reinforces and transcends those secular mythologies by placing them within a spiritual context, arguing that racial justice is also in accord with God 's will. Thus, the rhetoric of the speech provides redemption to America for its racial sins. King describes the promises made by America as a "promissory note '' on which America has defaulted. He says that "America has given the Negro people a bad check '', but that "we 've come to cash this check '' by marching in Washington, D.C. King 's speech used words and ideas from his own speeches and other texts. For years, he had spoken about dreams, quoted from Samuel Francis Smith 's popular patriotic hymn "America '' ("My Country, ' Tis of Thee ''), and referred extensively to the Bible. The idea of constitutional rights as an "unfulfilled promise '' was suggested by Clarence Jones. The final passage from King 's speech closely resembles Archibald Carey Jr. 's address to the 1952 Republican National Convention: both speeches end with a recitation of the first verse of "America '', and the speeches share the name of one of several mountains from which both exhort "let freedom ring ''. King also is said to have used portions of Prathia Hall 's speech at the site of a burned - down African American church in Terrell County, Georgia, in September 1962, in which she used the repeated phrase "I have a dream ''. The church had burned down after it was used for voter registration meetings. The second stanza of the speech also alludes to Psalm 30: 5. Additionally, King quotes from Isaiah 40: 4 -- 5 ("I have a dream that every valley shall be exalted... '') and Amos 5: 24 ("But let justice roll down like water... ''). He also alludes to the opening lines of Shakespeare 's Richard III ("Now is the winter of our discontent / Made glorious summer... '') when he remarks that "this sweltering summer of the Negro 's legitimate discontent will not pass until there is an invigorating autumn... '' The "I Have a Dream '' speech can be dissected using three rhetorical lenses: voice merging, prophetic voice, and dynamic spectacle. Voice merging is the combining of one 's own voice with those of religious predecessors. Prophetic voice is using rhetoric to speak for a population. A dynamic spectacle has its origins in the Aristotelian definition as "a weak hybrid form of drama, a theatrical concoction that relied upon external factors (shock, sensation, and passionate release) such as televised rituals of conflict and social control. '' Voice merging is a common technique among African American preachers: it combines the voices of previous preachers and excerpts from scripture with the speaker 's own thoughts to create a distinctive voice. King uses voice merging in his peroration when he references the patriotic hymn "America ''. The rhetoric of King 's speech can be compared to the rhetoric of Old Testament prophets. During the speech, King speaks with urgency and a sense of crisis, giving his delivery a prophetic voice. The prophetic voice must "restore a sense of duty and virtue amidst the decay of venality. '' An evident example is when King declares that "now is the time to make justice a reality for all of God 's children.
'' Why King 's speech was powerful is debated, but essentially it came as many factors combined at a key cultural turning point. Executive speechwriter Anthony Trendl writes, "The right man delivered the right words to the right people in the right place at the right time. '' "Given the context of drama and tension in which it was situated '', King 's speech can be classified as a dynamic spectacle. A dynamic spectacle is dependent on the situation in which it is used; King 's speech can be considered one because it happened at the right time and place: during the Civil Rights Movement and the March on Washington. The speech was lauded in the days after the event, and was widely considered the high point of the March by contemporary observers. James Reston, writing for The New York Times, said that "Dr. King touched all the themes of the day, only better than anybody else. He was full of the symbolism of Lincoln and Gandhi, and the cadences of the Bible. He was both militant and sad, and he sent the crowd away feeling that the long journey had been worthwhile. '' Reston also noted that the event "was better covered by television and the press than any event here since President Kennedy 's inauguration '', and opined that "it will be a long time before (Washington) forgets the melodious and melancholy voice of the Rev. Dr. Martin Luther King Jr. crying out his dreams to the multitude. '' An article in The Boston Globe by Mary McGrory reported that King 's speech "caught the mood '' and "moved the crowd '' of the day "as no other '' speaker in the event. Marquis Childs of The Washington Post wrote that King 's speech "rose above mere oratory ''. An article in the Los Angeles Times commented that the "matchless eloquence '' displayed by King -- "a supreme orator '' of "a type so rare as almost to be forgotten in our age '' -- put to shame the advocates of segregation by inspiring the "conscience of America '' with the justice of the civil - rights cause. The Federal Bureau of Investigation (FBI), which viewed King and his allies for racial justice as subversive, also noticed the speech, prompting the organization to expand its COINTELPRO operation against the Southern Christian Leadership Conference (SCLC) and to target King specifically as a major enemy of the United States. Two days after King delivered "I Have a Dream '', Agent William C. Sullivan, the head of COINTELPRO, wrote a memo about King 's growing influence: In the light of King 's powerful demagogic speech yesterday he stands head and shoulders above all other Negro leaders put together when it comes to influencing great masses of Negroes. We must mark him now, if we have not done so before, as the most dangerous Negro of the future in this Nation from the standpoint of communism, the Negro and national security. The speech was a success for the Kennedy administration and for the liberal civil rights coalition that had planned it. It was considered a "triumph of managed protest '', and not one arrest relating to the demonstration occurred. Kennedy had watched King 's speech on television and been very impressed. Afterwards, March leaders accepted an invitation to the White House to meet with President Kennedy. Kennedy felt the March bolstered the chances for his civil rights bill. Meanwhile, some of the more radical Black leaders who were present condemned the speech (along with the rest of the march) as too compromising.
Malcolm X later wrote in his autobiography: "Who ever heard of angry revolutionaries swinging their bare feet together with their oppressor in lily pad pools, with gospels and guitars and ' I have a dream ' speeches? '' The March on Washington put pressure on the Kennedy administration to advance its civil rights legislation in Congress. The diaries of Arthur M. Schlesinger Jr., published posthumously in 2007, suggest that President Kennedy was concerned that if the march failed to attract large numbers of demonstrators, it might undermine his civil rights efforts. In the wake of the speech and march, King was named Man of the Year by TIME magazine for 1963, and in 1964, he became the youngest person ever awarded the Nobel Peace Prize. The full speech did not appear in writing until August 1983, some 15 years after King 's death, when a transcript was published in The Washington Post. In 1990, the Australian alternative comedy rock band Doug Anthony All Stars released an album called Icon; one song from Icon, "Shang - a-lang '', sampled the end of the speech. In 1992, the band Moodswings incorporated excerpts from Martin Luther King Jr. 's "I Have a Dream '' speech in their song "Spiritual High, Part III '' on the album Moodfood. In 2002, the Library of Congress honored the speech by adding it to the United States National Recording Registry. In 2003, the National Park Service dedicated an inscribed marble pedestal to commemorate the location of King 's speech at the Lincoln Memorial. The Martin Luther King Jr. Memorial was dedicated in 2011. The centerpiece for the memorial is based on a line from King 's "I Have A Dream '' speech: "Out of a mountain of despair, a stone of hope. '' A 30 - foot (9.1 m) high relief of King named the "Stone of Hope '' stands past two other pieces of granite that symbolize the "mountain of despair ''. On August 26, 2013, the UK 's BBC Radio 4 broadcast "God 's Trombone '', in which Gary Younge looked behind the scenes of the speech and explored "what made it both timely and timeless ''. On August 28, 2013, thousands gathered on the Mall in Washington, D.C., where King made his historic speech, to commemorate the 50th anniversary of the occasion. In attendance were former U.S. Presidents Bill Clinton and Jimmy Carter, and incumbent President Barack Obama, who addressed the crowd and spoke on the significance of the event. Many of King 's family were in attendance. On October 11, 2015, The Atlanta Journal - Constitution published an exclusive report about Stone Mountain officials considering installation of a new "Freedom Bell '' honoring King, citing the speech 's reference to the mountain: "Let freedom ring from Stone Mountain of Georgia. '' Design details and a timeline for its installation remain to be determined. The article mentioned that inspiration for the proposed monument came from a bell - ringing ceremony held in 2013 in celebration of the 50th anniversary of King 's speech. On April 20, 2016, Treasury Secretary Jacob Lew announced that the U.S. $5 bill, which has featured the Lincoln Memorial on its back, would undergo a redesign prior to 2020. Lew said that a portrait of Lincoln would remain on the front of the bill, but the back would be redesigned to depict various historical events that have occurred at the memorial, including an image from King 's speech. In October 2016, Science Friday included the speech in a segment on its crowd - sourced update to the Voyager Golden Record.
Because King 's speech was broadcast to a large radio and television audience, there was controversy about its copyright status. If the performance of the speech had constituted "general publication '', it would have entered the public domain due to King 's failure to register the speech with the Register of Copyrights; however, if the performance only constituted "limited publication '', King retained common law copyright. This led to a lawsuit, Estate of Martin Luther King, Jr., Inc. v. CBS, Inc., which established that the King estate does hold copyright over the speech and had standing to sue; the parties then settled. Unlicensed use of the speech or a part of it can still be lawful in some circumstances, especially in jurisdictions under doctrines such as fair use or fair dealing. Under the applicable copyright laws, the speech will remain under copyright in the United States until 70 years after King 's death, that is, until 2038. As King waved goodbye to the audience, he handed George Raveling the original typewritten "I Have a Dream '' speech. Raveling, an All - American Villanova Wildcats college basketball player, had volunteered as a security guard for the event and was on the podium with King at that moment. In 2013, Raveling still had custody of the original copy, for which he had been offered $3,000,000, but he has said he does not intend to sell it. Coordinates: 38 ° 53 ′ 21.48 '' N, 77 ° 3 ′ 0.40 '' W (38.8893000, - 77.0501111)
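As a quick check of the coordinate formats quoted above, here is a minimal sketch (in Python; not part of the original article) converting the degrees / minutes / seconds form to the decimal - degree form:

```python
# Minimal sketch: convert degrees/minutes/seconds (DMS) to decimal degrees,
# verifying the Lincoln Memorial coordinates quoted above.

def dms_to_decimal(degrees: int, minutes: int, seconds: float, west_or_south: bool = False) -> float:
    """Decimal degrees from DMS; negative for west longitudes / south latitudes."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if west_or_south else value

print(dms_to_decimal(38, 53, 21.48))                    # 38.8893 (N)
print(dms_to_decimal(77, 3, 0.40, west_or_south=True))  # -77.05011... (W)
```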
composition powers and functions of supreme court of india
Supreme Court of India - Wikipedia Sanctioned strength = 31 (30 judges + the Chief Justice of India) The Supreme Court of India is the highest judicial forum and final court of appeal under the Constitution of India, the highest constitutional court, with the power of constitutional review. Consisting of the Chief Justice of India and 30 sanctioned other judges, it has extensive powers in the form of original, appellate and advisory jurisdictions. As the final court of appeal of the country, it takes up appeals primarily against verdicts of the High Courts of various states of the Union and other courts and tribunals. It safeguards the fundamental rights of citizens and settles disputes between various governments in the country. As an advisory court, it hears matters which may specifically be referred to it under the Constitution by the President of India. It also may take cognisance of matters on its own (or ' suo motu '), without anyone drawing its attention to them. The law declared by the Supreme Court becomes binding on all courts within India. In 1861, the Indian High Courts Act 1861 was enacted to create High Courts for various provinces; it abolished the Supreme Courts at Calcutta, Madras and Bombay, as well as the Sadar Adalats in the Presidency towns, which had acted as the highest courts in their respective regions. These new High Courts had the distinction of being the highest courts for all cases till the creation of the Federal Court of India under the Government of India Act 1935. The Federal Court had jurisdiction to resolve disputes between provinces and federal states and to hear appeals against judgements of the High Courts. The Supreme Court of India came into being on 28 January 1950. It replaced both the Federal Court of India and the Judicial Committee of the Privy Council, which were then at the apex of the Indian court system. The first Chief Justice of India was Sir H.J. Kania. The Supreme Court initially had its seat at the Chamber of Princes in the Parliament building, where the previous Federal Court of India had sat from 1937 to 1950. In 1958, the Supreme Court moved to its present premises. Originally, the Constitution of India envisaged a Supreme Court with a Chief Justice and seven judges, leaving it to Parliament to increase this number. In its formative years, the Supreme Court met from 10 to 12 in the morning and then from 2 to 4 in the afternoon, for 28 days a month. The building is shaped to symbolize the scales of justice, with its centre - beam being the Central Wing of the building, comprising the Chief Justice 's court, the largest of the courtrooms, with two court halls on either side. The Right Wing of the structure has the bar - room, the offices of the Attorney General of India and other law officers, and the library of the court. The Left Wing has the offices of the court. In all, there are 15 courtrooms in the various wings of the building. The foundation stone of the Supreme Court 's building was laid on 29 October 1954 by Rajendra Prasad, the first President of India. The main block of the building has been built on a triangular plot of 17 acres and was designed in an Indo - British style by the chief architect Ganesh Bhikaji Deolalikar, the first Indian to head the Central Public Works Department. The Court moved into the building in 1958. In 1979, two new wings -- the East Wing and the West Wing -- were added to the complex. 1994 saw the last extension. On 20 February 1980, a black bronze sculpture of 210 - centimeter height was installed in the lawn of the Supreme Court.
It portrays Mother India in the form of the figure of a lady sheltering the young Republic of India, represented by the symbol of a child, who is upholding the laws of the land, symbolically shown in the form of an open book. On the book, a balance beam is shown, representing the dispensation of equal justice to all. The sculpture was made by the renowned artist Chintamoni Kar, and stands just behind the statue of Mahatma Gandhi. The design of the Court 's seal is reproduced from the wheel that appears on the abacus of the Sarnath Lion capital of Asoka, with 24 spokes. The inscription in Sanskrit, "yatodharmastato jayah '' (यतो धर्मस्ततो जयः), means "whence law (dharma), thence victory ''. It is also referred to as the wheel of righteousness, encompassing truth, goodness and equity. The registry of the Supreme Court is headed by the Secretary - General, who is assisted by 8 registrars and several additional and deputy registrars, with 1,770 employees in all (221 gazetted officers, 805 non-gazetted and 744 Class IV employees). Article 146 of the Constitution deals with the appointments of officers and servants of the Supreme Court registry. The Supreme Court Rules, 2013 entitle only those advocates who are registered with the Supreme Court, called ' Advocates - on - Record ', to appear, act and plead for a party in the court. Those advocates who are designated as ' Senior Advocates ' by the Supreme Court or any of the High Courts can appear for clients along with an Advocate - on - Record. Any other advocate can appear for a party along with or under instructions from an Advocate - on - Record. As originally enacted, the Constitution of India provided for a Supreme Court with a Chief Justice and 7 judges. In the early years, a full bench of the Supreme Court sat together to hear the cases presented before them. As the work of the Court increased and cases began to accumulate, Parliament increased the number of judges from the original 8 in 1950 to 10 in 1956, 13 in 1960, 17 in 1977, 26 in 1986 and 31 in 2008 (the current strength). As the number of judges has increased, they sit in smaller benches of two or three (referred to as a division bench), coming together in larger benches of five or more (referred to as a constitution bench) when required to settle fundamental questions of law. A bench may refer a case before it to a larger bench, should the need arise. A citizen of India who has been a judge of one or more high courts for at least five years, or an advocate of a high court for at least ten years, or who is, in the opinion of the President, a distinguished jurist, is eligible to be recommended for appointment as a judge of the Supreme Court. In practice, judges of the Supreme Court have been selected so far mostly from amongst judges of the high courts. Barely seven -- Justices S.M. Sikri, S. Chandra Roy, Kuldip Singh, Santosh Hegde, R.F. Nariman, U.U. Lalit and L. Nageshwara Rao -- have been appointed to the Supreme Court directly from the bar (i.e., from among practising advocates). The Supreme Court saw its first woman judge when Justice M. Fathima Beevi was sworn into office in 1989. The sixth and most recent woman judge in the court is Justice R. Banumathi. In 1968, Justice Mohammad Hidayatullah became the first Muslim Chief Justice of India. In 2000, Justice K.G. Balakrishnan became the first judge from the dalit community; in 2007 he also became the first dalit Chief Justice of India. In 2010, Justice S.H. Kapadia, coming from the Parsi minority community, became the Chief Justice of India. In 2017, Justice Jagdish Singh Khehar became the first Sikh Chief Justice of India. The Constitution seeks to ensure the independence of Supreme Court judges in various ways.
As per the Constitution, as held by the court in the Three Judges ' Cases (1982, 1993 and 1998), a judge is appointed to the Supreme Court by the President of India on the recommendation of the collegium -- a closed group comprising the Chief Justice of India, the four most senior judges of the court and the senior-most judge hailing from the high court of a prospective appointee. This has resulted in a Memorandum of Procedure being followed for the appointments. Judges used to be appointed by the President on the advice of the Union Cabinet. After 1993 (the Second Judges ' Case), no minister, nor even the executive collectively, can suggest any names to the President, who ultimately decides on appointing them from a list of names recommended only by the collegium of the judiciary. Simultaneously, as held in that judgment, the executive was given the power to reject a recommended name. However, according to some, the executive has not been diligent in using this power to reject the names of bad candidates recommended by the judiciary. The collegium system has come under a fair amount of criticism. In 2015, Parliament passed a law to replace the collegium with a National Judicial Appointments Commission (NJAC). This was struck down as unconstitutional by the Supreme Court in the Fourth Judges ' Case, as the new system would undermine the independence of the judiciary. Restoring the old collegium system, the court invited suggestions, even from the general public, on how to improve it, broadly along the lines of setting up eligibility criteria for appointments, a permanent secretariat to help the collegium sift through material on potential candidates, infusing more transparency into the selection process, grievance redressal, and any other suggestion not in these four categories, such as the transfer of judges. This resulted in the court asking the government and the collegium to finalize the Memorandum of Procedure incorporating the above. Once, in 2009, a recommendation for the appointment of a judge of a high court made by the collegium of that court came to be challenged in the Supreme Court. The court held that who could become a judge was a matter of fact, and any person had a right to question it; but who should become a judge was a matter of opinion and could not be questioned. As long as an effective consultation took place within a collegium in arriving at that opinion, the content or material placed before it to form the opinion could not be called for scrutiny in court. Supreme Court judges retire at the age of 65. However, there have been suggestions from judges of the Supreme Court of India to provide a fixed term for judges, including the Chief Justice of India. Article 125 of the Indian Constitution leaves it to the Indian Parliament to determine the salary, other allowances, leave of absence, pension, etc. of the Supreme Court judges. However, Parliament can not alter any of these privileges and rights to a judge 's disadvantage after his or her appointment. A judge receives ₹ 90,000 (US $1,400) per month, while the Chief Justice earns ₹ 100,000 (US $1,600). A judge of the Supreme Court can be removed under the Constitution only on grounds of proven misconduct or incapacity, by an order of the President of India, after a motion supported by a notice signed by at least 100 members of the Lok Sabha (House of the People) or 50 members of the Rajya Sabha (Council of the States) is passed by a two - thirds majority in each House of Parliament.
A person who has retired as a judge of the Supreme Court is debarred from practising in any court of law or before any other authority in India. Article 137 of the Constitution of India lays down provisions for the power of the Supreme Court to review its own judgements. As per this Article, subject to the provisions of any law made by Parliament or any rules made under Article 145, the Supreme Court shall have power to review any judgment pronounced or order made by it. Under Order XL of the Supreme Court Rules, which have been framed under its powers under Article 145 of the Constitution, the Supreme Court may review its judgment or order, but no application for review is to be entertained in a civil proceeding except on the grounds mentioned in Order XLVII, Rule 1 of the Code of Civil Procedure. Under Articles 129 and 142 of the Constitution, the Supreme Court has been vested with the power to punish anyone for contempt of any court in India, including itself. The Supreme Court performed an unprecedented action when it directed a sitting minister of the state of Maharashtra, Swaroop Singh Naik, to be jailed for one month on a charge of contempt of court on 12 May 2006. This was the first time that a serving minister was ever jailed. The Constitution of India, under Article 145, empowers the Supreme Court to frame its own rules for regulating the practice and procedure of the Court as and when required (with the approval of the President of India). Accordingly, the "Supreme Court Rules, 1950 '' were framed. The 1950 Rules were replaced by the Supreme Court Rules, 1966. In 2014, the Supreme Court notified the Supreme Court Rules, 2013, which replaced the 1966 Rules with effect from 19 August 2014. Supreme Court Reports is the official journal of reportable Supreme Court decisions. It is published under the authority of the Supreme Court of India by the Controller of Publications, Government of India, Delhi. In addition, there are many other reputed private journals that report Supreme Court decisions, among them SCR (The Supreme Court Reports), SCC (Supreme Court Cases), AIR (All India Reporter) and SCALE. Facilities available to litigants and visitors include legal aid, court - fee vendors, a first - aid post, a dental clinic, a physiotherapy unit, a pathology lab, a rail - reservation counter, a canteen, a post office, a branch and 3 ATMs of UCO Bank, and the Supreme Court Museum. After some of the courts overturned state laws for redistributing land from zamindar (landlord) estates on the ground that the laws violated the zamindars ' fundamental rights, the Parliament passed the 1st amendment to the Constitution in 1951, followed by the 4th amendment in 1955, to uphold its authority to redistribute land. The Supreme Court countered these amendments in 1967, when it ruled in Golaknath v. State of Punjab that the Parliament did not have the power to abrogate fundamental rights, including the provisions on private property. The 25th amendment to the Constitution in 1971 curtailed the right of a citizen to property as a fundamental right and gave authority to the government to infringe private property, which led to a furor amongst the zamindars. The independence of the judiciary was severely curtailed during the Indian Emergency (1975 -- 1977) of Indira Gandhi. The constitutional rights of imprisoned persons were restricted under preventive detention laws passed by the parliament. In the case of Additional District Magistrate of Jabalpur v.
Shiv Kant Shukla, popularly known as the Habeas Corpus case, a bench of the five senior-most judges of the Supreme Court ruled in favour of the state 's right to unrestricted powers of detention during the emergency. Justices A.N. Ray, P.N. Bhagwati, Y.V. Chandrachud and M.H. Beg formed the majority; the only dissenting opinion came from Justice H.R. Khanna. It is believed that, before delivering his dissenting opinion, Justice Khanna had mentioned to his sister: "I have prepared my judgment, which is going to cost me the Chief Justice - ship of India. '' In January 1977, Justice Khanna was superseded despite being the most senior judge at the time, and thereby the government broke the convention of appointing only the senior-most judge to the position of Chief Justice of India. Justice Khanna remains a legendary figure among the legal fraternity in India for this decision. The New York Times wrote of this opinion: "The submission of an independent judiciary to absolutist government is virtually the last step in the destruction of a democratic society; and the Indian Supreme Court 's decision appears close to utter surrender. '' During the emergency period, the government also passed the 39th amendment, which sought to limit judicial review of the election of the Prime Minister; only a body constituted by Parliament could review this election. Subsequently, the parliament, with most opposition members in jail during the emergency, passed the 42nd Amendment, which prevented any court from reviewing any amendment to the constitution, with the exception of procedural issues concerning ratification. A few years after the emergency, however, the Supreme Court rejected the absoluteness of the 42nd amendment and reaffirmed its power of judicial review in the Minerva Mills case (1980). After Indira Gandhi lost the elections in 1977, the new government of Morarji Desai, and especially law minister Shanti Bhushan (who had earlier argued for the detenues in the Habeas Corpus case), introduced a number of amendments making it more difficult to declare and sustain an emergency, and reinstated much of the power of the Supreme Court. It is said that the Basic Structure doctrine, created in Kesavananda Bharati v. State of Kerala, was strengthened in Indira Gandhi 's case and set in stone in Minerva Mills v. Union of India. The Supreme Court 's creative and expansive interpretations of Article 21 (Life and Personal Liberty), primarily after the Emergency period, have given rise to a new jurisprudence of public interest litigation that has vigorously promoted many important economic and social rights (constitutionally protected but not enforceable), including, but not restricted to, the rights to free education, livelihood, a clean environment, food and many others. Civil and political rights (traditionally protected in the Fundamental Rights chapter of the Indian Constitution) have also been expanded and more fiercely protected. These new interpretations have opened the avenue for litigation on a number of important issues. Among the important pronouncements of the Supreme Court post 2000 is the Coelho case, I.R. Coelho v. State of Tamil Nadu (judgment of 11 January 2007), in which a unanimous bench of 9 judges reaffirmed the basic structure doctrine. It held that if a constitutional amendment entails the violation of any fundamental rights which the Court regards as forming part of the basic structure of the Constitution, then the amendment can be struck down depending upon its impact and consequences.
The judgment clearly imposes further limitations on the constituent power of Parliament with respect to the principles underlying certain fundamental rights. The judgment in Coelho has in effect restored the decision in Golak Nath regarding the non-amendability of the Constitution on account of infraction of fundamental rights, contrary to the judgment in the Kesavananda Bharati case. Another important decision was that of the five - judge bench in Ashoka Kumara Thakur v. Union of India, in which the constitutional validity of the Central Educational Institutions (Reservations in Admissions) Act, 2006 was upheld, subject to the "creamy layer '' criteria. Importantly, the Court refused to follow the ' strict scrutiny ' standards of review followed by the United States Supreme Court, although it has applied strict scrutiny standards in Anuj Garg v. Hotel Association of India (2007). The Supreme Court declared the allotment of spectrum "unconstitutional and arbitrary '', and quashed all 122 licences issued in 2008 during the tenure of A. Raja (then minister for communications & IT), the main official accused in the 2G scam case. The government refused to disclose details of about 18 Indians holding accounts in LGT Bank, Liechtenstein, evoking a sharp response from a bench comprising Justices B. Sudershan Reddy and S.S. Nijjar; the government 's lack of enthusiasm led the court to order a Special Investigation Team (SIT) to probe the matter. The Supreme Court refused to stay the Andhra Pradesh High Court judgement quashing the 4.5 % sub-quota for minorities under the 27 % OBC reservation quota. A three - judge bench presided over by Chief Justice Altamas Kabir issued notice to the Centre and the Election Commission of India (EC) on a PIL filed by a group of NRIs seeking online / postal ballots for Indian citizens living abroad. In April 2014, Justice K.S. Radhakrishnan declared transgender people to be a ' third gender ' in Indian law, in a case brought by the National Legal Services Authority (Nalsa) against the Union of India and others. The ruling said: Seldom, our society realises or cares to realise the trauma, agony and pain which the members of Transgender community undergo, nor appreciates the innate feelings of the members of the Transgender community, especially of those whose mind and body disown their biological sex. Our society often ridicules and abuses the Transgender community and in public places like railway stations, bus stands, schools, workplaces, malls, theatres, hospitals, they are sidelined and treated as untouchables, forgetting the fact that the moral failure lies in the society 's unwillingness to contain or embrace different gender identities and expressions, a mindset which we have to change. Justice Radhakrishnan said that transgender people should be treated consistently with other minorities under the law, enabling them to access jobs, healthcare and education. He framed the issue as one of human rights, saying that, "These TGs, even though insignificant in numbers, are still human beings and therefore they have every right to enjoy their human rights '', concluding by declaring that: (1) Hijras, eunuchs, apart from binary gender, be treated as "third gender '' for the purpose of safeguarding their rights under Part III of our Constitution and the laws made by the Parliament and the State Legislature.
(2) Transgender persons ' right to decide their self - identified gender is also upheld and the Centre and State Governments are directed to grant legal recognition of their gender identity such as male, female or as third gender. B. Prabhakara Rao vs. State of A.P. involved a sudden reduction in the age of superannuation, from 58 years to 55, of over 35,000 public servants of the State Government, public sector undertakings, statutory bodies, educational institutions and the Tirupathi - Tirumalai Devasthanams (TTD). They lost the first round of litigation in the Supreme Court. Realising the mistake, the government brought fresh legislation restoring the original age of superannuation of 58 years, but provided that the benefit of the new legislation would not extend to those whose reduction in the age of superannuation had been upheld. In a challenge to this law, Subodh Markandeya argued that all that was required was to strike down the naughty "not '' -- an argument which found favour with the Supreme Court, bringing relief to over 35,000 public servants. 2008 saw the Supreme Court embroiled in several controversies, from serious allegations of corruption at the highest level of the judiciary, expensive private holidays at the taxpayers ' expense, refusal to divulge details of judges ' assets to the public, and secrecy in the appointment of judges, to refusal to make information public under the Right to Information Act. The Chief Justice, K.G. Balakrishnan, invited a lot of criticism for his comments that his post was not that of a public servant, but that of a constitutional authority. He later went back on this stand. The judiciary has come in for serious criticism from former Presidents of India Pratibha Patil and A.P.J. Abdul Kalam for failures in handling its duties. Former Prime Minister Manmohan Singh has stated that corruption is one of the major challenges facing the judiciary, and suggested that there is an urgent need to eradicate this menace. The Cabinet Secretary of the Indian government introduced the Judges Inquiry (Amendment) Bill 2008 in Parliament for the setting up of a panel called the National Judicial Council, headed by the Chief Justice of India, to probe allegations of corruption and misconduct by High Court and Supreme Court judges. According to the Supreme Court newsletter, there were 58,519 cases pending in the Supreme Court at the end of 2011, of which 37,385 had been pending for more than a year. Excluding connected cases, 33,892 cases remained pending. As per the latest pendency data made available by the Supreme Court, the total number of pending cases in the Supreme Court as of 1 April 2014 was 64,330, which included 34,144 admission (miscellaneous) matters and 30,186 regular hearing matters. In May 2014, former Chief Justice of India R.M. Lodha proposed to keep the Indian judiciary working throughout the year (instead of the present system of long vacations, especially in the higher courts) in order to reduce the pendency of cases in Indian courts. Under this proposal there would be no increase in the number of working days or working hours of any judge; it only meant that different judges would go on vacation during different periods of the year, as per their choice. However, the Bar Council of India rejected the proposal, mainly because it would have inconvenienced advocates, who would have had to work throughout the year.
when do olivia and fitz get together season 2
Scandal (season 2) - wikipedia The second season of the American television drama series Scandal, created by Shonda Rhimes, began on September 27, 2012, in the United States on ABC, and consisted of 22 episodes. The season was produced by ABC Studios, in association with ShondaLand Production Company; the showrunner was Shonda Rhimes. The program airs at the same time in Canada through the City television system, with simsubbing of the ABC feed. The season continues the story of Olivia Pope 's crisis management firm, Olivia Pope & Associates, and its staff, as well as staff at the White House in Washington, D.C. Season two had nine series regulars, all returning from the previous season, of whom seven were part of the original cast of eight regulars from the first season. The season aired in the Thursday 10: 00 pm time slot, the same as the previous season. The season has two arcs. The first arc focuses on Fitz 's attempted assassination, in addition to the election rigging, about which more information is revealed through flashbacks. James investigates Defiance and the election rigging for Fitz 's campaign. It is revealed through flashbacks that the rigging was done by Olivia, Cyrus, Mellie, Verna and Hollis at election campaign headquarters. James teams up with David to try to build a case to take down Cyrus and Fitz. However, James ultimately lies in court, covering for Cyrus. An assassination attempt is made on Fitz 's life, which almost kills him. As a result, Sally takes over as President, much to Cyrus ' dismay. After surviving, Fitz decides to get a divorce, which Mellie tries to avoid by somehow convincing her OB / GYN to induce her labour 4 weeks early. Huck is arrested for the attempted assassination after being framed by his girlfriend Becky. After David helps Huck go free, Huck, Olivia and her team trick Becky into showing up at the hospital, where she is arrested. Fitz finds out that Verna was behind the assassination and kills her. At the funeral, he reveals to Olivia that he does n't want a divorce, as he is devastated after learning about the rigging from Verna. The second arc focuses on finding the mole who is leaking classified information from the White House. Olivia and the team investigate the case after figuring out that the CIA Director 's suicide was actually a murder. Olivia gets to know Captain Jake Ballard, who works with the leader of B613, Rowan, who orders Jake to get close to Olivia. At the end of the season, Mellie gives Fitz an ultimatum: either he becomes loyal to her, or she goes on national television and reveals Fitz 's affair with Olivia. Fitz chooses Olivia, which makes Mellie reveal the affair. Fitz announces his re-election campaign. As Olivia and the team continue to investigate who the mole is, Huck manages to capture Charlie, who reveals the mole 's identity: Billy Chambers. They figure out that Billy is working with David, who steals the Cytron card but frames Billy, and gives Cyrus the card in exchange for being reinstated as US Attorney. At the end, Olivia 's name is leaked to the press as Fitz 's mistress, and it is revealed that Rowan is Olivia 's father. The second season had nine roles receiving star billing, all returning from the previous season, seven of which were part of the original cast from the first season. Kerry Washington continued her role as protagonist of the series, Olivia Pope, a former White House Director of Communications with her own crisis management firm.
Columbus Short played Harrison Wright, while Darby Stanchfield played Abby Whelan, who begins a relationship with David Rosen. Katie Lowes played Quinn Perkins, who is on trial for murder at the beginning of the season, and Guillermo Diaz continued playing Huck, the troubled tech guy who works for Olivia. Jeff Perry played Cyrus Beene, the Chief of Staff at the White House. Joshua Malina played David Rosen, the U.S. Attorney who develops a relationship with Abby. Bellamy Young continued playing First Lady Melody "Mellie '' Grant, while Tony Goldwyn portrayed President Fitzgerald "Fitz '' Thomas Grant III. Several casting changes occurred for the second season. Henry Ian Cusick did not return as his character Stephen Finch, as the actor and showrunner Shonda Rhimes came to the mutual decision for him not to come back for the second year. Both Bellamy Young, as First Lady of the United States, and Joshua Malina, as David Rosen, were promoted to series regulars. The second season received overwhelmingly positive reviews, with some calling it a triumph. Ratings grew significantly over the second season, reaching a series high with the season finale, which was up 25 % in total viewers and 39 % in adults 18 - 49 compared to the first season 's finale. On Rotten Tomatoes, the second season has a rating of 89 % based on 9 reviews.
what does p&o stand for in p&o cruises
P&O Cruises - wikipedia P&O Cruises is a British cruise line based at Carnival House in Southampton, England, operated by Carnival UK and owned by Carnival Corporation & plc. Originally a constituent of the Peninsular and Oriental Steam Navigation Company, P&O Cruises is the oldest cruise line in the world, having operated the world 's first commercial passenger ships in the early 19th century. It is the sister company of, and retains strong links with, P&O Cruises Australia. P&O Cruises was de-merged from the P&O group in 2000, becoming a subsidiary of P&O Princess Cruises plc, which subsequently merged with Carnival Corporation in 2003 to form Carnival Corporation & plc. P&O Cruises currently operates seven cruise ships and has a 2.4 % market share of all cruise lines worldwide. Its most recent vessel, the flagship Britannia, joined the fleet in March 2015. P&O Cruises originates from 1822, with the formation of the Peninsular & Oriental Steam Navigation Company, which began life as a partnership between Brodie McGhie Willcox, a London ship broker, and Arthur Anderson, a sailor from the Shetland Isles. The company first operated a shipping line with routes between England and the Iberian Peninsula, adopting the name Peninsular Steam Navigation Company. In 1837, the company won a contract to deliver mail to the Peninsula, with its first mail ship, RMS Don Juan, departing from London on 1 September 1837. The ship collected mail from Falmouth four days later; however, it hit rocks on the homeward - bound leg of the trip. The company 's reputation survived only because all objects, including the mail, were rescued. In 1840, the company acquired a second contract, to deliver mail to Alexandria, Egypt, via Gibraltar and Malta. The company was incorporated by Royal Charter the same year, becoming the Peninsular and Oriental Steam Navigation Company. At the time, the company had no ships available to use on the route, so it agreed to merge with the Liverpool - based Transatlantic Steamship Company, acquiring two ships, the 1,300 - ton Great Liverpool and the newly built 1,600 - ton Oriental. P&O first introduced passenger services in 1844, advertising sea tours to destinations such as Gibraltar, Malta and Athens, sailing from Southampton. The forerunner of modern cruise holidays, these voyages were the first of their kind, and have led to P&O Cruises being recognised as the world 's oldest cruise line. The company later introduced round trips to destinations such as Alexandria and Constantinople, and underwent rapid expansion in the latter half of the 19th century, with its ships becoming larger and more luxurious. Notable ships of the era include the SS Ravenna, built in 1880, which became the first ship to be built with an all - steel superstructure, and the SS Valetta, built in 1889, which was the first ship to use electric lights. In 1904, the company advertised its first cruise on the 6,000 - ton Vectis, a ship specially fitted out for the purpose of carrying 150 first - class passengers. Ten years later, the company merged with the British India Steam Navigation Company, leaving the fleet with a total of 197 ships. In the same year, the company had around two - thirds of its fleet relinquished for war service. However, the company was fortunate and lost only 17 ships in the First World War, with a further 68 lost in subsidiary companies.
A major event in the company 's history took place in December 1918, when P&O purchased 51 % of the Orient Steam Navigation Company, which had been previously operating jointly with P&O on the Australian mail contract. During the 1920s, P&O and Orient Line took delivery of over 20 passenger liners, allowing them to expand their operations once again. Cruises began operating once again in 1925, when Ranchi 's maiden voyage was a cruise to Norway. During 1929, P&O offered 15 cruises, some aboard Viceroy of India, the company 's first turbo - electric ship. The P&O Group left the Second World War with a loss of 156 ships, including popular liners such as Viceroy of India, Cathay, Oronsay and Orcades. By the late 1940s commercial aviation was beginning to take hold of the industry, so newer ships became larger and faster, allowing the sailing time to Australia to be cut from five weeks to four. In 1955 P&O and Orient Lines ordered what were to be their last passenger liners -- the Canberra and Oriana. These fast ships brought the Australian run down by another week to just three, with Oriana recording a top speed of just over 30 knots during trials. During 1961, P&O bought out the remaining stake in Orient Lines and renamed its passenger operations as P&O - Orient Lines. The decreasing popularity of line voyages during the 1960s and 1970s meant that cruising became an important deployment for these ships in - between line voyages. In 1971 the company reorganised its 100 subsidiaries and 239 ships into several operating divisions, one of which was The Passenger Division, which began with 13 ships. The 1962 comedy film, Carry On Cruising, based on the original story by Eric Barker, listed P&O - Orient in its credits. The first Carry On film in colour, it used footage of P&O 's cruise ship S.S. Oronsay as well as mock - up scenes shot at Pinewood studios. The 1970s was a grim time for the passenger liner, as many young liners were sold for scrap. Princess Cruises was acquired in 1974, which allowed the almost - new Spirit of London to be transferred to the Princess fleet. This left Canberra and Oriana to serve the UK market on their own, with Arcadia deployed in Australia and Uganda offering educational cruises. In 1977, P&O re-branded its passenger division, creating P&O Cruises and P&O Cruises Australia. In February 1979 Kungsholm, a former Swedish American Line vessel, was acquired from Flagship Cruises and, after a major refit, was renamed Sea Princess. Operating out of Australia, she replaced Arcadia, which was then sold to Taiwanese ship breakers. In spring 1982 Oriana replaced Sea Princess in Australia, allowing Sea Princess to be transferred to the UK. When Canberra returned from the Falklands War, Sea Princess was switched to the Princess fleet in 1986, leaving just the Canberra for the UK market. With Canberra 's withdrawal becoming ever more imminent, P&O ordered its first new ship for the British market, the 69,153 - ton Oriana, delivered in 1995. Canberra ran alongside her for two years until the older ship was scrapped in 1997. Canberra was replaced by Star Princess, renamed Arcadia. She became the first ship in the P&O fleet to be dedicated to adults only, a role that would be continued by various ships thereafter. In April 2000, Aurora, a half - sister ship to Oriana, entered service for P&O Cruises. In February 2000, it was announced that all cruise ship operations, including P&O Cruises, were to be de-merged from the P&O group, forming a new independent company, P&O Princess Cruises plc.
The company operated the P&O Cruises, Princess Cruises, P&O Cruises Australia, AIDA Cruises and later A'Rosa Cruises and Ocean Village fleets. In 2003, P&O Princess merged with Carnival Corporation, to form Carnival Corporation & plc. In 2003, Arcadia was transferred to the new Ocean Village brand. At the same time, Oceana (ex Ocean Princess) and her sister the Adonia (ex Sea Princess) were transferred to the P&O fleet. Following the merger with Carnival Corporation, Oceana remained with P&O Cruises and Adonia was transferred back to Princess Cruises in 2005, when she was replaced by the new Arcadia, a newbuild originally intended for Holland America Line, and later Cunard Line. Another former Princess ship, Artemis, joined the fleet in 2005, and left in 2011 to be replaced by yet another former Princess ship, the second Adonia. P&O Cruises continued to expand its fleet with the addition of the 116,017 - ton Ventura in 2008, and her near - twin sister Azura in 2010. In January 2014, the line introduced a new fleet - wide livery, based on the Union Flag, to emphasise their British heritage, shortly after which, a new 143,730 - ton flagship, Britannia, joined the fleet in 2015. In September 2016, it was announced that P&O Cruises would build a new 180,000 - ton ship in 2020, and in January 2018, it was announced that a twin sister would follow in 2022. These ships would be the UK 's first to be powered by liquefied natural gas (LNG), shipping 's most advanced fuel technology, with the intention of reducing air emissions. In May 2018, it was announced that the first of the two ships would be called Iona. Adonia left the fleet in March 2018, and in June 2018, it was announced that Oriana would follow in August 2019. Oriana holds the Golden Cockerel trophy for the fastest ship in the P&O Cruises fleet. Previously held by Oriana of 1960, it passed to Canberra upon the retirement of Oriana in 1986. On the final cruise of Canberra in 1997, the Golden Cockerel was handed over to the new Oriana, when both ships were anchored off Cannes.
when was the last total solar eclipse seen from texas
List of solar eclipses visible from the United States - Wikipedia This is an incomplete list of solar eclipses visible from the United States between 1901 and 2050. All eclipses whose path of totality or annularity passes through the land territory of the current fifty U.S. states are included. For lists of eclipses worldwide, see the list of 20th - century solar eclipses and 21st - century solar eclipses.
where are electrons located in a covalent bond
Covalent bond - Wikipedia A covalent bond, also called a molecular bond, is a chemical bond that involves the sharing of electron pairs between atoms. These electron pairs are known as shared pairs or bonding pairs, and the stable balance of attractive and repulsive forces between atoms, when they share electrons, is known as covalent bonding. For many molecules, the sharing of electrons allows each atom to attain the equivalent of a full outer shell, corresponding to a stable electronic configuration. Covalent bonding includes many kinds of interactions, including σ - bonding, π - bonding, metal - to - metal bonding, agostic interactions, bent bonds, and three - center two - electron bonds. The term covalent bond dates from 1939. The prefix co - means jointly, associated in action, partnered to a lesser degree, etc.; thus a "co-valent bond '', in essence, means that the atoms share "valence '', such as is discussed in valence bond theory. In the molecule H₂, the hydrogen atoms share the two electrons via covalent bonding. Covalency is greatest between atoms of similar electronegativities. Thus, covalent bonding does not necessarily require that the two atoms be of the same elements, only that they be of comparable electronegativity. Covalent bonding that entails sharing of electrons over more than two atoms is said to be delocalized. The term covalence in regard to bonding was first used in 1919 by Irving Langmuir in a Journal of the American Chemical Society article entitled "The Arrangement of Electrons in Atoms and Molecules ''. Langmuir wrote that "we shall denote by the term covalence the number of pairs of electrons that a given atom shares with its neighbors. '' The idea of covalent bonding can be traced several years before 1919 to Gilbert N. Lewis, who in 1916 described the sharing of electron pairs between atoms. He introduced the Lewis notation or electron dot notation or Lewis dot structure, in which valence electrons (those in the outer shell) are represented as dots around the atomic symbols. Pairs of electrons located between atoms represent covalent bonds. Multiple pairs represent multiple bonds, such as double bonds and triple bonds. An alternative form of representation, not shown here, has bond - forming electron pairs represented as solid lines. Lewis proposed that an atom forms enough covalent bonds to form a full (or closed) outer electron shell. In a Lewis diagram of methane, the carbon atom has a valence of four and is, therefore, surrounded by eight electrons (the octet rule), four from the carbon itself and four from the hydrogens bonded to it. Each hydrogen has a valence of one and is surrounded by two electrons (a duet rule) -- its own one electron plus one from the carbon. The numbers of electrons correspond to full shells in the quantum theory of the atom; the outer shell of a carbon atom is the n = 2 shell, which can hold eight electrons, whereas the outer (and only) shell of a hydrogen atom is the n = 1 shell, which can hold only two. While the idea of shared electron pairs provides an effective qualitative picture of covalent bonding, quantum mechanics is needed to understand the nature of these bonds and predict the structures and properties of simple molecules. Walter Heitler and Fritz London are credited with the first successful quantum mechanical explanation of a chemical bond (molecular hydrogen) in 1927.
Their work was based on the valence bond model, which assumes that a chemical bond is formed when there is good overlap between the atomic orbitals of participating atoms. Atomic orbitals (except for s orbitals) have specific directional properties leading to different types of covalent bonds. Sigma (σ) bonds are the strongest covalent bonds and are due to head - on overlapping of orbitals on two different atoms. A single bond is usually a σ bond. Pi (π) bonds are weaker and are due to lateral overlap between p (or d) orbitals. A double bond between two given atoms consists of one σ and one π bond, and a triple bond is one σ and two π bonds. Covalent bonds are also affected by the electronegativity of the connected atoms, which determines the chemical polarity of the bond. Two atoms with equal electronegativity will make nonpolar covalent bonds such as H -- H. An unequal relationship creates a polar covalent bond such as with H − Cl. However, polarity also requires geometric asymmetry, or else dipoles may cancel out, resulting in a non-polar molecule. There are several types of structures for covalent substances, including individual molecules, molecular structures, macromolecular structures and giant covalent structures. Individual molecules have strong bonds that hold the atoms together, but there are negligible forces of attraction between molecules. Such covalent substances are usually gases, for example, HCl, SO₂, CO₂, and CH₄. In molecular structures, there are weak forces of attraction. Such covalent substances are low - boiling - temperature liquids (such as ethanol), and low - melting - temperature solids (such as iodine and solid CO₂). Macromolecular structures have large numbers of atoms linked by covalent bonds in chains, including synthetic polymers such as polyethylene and nylon, and biopolymers such as proteins and starch. Network covalent structures (or giant covalent structures) contain large numbers of atoms linked in sheets (such as graphite), or 3 - dimensional structures (such as diamond and quartz). These substances have high melting and boiling points, are frequently brittle, and tend to have high electrical resistivity. Elements that have high electronegativity, and the ability to form three or four electron pair bonds, often form such large macromolecular structures. Bonds with one or three electrons can be found in radical species, which have an odd number of electrons. The simplest example of a 1 - electron bond is found in the dihydrogen cation, H₂⁺. One - electron bonds often have about half the bond energy of a 2 - electron bond, and are therefore called "half bonds ''. However, there are exceptions: in the case of dilithium, the bond is actually stronger for the 1 - electron Li₂⁺ than for the 2 - electron Li₂. This exception can be explained in terms of hybridization and inner - shell effects. The simplest example of three - electron bonding can be found in the helium dimer cation, He₂⁺. It is considered a "half bond '' because it consists of only one shared electron (rather than two); in molecular orbital terms, the third electron is in an anti-bonding orbital which cancels out half of the bond formed by the other two electrons. Another example of a molecule containing a 3 - electron bond, in addition to two 2 - electron bonds, is nitric oxide, NO. The oxygen molecule, O₂, can also be regarded as having two 3 - electron bonds and one 2 - electron bond, which accounts for its paramagnetism and its formal bond order of 2.
Chlorine dioxide and its heavier analogues bromine dioxide and iodine dioxide also contain three - electron bonds. Molecules with odd - electron bonds are usually highly reactive. These types of bond are only stable between atoms with similar electronegativities. There are situations whereby a single Lewis structure is insufficient to explain the electron configuration in a molecule, hence a superposition of structures is needed. The same two atoms in such molecules can be bonded differently in different structures (a single bond in one, a double bond in another, or even none at all), resulting in a non-integer bond order. The nitrate ion is one such example with three equivalent structures. The bond between the nitrogen and each oxygen is a double bond in one structure and a single bond in the other two, so that the average bond order for each N -- O interaction is (2 + 1 + 1) / 3 = 4 / 3. In organic chemistry, when a molecule with a planar ring obeys Hückel 's rule, where the number of π electrons fits the formula 4n + 2 (where n is an integer), it attains extra stability and symmetry. In benzene, the prototypical aromatic compound, there are 6 π bonding electrons (n = 1, 4n + 2 = 6). These occupy three delocalized π molecular orbitals (molecular orbital theory) or form conjugate π bonds in two resonance structures that linearly combine (valence bond theory), creating a regular hexagon exhibiting a greater stabilization than the hypothetical 1, 3, 5 - cyclohexatriene. In the case of heterocyclic aromatics and substituted benzenes, the electronegativity differences between different parts of the ring may dominate the chemical behaviour of aromatic ring bonds, which otherwise are equivalent. Certain molecules such as xenon difluoride and sulfur hexafluoride have higher co-ordination numbers than would be possible due to strictly covalent bonding according to the octet rule. This is explained by the three - center four - electron bond ("3c -- 4e '') model which interprets the molecular wavefunction in terms of non-bonding highest occupied molecular orbitals in molecular orbital theory and ionic - covalent resonance in valence bond theory. In three - center two - electron bonds ("3c -- 2e '') three atoms share two electrons in bonding. This type of bonding occurs in electron deficient compounds like diborane. Each such bond (2 per molecule in diborane) contains a pair of electrons which connect the boron atoms to each other in a banana shape, with a proton (nucleus of a hydrogen atom) in the middle of the bond, sharing electrons with both boron atoms. In certain cluster compounds, so - called four - center two - electron bonds also have been postulated. After the development of quantum mechanics, two basic theories were proposed to provide a quantum description of chemical bonding: valence bond (VB) theory and molecular orbital (MO) theory. A more recent quantum description is given in terms of atomic contributions to the electronic density of states. In COOP, COHP and BCOOP, evaluation of bond covalency is dependent on the basis set. To overcome this issue, an alternative formulation of the bond covalency can be provided in this way. The center of mass cm(n, l, m_l, m_s) of an atomic orbital |n, l, m_l, m_s⟩, with quantum numbers n, l, m_l, m_s, for atom A is defined as

$$ cm^{A}(n, l, m_l, m_s; E_0, E_1) = \frac{\int_{E_0}^{E_1} E \, g^{A}_{|n, l, m_l, m_s\rangle}(E) \, dE}{\int_{E_0}^{E_1} g^{A}_{|n, l, m_l, m_s\rangle}(E) \, dE} $$

where g^A_{|n, l, m_l, m_s⟩}(E) is the contribution of the atomic orbital |n, l, m_l, m_s⟩ of the atom A to the total electronic density of states g(E) of the solid

$$ g(E) = \sum_{A} \sum_{n, l} \sum_{m_l, m_s} g^{A}_{|n, l, m_l, m_s\rangle}(E) $$

where the outer sum runs over all atoms A of the unit cell.
The energy window (E₀, E₁) is chosen in such a way that it encompasses all relevant bands participating in the bond. If the range to select is unclear, it can be identified in practice by examining the molecular orbitals that describe the electron density along the considered bond. The relative position of the center of mass of the |n_A, l_A⟩ levels of atom A with respect to the center of mass of the |n_B, l_B⟩ levels of atom B is given as

$$ C_{n_A l_A, n_B l_B} = - \left| cm^{A}(n_A, l_A; E_0, E_1) - cm^{B}(n_B, l_B; E_0, E_1) \right| $$

where the contributions of the magnetic and spin quantum numbers are summed. According to this definition, the relative position of the A levels with respect to the B levels is

$$ C_{l_A, l_B} = - \left| cm^{A}(l_A; E_0, E_1) - cm^{B}(l_B; E_0, E_1) \right| $$

where, for simplicity, we may omit the dependence on the principal quantum number n in the notation referring to C_{l_A, l_B}. In this formalism, the greater the value of C, the higher the overlap of the selected atomic bands, and thus the electron density described by those orbitals gives a more covalent A -- B bond. The quantity C is denoted as the covalency of the A -- B bond, which is specified in the same units as the energy E.
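To make the definition above concrete, here is a minimal Python sketch of how C could be estimated from tabulated, orbital-projected densities of states. It assumes a uniform energy grid and uses synthetic Gaussian DOS curves purely for illustration; the function names and the data are hypothetical additions, not part of the article above.

```python
# Minimal sketch: estimating the covalency measure C from discretized,
# orbital-projected densities of states (synthetic data; illustration only).
import numpy as np

def center_of_mass(E, g, E0, E1):
    """Energy 'center of mass' of one projected DOS g(E) over the window [E0, E1]."""
    mask = (E >= E0) & (E <= E1)
    dE = E[1] - E[0]                      # uniform energy grid assumed
    num = np.sum(E[mask] * g[mask]) * dE  # approximates the integral of E * g(E) dE
    den = np.sum(g[mask]) * dE            # approximates the integral of g(E) dE
    return num / den

def covalency(E, gA, gB, E0, E1):
    """C = -|cm_A - cm_B|: band centers closer in energy -> larger C -> more covalent."""
    return -abs(center_of_mass(E, gA, E0, E1) - center_of_mass(E, gB, E0, E1))

# Toy projected DOS for atoms A and B, modeled as Gaussians.
E = np.linspace(-10.0, 5.0, 3001)
gA = np.exp(-((E + 4.0) ** 2))  # A levels centered near -4 (arbitrary energy units)
gB = np.exp(-((E + 3.0) ** 2))  # B levels centered near -3
print(covalency(E, gA, gB, E0=-8.0, E1=0.0))  # approximately -1.0
```

As the two band centers approach each other, the printed value rises toward zero, matching the statement above that a larger C corresponds to a more covalent A -- B bond.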
name the type of blood vessels which carry blood from organs to the heart
Blood vessel - Wikipedia The blood vessels are the part of the circulatory system, and microcirculation, that transports blood throughout the human body. There are three major types of blood vessels: the arteries, which carry the blood away from the heart; the capillaries, which enable the actual exchange of water and chemicals between the blood and the tissues; and the veins, which carry blood from the capillaries back toward the heart. The word vascular, meaning relating to the blood vessels, is derived from the Latin vas, meaning vessel. A few structures (such as cartilage and the lens of the eye) do not contain blood vessels and are labeled avascular. The arteries and veins have three layers, but the middle layer is thicker in the arteries than it is in the veins. Capillaries consist of little more than a layer of endothelium and occasional connective tissue. When blood vessels connect to form a region of diffuse vascular supply, it is called an anastomosis (pl. anastomoses). Anastomoses provide critical alternative routes for blood to flow in case of blockages. There is a layer of muscle surrounding the arteries and the veins which helps the vessels contract and expand. This creates enough pressure for blood to be pumped around the body. Blood vessels are part of the circulatory system, together with the heart and the blood. The various kinds of blood vessels are roughly grouped as arterial and venous, determined by whether the blood in them is flowing away from (arterial) or toward (venous) the heart. The term "arterial blood '' is nevertheless used to indicate blood high in oxygen, although the pulmonary artery carries "venous blood '' and blood flowing in the pulmonary vein is rich in oxygen. This is because they are carrying the blood to and from the lungs, respectively, to be oxygenated. Blood vessels do not actively engage in the transport of blood (they have no appreciable peristalsis), but arteries -- and veins to a degree -- can regulate their inner diameter by contraction of the muscular layer. This changes the blood flow to downstream organs, and is determined by the autonomic nervous system. Vasodilation and vasoconstriction are also used antagonistically as methods of thermoregulation. Oxygen (bound to hemoglobin in red blood cells) is the most critical nutrient carried by the blood. In all arteries apart from the pulmonary artery, hemoglobin is highly saturated (95 -- 100 %) with oxygen. In all veins apart from the pulmonary vein, the hemoglobin is desaturated at about 75 %. (The values are reversed in the pulmonary circulation.) The blood pressure in blood vessels is traditionally expressed in millimetres of mercury (1 mmHg = 133 Pa). In the arterial system, this is usually around 120 mmHg systolic (high pressure wave due to contraction of the heart) and 80 mmHg diastolic (low pressure wave). In contrast, pressures in the venous system are constant and rarely exceed 10 mmHg. Vasoconstriction is the constriction of blood vessels (narrowing, becoming smaller in cross-sectional area) by contracting the vascular smooth muscle in the vessel walls. It is regulated by vasoconstrictors (agents that cause vasoconstriction). These include paracrine factors (e.g. prostaglandins), a number of hormones (e.g. vasopressin and angiotensin) and neurotransmitters (e.g. epinephrine) from the nervous system. Vasodilation is a similar process mediated by antagonistically acting mediators.
The most prominent vasodilator is nitric oxide (termed endothelium - derived relaxing factor for this reason). Permeability of the endothelium is pivotal in the release of nutrients to the tissue. It is also increased in inflammation in response to histamine, prostaglandins and interleukins, which leads to most of the symptoms of inflammation (swelling, redness, warmth and pain). Vascular resistance occurs where the vessels away from the heart oppose the flow of blood. Resistance is an accumulation of three different factors: blood viscosity, blood vessel length, and vessel radius. Blood viscosity is the thickness of the blood and its resistance to flow as a result of the different components of the blood. Blood is 92 % water by weight, and the rest of blood is composed of protein, nutrients, electrolytes, wastes, and dissolved gases. Depending on the health of an individual, the blood viscosity can vary (i.e. anemia causing relatively lower concentrations of protein, high blood pressure causing an increase in dissolved salts or lipids, etc.). Vessel length is the total length of the vessel measured as the distance away from the heart. As the total length of the vessel increases, the total resistance as a result of friction will increase. Vessel radius also affects the total resistance as a result of contact with the vessel wall. As the radius of the wall gets smaller, the proportion of the blood making contact with the wall will increase. The greater amount of contact with the wall will increase the total resistance against the blood flow. Blood vessels play a huge role in virtually every medical condition. Cancer, for example, cannot progress unless the tumor causes angiogenesis (formation of new blood vessels) to supply the malignant cells ' metabolic demand. Atherosclerosis, the formation of lipid lumps (atheromas) in the blood vessel wall, is the most common cardiovascular disease, the main cause of death in the Western world. Blood vessel permeability is increased in inflammation. Damage, whether from trauma or occurring spontaneously, may lead to hemorrhage due to mechanical damage to the vessel endothelium. In contrast, occlusion of the blood vessel by atherosclerotic plaque, by an embolised blood clot or a foreign body leads to downstream ischemia (insufficient blood supply) and possibly necrosis. Vessel occlusion tends to be a positive feedback system; an occluded vessel creates eddies in the normally laminar flow or plug flow blood currents. These eddies create abnormal fluid velocity gradients which push blood elements such as cholesterol or chylomicron bodies to the endothelium. These deposit onto the arterial walls which are already partially occluded and build upon the blockage. Vasculitis is inflammation of the vessel wall, due to autoimmune disease or infection.
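The dependence of resistance on viscosity, length, and radius described above is conventionally summarized by the Hagen -- Poiseuille relation. The equation below is a standard fluid - dynamics result for laminar flow in a rigid tube, added here as an illustration rather than taken from the article:

$$ R = \frac{8 \, \eta L}{\pi r^{4}} $$

where η is the fluid (blood) viscosity, L is the vessel length, and r is the vessel radius. Because resistance varies with the inverse fourth power of the radius, halving a vessel 's radius raises its resistance roughly sixteenfold, which is why modest vasoconstriction or vasodilation produces large changes in blood flow.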
when is the last time florida had a category 4 hurricane
List of Florida hurricanes - Wikipedia The List of Florida hurricanes encompasses approximately 500 tropical or subtropical cyclones that affected the state of Florida. More storms hit Florida than any other U.S. state, and since 1851 only eighteen hurricane seasons have passed without a known storm impacting the state. Collectively, cyclones that hit the region have resulted in over 10,000 deaths, most of which occurred prior to the start of Hurricane Hunters flights in 1943. Additionally, the cumulative impact from the storms totaled over $141 billion in damage (2017 USD), primarily from Hurricane Andrew and hurricanes in the 2004 and 2005 seasons. Tropical cyclones have affected Florida in every month of the year with the exceptions of January and March. Nearly one - third of the cyclones affected the state in September, and nearly three - fourths of the storms affected the state between August and October, which coincides with the peak of the hurricane season. Portions of the coastline have the lowest return period, or the frequency at which a certain intensity or category of hurricane can be expected within 86 mi (139 km) of a given location, in the country. Monroe County has been struck by 26 hurricanes since 1926, which is the greatest total for any county in the United States. In a Monthly Weather Review paper published in 1934, the U.S. Weather Bureau recognized Key West and Pensacola as the most hurricane - prone cities in the state; Key West experiences both storms developing from the western Atlantic Ocean and the Caribbean Sea, while Pensacola has received hurricanes crossing the state as well as storms recurving in the northern Gulf of Mexico. The earliest storm in any calendar year to affect the state was the 1952 Groundhog Day Tropical Storm, and the latest in the year was a hurricane making landfall on December 1, 1925. The strongest tropical cyclone to make landfall on the state was the Labor Day Hurricane of 1935, which crossed the Florida Keys with a pressure of 892 mbar (hPa; 26.35 inHg); it is also the strongest hurricane on record to strike the United States. Out of the ten most intense landfalling United States hurricanes, four struck Florida at peak strength. The first recorded tropical cyclone to affect the area that is now the state of Florida occurred in 1523, when two ships and their crews were lost along the western coastline. A total of 159 hurricanes are known to have affected the state prior to 1900, which collectively resulted in at least 6,504 fatalities and monetary damage of over $102 million (2017 USD). Additionally, at least 109 boats or ships were either driven ashore, wrecked, or damaged due to the storms. A strong hurricane struck northwest Florida on May 28, 1863, which is the earliest landfall in the calendar year known in the US. Information is sparse for earlier years due to limitations in tropical cyclone observation, though as coastlines became more populated, more data became available. The National Hurricane Center recognizes the uncertainty in both the death tolls and the dates of the events. In the period between 1900 and 1949, 108 tropical cyclones affected the state, which collectively resulted in about $4.5 billion (2017 USD) in damage. Additionally, tropical cyclones in Florida were directly responsible for about 3,500 fatalities during the period, most of them from the 1928 Okeechobee Hurricane. The 1947 season was the year with the most tropical cyclones affecting the state, with a total of 6 systems.
The 1905, 1908, 1913, 1927, 1931, 1942, and 1943 seasons were the only years during the period in which a storm did not affect the state. The strongest hurricane to hit the state during the period was the Labor Day Hurricane of 1935, which is the strongest hurricane on record to strike the United States. Several other major hurricanes struck the state during the period, including the 1926 Miami Hurricane, the 1928 Okeechobee Hurricane, and cyclones in 1945 and 1949, each of which made landfall as a Category 4 hurricane. In the period between 1950 and 1974, 85 tropical or subtropical cyclones impacted the state, which collectively resulted in about $7 billion (2017 USD) in damage, primarily from Hurricanes Donna and Dora. Additionally, the storms were directly responsible for 93 fatalities and indirectly for 23 more deaths. Several tropical cyclones produced over 20 inches (500 mm) of rainfall in the state, with Hurricane Easy producing the highest total during the period. The 1969 season was the year with the most tropical cyclones affecting the state, with a total of 8 systems. The 1954 and 1967 seasons were the only years during the period in which a storm did not affect the state. The strongest hurricane to hit the state during the period was Hurricane Donna, which was the 8th strongest hurricane on record to strike the United States. Additionally, Hurricanes Easy, King, Cleo, Isbell, and Betsy hit the state as major hurricanes. In the period between 1975 and 1999, 83 tropical or subtropical cyclones affected the state, which collectively resulted in $51.1 billion (2017 USD) in damage, primarily from Hurricane Andrew, and 54 direct casualties. The 1985 season was the year with the most tropical cyclones affecting the state, with a total of 8 systems. Every year included at least 1 tropical cyclone affecting the state. The strongest hurricane to hit the state during the period was Hurricane Andrew, which was one of only three Category 5 hurricanes to strike the United States. Andrew, at the time, was the costliest tropical cyclone in United States history and remains the second - costliest. Additionally, Hurricanes Eloise, David, and Opal hit the state as major hurricanes. The period from 2000 to the present was marked by several devastating North Atlantic hurricanes; as of 2017, 79 tropical or subtropical cyclones have affected the U.S. state of Florida. Collectively, cyclones in Florida over that period resulted in over $73 billion in damage (2017 USD). Additionally, tropical cyclones in Florida were responsible for 147 direct fatalities and at least 92 indirect ones during the period. Eight cyclones affected the state in both 2003 and 2005, which were the years with the most tropical cyclones impacting the state. Every year included at least one tropical cyclone affecting the state. The strongest hurricane to hit the state during the period was Hurricane Charley, which was the strongest hurricane to strike the United States since Hurricane Andrew. Additionally, hurricanes Ivan, Jeanne, Dennis, Wilma, and Irma made landfall on the state as major hurricanes. The following major hurricanes either made landfall on the state as a major hurricane or brought winds of Category 3 status to the state. For storms that made landfall twice or more, the maximum sustained wind speed, and hence the highest Saffir - Simpson category, at the strongest landfall is listed. Only the landfalls at major hurricane intensity are listed.
A * indicates that the storm made landfall outside Florida, but brought winds of major hurricane intensity to part of the state. Storms are listed since 1851, which is the official start of the Atlantic hurricane database. Hurricanes were originally classified by central pressure in the 20th century; modern practice instead quantifies storm intensity by maximum sustained winds. United States hurricanes from 1921 to 1979 are still classified by central pressure; for those storms, therefore, the maximum sustained winds in the Atlantic hurricane database (HURDAT) are used, since this period has not yet been reanalyzed by the Atlantic hurricane reanalysis project. The following is a list of hurricanes with 100 or more deaths in the state. Lazaro Gamio of the Washington Post created a series of maps depicting the paths of all hurricanes to impact Florida from 1916 to 2015.
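Since the list above distinguishes storms by Saffir - Simpson category at landfall, a small Python sketch of the modern wind - based classification may help. The thresholds below are the standard post - 2012 operational values for the Saffir - Simpson hurricane wind scale; they are an illustrative addition, not taken from the article itself.

```python
# Minimal sketch: classifying a storm on the modern Saffir - Simpson
# hurricane wind scale (thresholds in mph, post - 2012 values).
def saffir_simpson_category(max_sustained_wind_mph: float) -> int:
    """Return 0 for below hurricane strength, otherwise category 1 - 5."""
    thresholds = [(157, 5), (130, 4), (111, 3), (96, 2), (74, 1)]
    for lower_bound, category in thresholds:
        if max_sustained_wind_mph >= lower_bound:
            return category
    return 0

# Hurricane Charley made landfall with roughly 150 mph winds -> Category 4.
print(saffir_simpson_category(150))
```

A "major hurricane '' in the sense used above is any storm reaching Category 3 or higher on this scale.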
when was the last time mount saint helens erupted
Mount St. Helens - Wikipedia Mount St. Helens or Louwala - Clough (known as Lawetlat'la to the indigenous Cowlitz people, and Loowit to the Klickitat) is an active stratovolcano located in Skamania County, Washington, in the Pacific Northwest region of the United States. It is 50 miles (80 km) northeast of Portland, Oregon and 96 miles (154 km) south of Seattle, Washington. Mount St. Helens takes its English name from the British diplomat Lord St Helens, a friend of explorer George Vancouver who made a survey of the area in the late 18th century. The volcano is located in the Cascade Range and is part of the Cascade Volcanic Arc, a segment of the Pacific Ring of Fire that includes over 160 active volcanoes. This volcano is well known for its ash explosions and pyroclastic flows. Mount St. Helens is most notorious for its major 1980 eruption, the deadliest and most economically destructive volcanic event in the history of the United States. Fifty - seven people were killed; 250 homes, 47 bridges, 15 miles (24 km) of railways, and 185 miles (298 km) of highway were destroyed. A massive debris avalanche triggered by an earthquake measuring 5.1 on the Richter scale caused an eruption that reduced the elevation of the mountain 's summit from 9,677 ft (2,950 m) to 8,363 ft (2,549 m), leaving a 1 mile (1.6 km) wide horseshoe - shaped crater. The debris avalanche was up to 0.7 cubic miles (2.9 km³) in volume. The Mount St. Helens National Volcanic Monument was created to preserve the volcano and allow for its aftermath to be scientifically studied. As with most other volcanoes in the Cascade Range, Mount St. Helens is a large eruptive cone consisting of lava rock interlayered with ash, pumice, and other deposits. The mountain includes layers of basalt and andesite through which several domes of dacite lava have erupted. The largest of the dacite domes formed the previous summit, and off its northern flank sat the smaller Goat Rocks dome. Both were destroyed in the 1980 eruption. Mount St. Helens is 34 miles (55 km) west of Mount Adams, in the western part of the Cascade Range. These "sister and brother '' volcanic mountains are approximately 50 miles (80 km) from Mount Rainier, the highest of Cascade volcanoes. Mount Hood, the nearest major volcanic peak in Oregon, is 60 miles (100 km) southeast of Mount St. Helens. Mount St. Helens is geologically young compared with the other major Cascade volcanoes. It formed only within the past 40,000 years, and the pre-1980 summit cone began rising about 2,200 years ago. The volcano is considered the most active in the Cascades within the Holocene epoch (the last 10,000 or so years). Prior to the 1980 eruption, Mount St. Helens was the fifth - highest peak in Washington. It stood out prominently from surrounding hills because of the symmetry and extensive snow and ice cover of the pre-1980 summit cone, earning it the nickname "Fuji - san of America ''. The peak rose more than 5,000 feet (1,500 m) above its base, where the lower flanks merge with adjacent ridges. The mountain is 6 miles (9.7 km) across at its base, which is at an elevation of 4,400 feet (1,300 m) on the northeastern side and 4,000 feet (1,200 m) elsewhere. At the pre-eruption tree line, the width of the cone was 4 miles (6.4 km). Streams that originate on the volcano enter three main river systems: the Toutle River on the north and northwest, the Kalama River on the west, and the Lewis River on the south and east. The streams are fed by abundant rain and snow.
The average annual rainfall is 140 inches (3,600 mm), and the snow pack on the mountain 's upper slopes can reach 16 feet (4.9 m). The Lewis River is impounded by three dams for hydroelectric power generation. The southern and eastern sides of the volcano drain into an upstream impoundment, the Swift Reservoir, which is directly south of the volcano 's peak. Although Mount St. Helens is in Skamania County, Washington, access routes to the mountain run through Cowlitz County to the west. State Route 504, locally known as the Spirit Lake Memorial Highway, connects with Interstate 5 at Exit 49, 34 miles (55 km) to the west of the mountain. That north -- south highway skirts the low - lying cities of Castle Rock, Longview and Kelso along the Cowlitz River, and passes through the Vancouver, Washington -- Portland, Oregon metropolitan area less than 50 miles (80 km) to the southwest. The community nearest the volcano is Cougar, Washington, in the Lewis River valley 11 miles (18 km) south - southwest of the peak. Gifford Pinchot National Forest surrounds Mount St. Helens. During the winter of 1980 -- 1981, a new glacier appeared. Now officially named Crater Glacier, it was formerly known as the Tulutson Glacier. Shadowed by the crater walls and fed by heavy snowfall and repeated snow avalanches, it grew rapidly (14 feet (4.3 m) per year in thickness). By 2004, it covered about 0.36 square miles (0.93 km²), and was divided by the dome into a western and eastern lobe. Typically, by late summer, the glacier looks dark from rockfall from the crater walls and ash from eruptions. As of 2006, the ice had an average thickness of 300 feet (100 m) and a maximum of 650 feet (200 m), nearly as deep as the much older and larger Carbon Glacier of Mount Rainier. The ice is all post -- 1980, making the glacier very young geologically. However, the volume of the new glacier is about the same as all the pre -- 1980 glaciers combined. With the recent volcanic activity starting in 2004, the glacier lobes were pushed aside and upward by the growth of new volcanic domes. The surface of the glacier, once mostly without crevasses, turned into a chaotic jumble of icefalls heavily criss - crossed with crevasses and seracs caused by movement of the crater floor. The new domes have almost separated the Crater Glacier into an eastern and western lobe. Despite the volcanic activity, the termini of the glacier have still advanced, with a slight advance on the western lobe and a more considerable advance on the more shaded eastern lobe. Due to the advance, two lobes of the glacier joined together in late May 2008 and thus the glacier completely surrounds the lava domes. In addition, since 2004, new glaciers have formed on the crater wall above Crater Glacier feeding rock and ice onto its surface below; there are two rock glaciers to the north of the eastern lobe of Crater Glacier. Crater Glacier is the only known advancing glacier in the contiguous United States. The early eruptive stages of Mount St. Helens are known as the "Ape Canyon Stage '' (around 40,000 -- 35,000 years ago), the "Cougar Stage '' (ca. 20,000 -- 18,000 years ago), and the "Swift Creek Stage '' (roughly 13,000 -- 8,000 years ago). The modern period, since about 2500 BCE, is called the "Spirit Lake Stage ''. Collectively, the pre -- Spirit Lake stages are known as the "ancestral stages ''.
The ancestral and modern stages differ primarily in the composition of the erupted lavas; ancestral lavas consisted of a characteristic mixture of dacite and andesite, while modern lava is very diverse (ranging from olivine basalt to andesite and dacite). St. Helens started its growth in the Pleistocene 37,600 years ago, during the Ape Canyon stage, with dacite and andesite eruptions of hot pumice and ash. Thirty - six thousand years ago a large mudflow cascaded down the volcano; mudflows were significant forces in all of St. Helens ' eruptive cycles. The Ape Canyon eruptive period ended around 35,000 years ago and was followed by 17,000 years of relative quiet. Parts of this ancestral cone were fragmented and transported by glaciers 14,000 to 18,000 years ago during the last glacial period of the current ice age. The second eruptive period, the Cougar Stage, started 20,000 years ago and lasted for 2,000 years. Pyroclastic flows of hot pumice and ash along with dome growth occurred during this period. Another 5,000 years of dormancy followed, only to be upset by the beginning of the Swift Creek eruptive period, typified by pyroclastic flows, dome growth and blanketing of the countryside with tephra. Swift Creek ended 8,000 years ago. A dormancy of about 4,000 years was broken around 2500 BCE with the start of the Smith Creek eruptive period, when eruptions of large amounts of ash and yellowish - brown pumice covered thousands of square miles. An eruption in 1900 BCE was the largest known eruption from St. Helens during the Holocene epoch, judged by the volume of one of the tephra layers from that period. This eruptive period lasted until about 1600 BCE and left deposits of material 18 inches (46 cm) deep as far as 50 miles (80 km) distant in what is now Mt. Rainier National Park. Trace deposits have been found as far northeast as Banff National Park in Alberta, and as far southeast as eastern Oregon. All told there may have been up to 2.5 cubic miles (10 km³) of material ejected in this cycle. Some 400 years of dormancy followed. St. Helens came alive again around 1200 BCE -- the Pine Creek eruptive period. This lasted until about 800 BCE and was characterized by smaller - volume eruptions. Numerous dense, nearly red hot pyroclastic flows sped down St. Helens ' flanks and came to rest in nearby valleys. A large mudflow partly filled 40 miles (64 km) of the Lewis River valley sometime between 1000 BCE and 500 BCE. The next eruptive period, the Castle Creek period, began about 400 BCE, and is characterized by a change in composition of St. Helens ' lava, with the addition of olivine and basalt. The pre-1980 summit cone started to form during the Castle Creek period. Significant lava flows in addition to the previously much more common fragmented and pulverized lavas and rocks (tephra) distinguished this period. Large lava flows of andesite and basalt covered parts of the mountain, including one around the year 100 BCE that traveled all the way into the Lewis and Kalama river valleys. Others, such as Cave Basalt (known for its system of lava tubes), flowed up to 9 miles (14 km) from their vents. During the first century, mudflows moved 30 miles (50 km) down the Toutle and Kalama river valleys and may have reached the Columbia River. Another 400 years of dormancy ensued. The Sugar Bowl eruptive period was short and markedly different from other periods in Mount St. Helens history. It produced the only unequivocal laterally directed blast known from Mount St. Helens before the 1980 eruptions.
During Sugar Bowl time, the volcano first erupted quietly to produce a dome, then erupted violently at least twice producing a small volume of tephra, directed - blast deposits, pyroclastic flows, and lahars. Roughly 700 years of dormancy were broken in about 1480, when large amounts of pale gray dacite pumice and ash started to erupt, beginning the Kalama period. The eruption in 1480 was several times larger than the May 18, 1980, eruption. In 1482, another large eruption rivaling the 1980 eruption in volume is known to have occurred. Ash and pumice piled 6 miles (9.7 km) northeast of the volcano to a thickness of 3 feet (0.9 m); 50 miles (80 km) away, the ash was 2 inches (5 cm) deep. Large pyroclastic flows and mudflows subsequently rushed down St. Helens ' west flanks and into the Kalama River drainage system. This 150 - year period next saw the eruption of less silica - rich lava in the form of andesitic ash that formed at least eight alternating light - and dark - colored layers. Blocky andesite lava then flowed from St. Helens ' summit crater down the volcano 's southeast flank. Later, pyroclastic flows raced down over the andesite lava and into the Kalama River valley. It ended with the emplacement of a dacite dome several hundred feet (~ 200 m) high at the volcano 's summit, which filled and overtopped an explosion crater already at the summit. Large parts of the dome 's sides broke away and mantled parts of the volcano 's cone with talus. Lateral explosions excavated a notch in the southeast crater wall. St. Helens reached its greatest height and achieved its highly symmetrical form by the time the Kalama eruptive cycle ended, about 1647. The volcano remained quiet for the next 150 years. The 57 - year eruptive period that started in 1800 was named after the Goat Rocks dome, and is the first time that both oral and written records exist. Like the Kalama period, the Goat Rocks period started with an explosion of dacite tephra, followed by an andesite lava flow, and culminated with the emplacement of a dacite dome. The 1800 eruption probably rivaled the 1980 eruption in size, although it did not result in massive destruction of the cone. The ash drifted northeast over central and eastern Washington, northern Idaho, and western Montana. There were at least a dozen reported small eruptions of ash from 1831 to 1857, including a fairly large one in 1842. The vent was apparently at or near Goat Rocks on the northeast flank. Goat Rocks dome was the site of the bulge in the 1980 eruption, and it was obliterated in the major eruption event on May 18, 1980 that destroyed the entire north face and top 1,300 feet (400 m) of the mountain. On March 20, 1980, Mount St. Helens experienced a magnitude 4.2 earthquake; and, on March 27, steam venting started. By the end of April, the north side of the mountain had started to bulge. On May 18, a second earthquake, of magnitude 5.1, triggered a massive collapse of the north face of the mountain. It was the largest known debris avalanche in recorded history. The magma in St. Helens burst forth into a large - scale pyroclastic flow that flattened vegetation and buildings over 230 square miles (600 km²). More than 1.5 million metric tons of sulfur dioxide were released into the atmosphere. On the Volcanic Explosivity Index scale, the eruption was rated a five, and categorized as a Plinian eruption. The collapse of the northern flank of St. Helens mixed with ice, snow, and water to create lahars (volcanic mudflows).
The lahars flowed many miles down the Toutle and Cowlitz Rivers, destroying bridges and lumber camps. A total of 3,900,000 cubic yards (3,000,000 m³) of material was transported 17 miles (27 km) south into the Columbia River by the mudflows. For more than nine hours, a vigorous plume of ash erupted, eventually reaching 12 to 16 miles (20 to 27 km) above sea level. The plume moved eastward at an average speed of 60 miles per hour (100 km / h), with ash reaching Idaho by noon. Ash from the eruption was found collecting on top of cars and roofs the next morning as far away as the city of Edmonton in Alberta, Canada. By about 5:30 p.m. on May 18, the vertical ash column declined in stature, and less severe outbursts continued through the night and for the next several days. The St. Helens May 18 eruption released 24 megatons of thermal energy; it ejected more than 0.67 cubic miles (2.79 km³) of material. The removal of the north side of the mountain reduced St. Helens ' height by about 1,300 feet (400 m) and left a crater 1 mile (1.6 km) to 2 miles (3.2 km) wide and 0.5 miles (800 m) deep, with its north end open in a huge breach. The eruption killed 57 people, nearly 7,000 big game animals (deer, elk, and bear), and an estimated 12 million fish from a hatchery. It destroyed or extensively damaged over 200 homes, 185 miles (298 km) of highway and 15 miles (24 km) of railways. Between 1980 and 1986, activity continued at Mount St. Helens, with a new lava dome forming in the crater. Numerous small explosions and dome - building eruptions occurred. From December 7, 1989, to January 6, 1990, and from November 5, 1990, to February 14, 1991, the mountain erupted with sometimes huge clouds of ash. Magma reached the surface of the volcano about October 11, 2004, resulting in the building of a new lava dome on the existing dome 's south side. This new dome continued to grow throughout 2005 and into 2006. Several transient features were observed, such as a lava spine nicknamed the "whaleback, '' which comprised long shafts of solidified magma being extruded by the pressure of magma beneath. These features were fragile and broke down soon after they were formed. On July 2, 2005, the tip of the whaleback broke off, causing a rockfall that sent ash and dust several hundred meters into the air. Mount St. Helens showed significant activity on March 8, 2005, when a 36,000 - foot (11,000 m) plume of steam and ash emerged -- visible from Seattle. This relatively minor eruption was a release of pressure consistent with ongoing dome building. The release was accompanied by a magnitude 2.5 earthquake. Another feature to emerge from the dome was called the "fin '' or "slab. '' Approximately half the size of a football field, the large, cooled volcanic rock was being forced upward as quickly as 6 ft (2 m) per day. In mid-June 2006, the slab was crumbling in frequent rockfalls, although it was still being extruded. The height of the dome was 7,550 feet (2,300 m), still below the height reached in July 2005 when the whaleback collapsed. On October 22, 2006, at 3:13 p.m. PST, a magnitude 3.5 earthquake broke loose Spine 7. The collapse and avalanche of the lava dome sent an ash plume 2,000 feet (600 m) over the western rim of the crater; the ash plume then rapidly dissipated. On December 19, 2006, a large white plume of condensing steam was observed, leading some people in the media to assume there had been a small eruption. However, the Cascades Volcano Observatory of the USGS did not mention any significant ash plume.
The volcano was in continuous eruption from October 2004, but this eruption consisted in large part of a gradual extrusion of lava forming a dome in the crater. On January 16, 2008, steam began seeping from a fracture on top of the lava dome. Associated seismic activity was the most noteworthy since 2004. Scientists suspended activities in the crater and on the mountain flanks, but the risk of a major eruption was deemed low. By the end of January, the eruption paused; no more lava was being extruded from the lava dome. On July 10, 2008, it was determined that the eruption had ended, after more than six months of no volcanic activity. American Indian lore contains numerous legends to explain the eruptions of Mount St. Helens and other Cascade volcanoes. The most famous of these is the Bridge of the Gods legend told by the Klickitat people. In their tale, the chief of all the gods and his two sons, Pahto (also called Klickitat) and Wy'east, traveled down the Columbia River from the Far North in search of a suitable area to settle. They came upon an area that is now called The Dalles and thought they had never seen a land so beautiful. The sons quarreled over the land, so to solve the dispute their father shot two arrows from his mighty bow -- one to the north and the other to the south. Pahto followed the arrow to the north and settled there while Wy'east did the same for the arrow to the south. The chief of the gods then built the Bridge of the Gods, so his family could meet periodically. When the two sons of the chief of the gods fell in love with a beautiful maiden named Loowit, she could not choose between them. The two young chiefs fought over her, burying villages and forests in the process. The area was devastated and the earth shook so violently that the huge bridge fell into the river, creating the cascades of the Columbia River Gorge. For punishment, the chief of the gods struck down each of the lovers and transformed them into great mountains where they fell. Wy'east, with his head lifted in pride, became the volcano known today as Mount Hood. Pahto, with his head bent toward his fallen love, was turned into Mount Adams. The fair Loowit became Mount St. Helens, known to the Klickitats as Louwala - Clough, which means "smoking or fire mountain '' in their language (the Sahaptin called the mountain Loowit). The mountain is also of sacred importance to the Cowlitz and Yakama tribes that also historically lived in the area. They find the area above its tree line to be of exceptional spiritual significance, and the mountain (which they call "Lawetlat'la '', roughly translated as "the smoker '') features prominently in their creation myth, and in some of their songs and rituals. In recognition of this cultural significance, over 12,000 acres (4,900 ha) of the mountain (roughly bounded by the Loowit Trail) have been listed on the National Register of Historic Places. Other area tribal names for the mountain include "nšh´ák´ '' ("water coming out '') from the Upper Chehalis, and "aka akn '' ("snow mountain ''), a Kiksht term. Royal Navy Commander George Vancouver and the officers of HMS Discovery made the Europeans ' first recorded sighting of Mount St. Helens on May 19, 1792, while surveying the northern Pacific Ocean coast. Vancouver named the mountain for British diplomat Alleyne Fitzherbert, 1st Baron St Helens on October 20, 1792, as it came into view when the Discovery passed into the mouth of the Columbia River.
Years later, explorers, traders, and missionaries heard reports of an erupting volcano in the area. Geologists and historians determined much later that the eruption took place in 1800, marking the beginning of the 57 - year - long Goat Rocks Eruptive Period (see geology section). Alarmed by the "dry snow, '' the Nespelem tribe of northeastern Washington danced and prayed rather than collecting food and suffered during that winter from starvation. In late 1805 and early 1806, members of the Lewis and Clark Expedition spotted Mount St. Helens from the Columbia River but did not report either an ongoing eruption or recent evidence of one. They did however report the presence of quicksand and clogged channel conditions at the mouth of the Sandy River near Portland, suggesting an eruption by Mount Hood sometime in the previous decades. In 1829 Hall J. Kelley led a campaign to rename the Cascade Range as the President 's Range and also to rename each major Cascade mountain after a former President of the United States. In his scheme Mount St. Helens was to be renamed Mount Washington. The first authenticated eyewitness report of a volcanic eruption was made in March 1835 by Meredith Gairdner, while working for the Hudson 's Bay Company stationed at Fort Vancouver. He sent an account to the Edinburgh New Philosophical Journal, which published his letter in January 1836. James Dwight Dana of Yale University, while sailing with the United States Exploring Expedition, saw the quiescent peak from off the mouth of the Columbia River in 1841. Another member of the expedition later described "cellular basaltic lavas '' at the mountain 's base. In late fall or early winter of 1842, nearby settlers and missionaries witnessed the so - called "Great Eruption ''. This small - volume outburst created large ash clouds, and mild explosions followed for 15 years. The eruptions of this period were likely phreatic (steam explosions). Josiah Parrish in Champoeg, Oregon witnessed Mount St. Helens in eruption on November 22, 1842. Ash from this eruption may have reached The Dalles, Oregon, 48 miles (77 km) southeast of the volcano. In October 1843, future California governor Peter H. Burnett recounted a story of an aboriginal American man who badly burned his foot and leg in lava or hot ash while hunting for deer. The likely apocryphal story went that the injured man sought treatment at Fort Vancouver, but the contemporary fort commissary steward, Napoleon McGilvery, disclaimed knowledge of the incident. British lieutenant Henry J. Warre sketched the eruption in 1845, and two years later Canadian painter Paul Kane created watercolors of the gently smoking mountain. Warre 's work showed erupting material from a vent about a third of the way down from the summit on the mountain 's west or northwest side (possibly at Goat Rocks), and one of Kane 's field sketches shows smoke emanating from about the same location. On April 17, 1857, the Republican, a Steilacoom, Washington, newspaper, reported that "Mount St. Helens, or some other mount to the southward, is seen... to be in a state of eruption ''. The lack of a significant ash layer associated with this event indicates that it was a small eruption. This was the first reported volcanic activity since 1854. Before the 1980 eruption, Spirit Lake offered year - round recreational activities. In the summer there was boating, swimming, and camping, while in the winter there was skiing. Fifty - seven people were killed during the eruption.
Had the eruption occurred one day later, when loggers would have been at work, rather than on a Sunday, the death toll could have been much higher. 83 - year - old Harry R. Truman, who had lived near the mountain for 54 years, became famous when he decided not to evacuate before the impending eruption, despite repeated pleas by local authorities. His body was never found after the eruption. Another victim of the eruption was 30 - year - old volcanologist David A. Johnston, who was stationed on the nearby Coldwater Ridge. Moments before his position was hit by the pyroclastic flow, Johnston radioed his famous last words: "Vancouver! Vancouver! This is it! '' Johnston 's body was never found. U.S. President Jimmy Carter surveyed the damage and said, "Someone said this area looked like a moonscape. But the moon looks more like a golf course compared to what 's up there. '' A film crew, led by Seattle filmmaker Otto Seiber, was dropped by helicopter on St. Helens on May 23 to document the destruction. Their compasses, however, spun in circles and they quickly became lost. A second eruption occurred on May 25, but the crew survived and was rescued two days later by National Guard helicopter pilots. Their film, The Eruption of Mount St. Helens, later became a popular documentary. In 1982, President Ronald Reagan and the U.S. Congress established the Mount St. Helens National Volcanic Monument, a 110,000 - acre (45,000 ha) area around the mountain and within the Gifford Pinchot National Forest. Following the 1980 eruption, the area was left to gradually return to its natural state. In 1987, the U.S. Forest Service reopened the mountain to climbing. It remained open until 2004, when renewed activity caused the closure of the area around the mountain (see the geological history section above for more details). Most notable was the closure of the Monitor Ridge trail, which previously allowed up to 100 permitted hikers per day to climb to the summit. On July 21, 2006, the mountain was again opened to climbers. In February 2010, a climber died after falling from the rim into the crater. The mountain is now circled by the Loowit Trail at elevations of 4,000 -- 4,900 feet (1,200 -- 1,500 m). The northern segment of the trail from the South Fork Toutle River on the west to Windy Pass on the east is a restricted zone where camping, biking, pets, fires, and off - trail excursions are all prohibited. Mount St. Helens is a popular climbing destination for both beginning and experienced mountaineers. The peak is climbed year - round, although it is more often climbed from late spring through early fall. All routes include sections of steep, rugged terrain. A permit system has been in place for climbers since 1987. A climbing permit is required year - round for anyone who will be above 4,800 feet (1,500 m) on the slopes of Mount St. Helens. The standard hiking / mountaineering route in the warmer months is the Monitor Ridge Route, which starts at the Climbers Bivouac. This is the most popular and crowded route to the summit in the summer and gains about 4,600 feet (1,400 m) in approximately 5 miles (8 km) to reach the crater rim. Although strenuous, it is considered a non-technical climb that involves some scrambling. Most climbers complete the round trip in 7 to 12 hours. The Worm Flows Route is considered the standard winter route on Mount St. Helens, as it is the most direct route to the summit.
The route gains about 5,700 feet (1,700 m) in elevation over about 6 miles (10 km) from trailhead to summit but does not demand the technical climbing that some other Cascade peaks like Mount Rainier do. The "Worm Flows '' part of the route name refers to the rocky lava flows that surround the route. This route can be accessed via the Marble Mountain Sno - Park and the Swift Ski Trail.
ok google what's the new york giants record
History of the New York Giants - Wikipedia The New York Giants, an American football team which currently plays in the National Football League 's National Football Conference, has a history dating back more than 80 seasons. The Giants were founded in 1925 by Tim Mara in the then five - year - old NFL. Mara owned the team until his death in 1959, when it was passed on to his sons, Wellington and Jack. During their history, the Giants have won eight NFL championships, four of which came in Super Bowls. In just its third season, the team finished with the best record in the league at 11 -- 1 -- 1 and was awarded the NFL title. In a 14 - year span beginning in 1933, New York qualified to play in the NFL championship game eight times, winning twice (1934 and 1938). They did not win another championship until 1956, aided by several future Hall of Fame players such as running back Frank Gifford, linebacker Sam Huff, and offensive tackle Roosevelt Brown. From 1958 to 1963, the Giants played in the NFL championship game five times, but failed to win. The 1958 NFL Championship game, in which they lost 23 -- 17 in overtime to the Baltimore Colts, is credited with increasing the popularity of the NFL in the United States. The Giants registered just two winning seasons from 1964 to 1980 and were unable to advance to the playoffs. From 1981 to 1990, the team qualified for the postseason seven times and won Super Bowls XXI and XXV. The team 's success during the 1980s was aided by head coach Bill Parcells, quarterback Phil Simms and Hall of Fame linebackers Lawrence Taylor and Harry Carson. New York struggled throughout much of the 1990s as Parcells left the team, and players such as Simms and Taylor declined and eventually retired. They returned to the Super Bowl in 2000, but lost to the Baltimore Ravens in Super Bowl XXXV. The Giants upset the heavily favored New England Patriots in Super Bowls XLII and XLVI. The Giants were founded in 1925 by original owner Tim Mara with an investment of $500. Legally named "New York Football Giants '' (which they still are to this day) to distinguish themselves from the baseball team of the same name, they became one of the first teams in the then five - year - old National Football League. In 1919, Charles Stoneham, the owner of the New York Giants baseball team, had organized and promoted a professional football team to be called the New York Giants. The team folded before its first game. The New York Football Giants played their first game against All New Britain in New Britain, Connecticut on October 5, 1925. Although the Giants were successful on the field in their first season, going 8 -- 4, their financial status was a different story. Overshadowed by baseball, boxing, and college football, professional football was not a popular sport in 1925. They were in dire financial straits until the eleventh game of the season, when Red Grange and the Chicago Bears came to town, attracting over 73,000 fans. This gave the Giants a much needed influx of revenue, and perhaps altered the history of the franchise. New York finished 11 -- 1 -- 1 in 1927. Their league - best defense posted 10 shutouts in 13 games. New coach Earl Potteiger led the team into a late - season game against Chicago with first place on the line.
New York won 13 -- 7 in what lineman Steve Owen called "the toughest, roughest football game I ever played. '' Then they won their final two regular season games to secure their first championship. Following a disappointing 4 -- 7 -- 2 season the next year, Potteiger was replaced by LeRoy Andrews. Before the 1929 season, Mara purchased the entire squad of the Detroit Wolverines, including star quarterback Benny Friedman. The Wolverines had finished in third place the year before. Led by Friedman, New York 's record soared to 13 -- 1 -- 1. However, their lone loss was a 20 -- 6 setback in November to the Green Bay Packers, who, by virtue of this win and their 12 -- 0 -- 1 record, won the NFL title. Following the season, Mara transferred ownership over to his two sons to insulate the team from creditors. At the time, Jack was just 22, and Wellington only 14. In 1930, the quality of the professional game was still in question, with many claiming the college "amateurs '' played with more intensity. In December 1930, the Giants played a team of Notre Dame All - Stars at the Polo Grounds to raise money for the unemployed of New York City. It was also an opportunity to establish the superiority of the pro game. Knute Rockne reassembled his Four Horsemen along with other Notre Dame legends, and told them to score early, then defend. But from the beginning, it was a one - sided contest, with Benny Friedman running for two Giants touchdowns and Hap Moran passing for another. Notre Dame failed to score. When it was over, Rockne told his team, "That was the greatest football machine I ever saw. I am glad none of you got hurt. '' The game raised $115,183 for the homeless, and is often credited with establishing the legitimacy of the professional game. The Giants hired All - Pro offensive tackle Steve Owen to be their new player - head coach prior to the 1931 season. He coached the team for the next 23 years, winning two NFL championships, and was inducted into the Pro Football Hall of Fame in 1966. Owen never had a contract with the Mara family; he coached his entire tenure on a handshake basis. Before the 1931 season, New York acquired center Mel Hein, who also played the linebacker position. He would go on to a fifteen - year NFL career in which, as a center, he became an All - NFL First Team selection eight times, and the only offensive lineman ever named league MVP. Friedman quit the team following the season when Mara denied him an ownership stake, telling him "I 'm sorry... but the Giants are for my sons. '' New York struggled in 1931 and 1932, finishing with a combined record of 11 -- 12 -- 3. The Giants acquired University of Michigan All - American quarterback Harry Newman and versatile free agent halfback Ken Strong before the 1933 season. New York finished 11 -- 3, first in the new Eastern Division. Newman led the NFL in passes completed (53), passing yards (973), touchdown passes (11), and longest pass completion (78 yards), with his passing yardage total setting an NFL record. New York 's resurgence was led by some of the league 's best linemen, such as Ray Flaherty and future Hall of Famers Red Badgro and Hein. They advanced to play in the league 's first official championship game in Chicago 's Wrigley Field versus the Bears, where they lost 23 -- 21 in a game which had six lead changes. In the 1934 NFL Championship Game, the Giants defeated previously unbeaten Chicago 30 -- 13 at the Polo Grounds on an icy field with temperatures peaking at 25 degrees.
Before the game, team treasurer John Mara talked with Owen and team captain Flaherty about the field conditions. Flaherty suggested the Giants wear sneakers on the frozen field, as he had played in a game under similar circumstances at Gonzaga, and the sneakers proved to be effective. Mara dispatched equipment manager Abe Cohen to get as many sneakers as he could. Due to traffic and the inability to find any athletic goods stores open on Sunday, Cohen was unable to return before the game started, and the Giants, wearing conventional footwear, trailed 10 -- 3 at the end of the first half. Realizing time was short, Cohen went to Manhattan College -- where he had a key to the equipment and locker rooms -- and returned to the Polo Grounds at halftime with nine pairs of basketball sneakers, saying that "nine pairs was all I could get. '' Players donned the sneakers and New York, after allowing Chicago another field goal late in the 3rd quarter, responded with 27 unanswered points in the 4th quarter to win their first NFL Championship game. The game would come to be known as "The Sneakers Game '', and the 27 points the Giants scored in the 4th quarter set a single - quarter championship game scoring record that stood for decades. After the game, offensive tackle Len Grant expressed his sincere gratitude by stating "God bless Abe Cohen. '' The Giants were unable to repeat as champions in 1935, as they fell to the Detroit Lions 26 -- 7 in the NFL Championship game. The Lions built a 13 -- 0 lead before the Giants were able to cut the lead to 13 -- 7 in the 3rd quarter. However, the Lions defense helped their team score two late touchdowns with a blocked punt and an interception. The Giants were so successful from the latter half of the 1930s until the United States ' entry into World War II that, according to one publication, "from 1936 to 1941 the New York Giants annually fielded a collection of NFL all - stars. '' They added their third NFL championship in 1938 with a 23 -- 17 win over Green Bay. The Giants blocked two Green Bay punts to establish an early advantage before the Packers came back to take a 17 -- 16 lead. However, in the 4th quarter, Ed Danowski threw a 23 - yard touchdown pass to Hank Soar, and the Giants defense held the Packers scoreless. The Giants made the championship game again the next year, but lost in a rematch to the Packers, 31 -- 16. They also advanced to the championship game in 1941, losing to the Bears, 37 -- 9. Both games were close early before their respective opponents went on an offensive surge to break the game open late. In 1944, the Giants reached the championship game, where they faced the Green Bay Packers for the third time in ten seasons. They lost again, this time 14 -- 7 as Ted Fritsch scored two touchdowns, and the Packers defense held on to the lead despite a fourth - quarter touchdown by the Giants. By 1946, Mara had given over complete control of the team to his two sons. Jack controlled the business aspects, while Wellington controlled the on - field operations. In 1946, the Giants again reached the Championship game, for the eighth time in 14 seasons. However, they were beaten by the Sid Luckman - led Bears, 24 -- 14. Before the 1948 season, the Giants signed defensive back Emlen Tunnell, the first African American player in team history, and later the first African American inducted into the Hall of Fame. From 1947 to 1949, they never finished above .500, but came back with a 10 -- 2 record in 1950.
However, they lost 8 -- 3 to the Cleveland Browns, whom they had beaten twice in the regular season, in the 1950 divisional playoff game. In 1949, halfback Gene "Choo - Choo '' Roberts scored a league - high 17 touchdowns, and in 1950, he set a team record that would stand for over 50 years when he rushed for 218 yards on November 12. Following the 1953 season, an important transition in Giants history occurred. After being the team 's coach for 23 years, Steve Owen was fired by Wellington and Jack Mara, and replaced by Jim Lee Howell. Wellington later described the move by calling it "the hardest decision I 'd ever made ''. New York went 7 -- 5 in 1954 under Howell. In their 31st and final season playing their home games at the Polo Grounds in 1955, they went 5 -- 1 -- 1 over their final seven games to finish 6 -- 5 -- 1. They were led by rejuvenated running back Frank Gifford, who played the entire season solely on offense for the first time in several years. The Giants won their fourth NFL Championship in 1956. Playing their home games at Yankee Stadium for the first time, New York won the Eastern Division with an 8 -- 3 -- 1 record. In the NFL Championship Game on an icy field against the Chicago Bears, the Giants wore sneakers as they had 22 years previous. They dominated the Bears, winning 47 -- 7. The 1956 Giants featured a number of future Hall of Fame players, including Gifford, Sam Huff, and Roosevelt Brown. Equally notable, the team featured as its coordinators future Hall of Fame head coaches Tom Landry (defense) and Vince Lombardi (offense). The Giants had another successful year in 1958. They tied for the Eastern Division regular season title with a 9 -- 3 record by defeating the Cleveland Browns 13 -- 10 on the last day of the regular season. They beat the Browns again a week later in a one - game playoff to determine the division winner. They advanced to play the Baltimore Colts in the NFL Championship Game. This game, which would become known as "The Greatest Game Ever Played '', is considered a watershed moment in league history, and marked the beginning of the rise of professional football into the dominant sport in the American market. The game was competitive. The Giants got off to an early 3 -- 0 lead, then the Colts scored two touchdowns to take a 14 -- 3 halftime lead. In the 3rd quarter, New York 's defense made a goal line stand, which became a turning point in the game. New York, who had trouble mounting drives to that point, then had a 95 - yard drive which culminated in a touchdown, making the score 14 -- 10. They drove again in the 4th quarter, with quarterback Charlie Conerly throwing a 15 - yard touchdown pass to Frank Gifford to take the lead, 17 -- 14. The Colts put together one last drive with less than two minutes left. The standout player was receiver Raymond Berry, who caught three passes for 62 yards, the last one for 22 yards to the New York 13 - yard line. With seven seconds left in regulation, Steve Myhra kicked a 20 - yard field goal to tie the score 17 -- 17, sending the game to overtime for the first time in NFL history. After winning the coin toss and receiving the ball, the Giants offense stalled and was forced to punt. From their own 20, the Colts drove the ball to the New York 1 - yard line, where Alan Ameche ran for a touchdown to give the Colts the championship, 23 -- 17. New York 's success continued into the 1960s. They finished 9 -- 3 in 1959 and faced the Colts in a championship game rematch.
They lost again, this time in a far less dramatic game, 31 -- 16. Led by quarterback Y.A. Tittle and head coach Allie Sherman, the Giants won three consecutive Eastern Division titles from 1961 to 1963. In 1961, they were beaten 37 -- 0 by the Packers. In 1962, they went into the championship game with a league - best 12 -- 2 record and a nine - game winning streak, but they lost to the Packers again, 16 -- 7. The Giants finished with an 11 -- 3 record in 1963 and faced the Bears in the NFL championship game. On an icy field in Chicago, the Giants ' defense played well, but the Bears ' newly invented zone defense intercepted Tittle five times and battered him throughout the game. Sherman resisted calls from players such as linebacker Sam Huff to replace the struggling Tittle. The Giants defense held the Bears in check, but they lost 14 -- 10, their third straight NFL Championship Game defeat. The Giants ' run of championship game appearances combined with their large market location translated into financial success. By the early 1960s, the Giants were receiving $175,000 a game under the NFL 's television contract with CBS -- four times as much as small - market Green Bay, which was one of the most successful teams of the era. However, in the league 's new contract, the Maras convinced the other owners that it would be in the best interest of the NFL to share television revenue equally, a practice which is still current, and is credited with strengthening the league. After the 1963 season, the team fell apart. A roster filled with mostly older veterans plus some bad personnel moves (e.g. the dispatching of Rosey Grier, Sam Huff, and Don Chandler) led to a quick exit from the top of the standings. The Giants finished 2 -- 10 -- 2 in 1964, beginning an 18 - season playoff drought. The seasons of 1964 through 1980 in team history have often been referred to as "the wilderness years '' for several reasons: 1) The franchise lost its status as an elite NFL team by posting only two winning seasons, against twelve losing and three .500 seasons during this span; 2) The Giants became a "team of nomads, '' calling four different stadiums home in the 1970s (Yankee Stadium, the Yale Bowl, Shea Stadium, and finally Giants Stadium in 1976); 3) New York tried several head coach and quarterback combinations during this period, but with almost no success (from 1964 to 1983, no coach or starting quarterback could boast even a .500 record). The team rebounded with a 7 -- 7 record in 1965 (mostly due to the acquisition of quarterback Earl Morrall during the offseason) before compiling a league - worst 1 -- 12 -- 1 record and allowing over 500 points on defense in 1966. This season also included a 72 -- 41 loss to the rival Washington Redskins at D.C. Stadium in the highest - scoring game in league history. Interest in the team was waning, especially with the rapid rise of the New York Jets, with their wide - open style of play and charismatic quarterback Joe Namath. The Giants acquired quarterback Fran Tarkenton from the Minnesota Vikings before the 1967 season in exchange for their 1st - and 2nd - round draft picks, and showed improvement. They finished 7 -- 7 in 1967 and were 7 -- 3 through ten games in 1968, leaving them one game behind Capitol Division leader Dallas. However, New York dropped its final four games to again finish 7 -- 7.
Notably, in 1968, one of Tarkenton 's favorite targets, wide receiver Homer Jones, made the Pro Bowl; it wasn't until 2010 that another Giants receiver (Steve Smith) would make the Pro Bowl. Since Smith, Victor Cruz (2012) and Odell Beckham Jr. (2014 -- 16) have made it to the Pro Bowl. Jones ' average of 22.3 yards per reception for his career is still an NFL record. During the 1969 preseason, the Giants lost their first meeting with the Jets, 37 -- 14 at the Yale Bowl in New Haven, Connecticut. Following the game, Wellington Mara fired coach Allie Sherman and replaced him with former Giants fullback Alex Webster. On opening day of the 1969 regular season, Tarkenton led New York to a 24 -- 23 victory over his former team, the Vikings, by throwing two touchdown passes in the 4th quarter. The Giants went 6 -- 8 that season. They showed marked improvement in 1970; after an 0 -- 3 start, they rebounded to finish 9 -- 5, narrowly missing the playoffs by losing their final game to the Los Angeles Rams. Tarkenton had one of his best seasons as a Giant and made his fourth straight Pro Bowl. Running back Ron Johnson was also selected to the Pro Bowl; the halfback ran for 1,027 yards, becoming the first Giant to gain 1,000 yards rushing in a season. In 1971, Johnson missed most of the season with a knee injury, and New York dropped to 4 -- 10, resulting in Tarkenton being traded back to the Vikings. The Giants rallied somewhat in 1972 to finish 8 -- 6. Journeyman quarterback Norm Snead (acquired in the trade for Tarkenton) led the league in completion percentage and had his best season. Other standouts and Pro Bowl selections that year were running back Johnson, who rushed for 1,182 yards (breaking his own team record) and caught 45 passes, tight end Bob Tucker, who followed up his 1971 NFC - leading 59 - catch season with 55 in 1972, and defensive stars Jack Gregory and John Mendenhall. The Giants boasted the top offense in the NFC, and after a season - finishing 23 -- 3 win at Dallas secured their second winning campaign in three years, the future looked bright. However, after the 1972 season, New York would endure one of the worst periods in its history. Desiring their own home stadium, in 1973, the Giants reached an agreement with the New Jersey Sports and Exposition Authority to play their home games at a new, state - of - the - art, dedicated football stadium. Later named Giants Stadium, it was to be built at a new sports complex in East Rutherford, New Jersey. As the complex was being built, and their current home at Yankee Stadium was being renovated, they would be without a home for three years, and were dubbed "the orphans of the NFL. '' Their final full season at Yankee Stadium was 1972. After playing their first two games there in 1973, the Giants played the rest of their home games in 1973, as well as all of their home games in 1974, at the Yale Bowl in New Haven, Connecticut. This was done out of a desire to have their own home field, as opposed to having to share Shea Stadium with the Jets. However, between access problems, neighborhood issues, the fact that the Yale Bowl was not ideally suited for pro football (the stadium did not have lights, nor does it today), the age of the stadium (built in 1914), and the lack of modern amenities, the Giants reconsidered their decision and agreed to share Shea Stadium with the Jets in 1975. New York left the Yale Bowl after losing all seven home games played there in 1974 and compiling a home record of 1 -- 11 over that two - year stretch.
One of the bright spots in this era was tight end Bob Tucker. From 1970 through 1977, Tucker was one of the top tight ends in the NFL. He amassed 327 receptions, 4,376 yards, and 22 touchdowns during his years as a Giant. Despite their new home and heightened fan interest, New York still played subpar football in 1976 and 1977. In 1978, the Giants started the year 5 -- 6 and on November 19 played the Philadelphia Eagles at home with a chance to solidify their playoff prospects. However, the season imploded in one of the most improbable finishes in NFL history. The Giants led 17 -- 12 and had possession of the ball with only 30 seconds left. They needed only to kneel on the ball to end the game, as the Eagles had no time outs. However, instead of kneeling, offensive coordinator Bob Gibson ordered New York quarterback Joe Pisarcik to hand the ball off to fullback Larry Csonka. Csonka was unprepared to receive the handoff, and the ball rolled off his hip and bounced free. Eagles safety Herman Edwards picked up the loose ball and ran, untouched, for a score, giving the Eagles an improbable 19 -- 17 victory. This play is referred to as "The Miracle in the Meadowlands '' among Eagles fans, and "The Fumble '' among Giants fans. In the aftermath of the defeat, Gibson was fired, and the Giants lost three out of their last four games to finish out of the playoffs for the 15th straight season, leading them to let coach John McVay go as well. However, following the 1978 season came the steps that would, in time, return New York to the pinnacle of the NFL. New York decided to hire a general manager for the first time in franchise history following the 1978 season. The search grew contentious and fractured the relationship between owners Wellington and Tim Mara. Finally, the Maras asked NFL Commissioner Pete Rozelle to step in with a recommendation. Rozelle recommended George Young, who worked in personnel for the Miami Dolphins and had been an assistant coach for the Baltimore Colts. Young was hired, but the rift between the Maras lasted for several years. Young hired San Diego Chargers assistant Ray Perkins as head coach and, to the surprise of many, drafted unknown quarterback Phil Simms from Morehead State. New York continued to struggle, finishing 6 -- 10 in 1979 and 4 -- 12 in 1980. With the 2nd overall pick in the 1981 draft, the Giants selected linebacker Lawrence Taylor. The impact that Taylor had on the Giants ' defense was immediate. He was named the NFL Defensive Rookie of the Year and NFL Defensive Player of the Year, becoming, to date, the only rookie to ever win the NFL Defensive Player of the Year award. His arrival elevated the Giants linebacking corps -- which already included future Hall of Famer Harry Carson and Pro Bowler Brad Van Pelt -- into one of the NFL 's best. It also precipitated New York 's transformation from allowing 425 points in 1980 to 257 in 1981. Another bright spot was the rushing game, keyed by the acquisition (via trade from the Houston Oilers) of running back Rob Carpenter in early October. Carpenter rushed for 748 yards and scored five touchdowns through the balance of the season, and the Giants went 9 -- 7. They defeated the Eagles in the first round of the playoffs, 27 -- 21, then lost to the eventual Super Bowl champion San Francisco 49ers 38 -- 24 in the divisional playoffs. In the strike - shortened 1982 season, the Giants lost their first two games before the strike, and their first game upon returning.
They won their next three games to even their record at 3 -- 3. Perkins then announced that he was leaving to take the head coaching job at Alabama after the season, and the team lost the next two games, effectively eliminating them from the playoffs (despite defeating the Eagles in the season finale to go 4 -- 5). Taylor remained a bright spot, repeating as the league 's Defensive Player of the Year. Young chose Bill Parcells, the Giants ' defensive coordinator, as the team 's new head coach. Parcells ' first year proved difficult. In his first major decision, he named Scott Brunner as his starting quarterback over Phil Simms. At first, it appeared his decision was justified, especially after a 27 -- 3 Monday night victory over Green Bay gave New York a 2 -- 2 record. But then they lost 10 of their final 12 games. Parcells ignored fans ' protests and stuck with Brunner for most of the year, although Jeff Rutledge saw considerable late - season action. Simms finally played in a Week 6 game against the Eagles, only to suffer a season - ending thumb injury. Simms won the starting job back in 1984, and Brunner was traded. The Giants had a resurgent season, highlighted by a second - half stretch where they won five of six games. Despite losing their last two to finish 9 -- 7, they still made the playoffs. In the first round, they defeated the highly favored Los Angeles Rams 16 -- 13 on the road before losing 21 -- 10 to the eventual Super Bowl champion San Francisco 49ers. Simms threw for 4,044 yards, making him the first Giant to pass for 4,000 yards in a season. The Giants ' success continued in 1985, as they went 10 -- 6. The defense carried the team and led the NFL in sacks with 68. They won their first - round playoff game, 17 -- 3 over the defending champion 49ers. It was New York 's first postseason win at home since 1958, and their first ever at Giants Stadium. In the divisional playoffs, they lost 21 -- 0 to the eventual Super Bowl champion Chicago Bears. Many of the players who would play key roles on New York 's Super Bowl teams emerged in 1985. Joe Morris became the feature back, running for 1,338 yards, scoring 21 touchdowns, and making the Pro Bowl. Second - year receiver Lionel Manuel led the Giants with 49 receptions, and rookie tight end Mark Bavaro had 37 catches. Simms threw every pass for New York for the second consecutive season, and passed for over 3,800 yards. Defensive end Leonard Marshall recorded 15.5 sacks, and Taylor added 13. New York entered the 1986 season as one of the favorites to win the Super Bowl. They had their first test in a Monday Night game against the defending NFC East champion Dallas Cowboys. They lost at Texas Stadium, 31 -- 28. However, they won their next five in a row and 14 of their last 15, to finish the season with a 14 -- 2 record. One of the signature plays of the season occurred during a Monday Night game in December. Here is a description of the play taken from a Monday Night Football broadcast in 2005: "On December 1st, 1986... with the Giants trailing, (Mark) Bavaro catches an innocent pass from Phil Simms over the middle. It takes nearly seven 49ers defenders to finally drag him down, some of which are carried for almost 20 yards, including future Hall of Famer Ronnie Lott. Bavaro 's inspiring play jump starts the Giants, who win the game and eventually the Super Bowl. '' New York 's defense allowed 236 points during the season, second fewest in the NFL, and Taylor set a team record with 20.5 sacks.
He won a record third Defensive Player of the Year Award, and was named league MVP. The Giants defeated San Francisco 49 -- 3 in the NFC Divisional Playoffs, then Washington 17 -- 0 in the NFC Championship Game. The Giants advanced to play the Denver Broncos in Super Bowl XXI in front of 101,063 fans at the Rose Bowl. After falling behind 10 -- 9 at halftime, they came back to beat the Broncos 39 -- 20. Simms was named the game 's MVP after completing 22 of 25 passes (88 %) -- a Super Bowl record. In 1987, the Giants lost their first two games before the players strike. Unlike in the players strike five years previous, NFL owners decided to use replacement players; the Giants lost all three of their replacement games, putting them at 0 -- 5 when the strike ended. Though the Giants went 6 -- 4 over their final 10 games, they finished out of the playoffs at 6 -- 9. Bright spots for the season included tight end Mark Bavaro, who led the team in catches with 55, and three New York linebackers -- Taylor, Carson, and Carl Banks -- making the Pro Bowl. New York 's 1988 season got off to a turbulent start due to an offseason scandal involving Taylor. Taylor had abused cocaine, violating the NFL 's substance abuse policy, and was suspended for the first four games of the season. Taylor 's over - the - edge lifestyle was becoming an increasing concern for fans and team officials. After his return, however, Taylor recorded 15.5 sacks in 12 games. The intense worry and scrutiny proved to be for naught, as Taylor passed his drug tests for the rest of his career. Predictably, the Giants struggled to start the season. They were 2 -- 2 when Taylor returned from his suspension. With Taylor back and playing well, however, they won six out of their next eight games. After two straight losses, the Giants won their next three contests to set up a win - or - go - home game against the Jets in the season finale. The Jets defeated the Giants 27 -- 21. When the Eagles beat the Cowboys, and the 49ers lost to the Rams later that night, the Eagles won the NFC East and the Rams clinched the final Wild Card berth. The Giants finished on the outside looking in despite a 10 -- 6 record, because in the tiebreakers, they were swept in the season series by Philadelphia and had a worse conference record than the Rams. The Giants ' 12 -- 4 record in 1989 was the NFC 's second - best, behind only San Francisco 's 14 -- 2 record. They lost their divisional playoff game in overtime to the Rams, 19 -- 13. The highlight of the game was wide receiver Flipper Anderson 's catch of the game - winning touchdown pass. After catching the ball, Anderson made a long run to the end zone, silencing the crowd in attendance. In 1989, free - agent acquisition Ottis Anderson ran for 1,023 yards and caught 28 passes. Rookie Dave Meggett also emerged as a threat on third downs and special teams, catching 34 passes for 531 yards, and making the Pro Bowl. The Giants won their first 10 games of the 1990 season, setting a record for the best start in the team 's history. The San Francisco 49ers also got off to a strong start, matching New York with their own 10 -- 0 start. Although both teams lost their next game, their Week 13 matchup was still eagerly anticipated. The Giants held the 49ers ' vaunted offense to seven points, but scored just three themselves. New York won the following week against Minnesota before facing the Buffalo Bills in their regular season home finale.
Despite holding a significant advantage in time of possession, they lost 17 -- 13, for their third loss in four games. To compound New York 's problems, Phil Simms went down with an injury that would sideline him for the rest of the year. His replacement, Jeff Hostetler, was an unproven career backup who had thrown a mere 68 passes coming into the season. The Giants won their final two games to secure a 13 -- 3 record and a first - round playoff bye as the NFC 's # 2 seed. They defeated Chicago 31 -- 3 in the Divisional Playoffs, setting up a rematch with the 49ers in San Francisco for the NFC Championship. As they had in Week 13, the Giants ' defense held San Francisco 's offense in check. In the game 's waning moments, nose tackle Erik Howard caused a Roger Craig fumble, and Taylor recovered it. New York drove downfield into San Francisco territory, and on the game 's last play, kicker Matt Bahr hit a 42 - yard field goal to defeat the 49ers, 15 -- 13. The win set up another rematch, this time in the Super Bowl against the Buffalo Bills. Super Bowl XXV took place amidst a background of war and patriotism. The Persian Gulf War had begun less than two weeks previous, and the nation rallied around the Super Bowl as a symbol of America. The Giants got off to a quick 3 -- 0 lead; however, the Bills scored the next 12 points. The Giants responded by running a nearly eight - minute drive, which culminated in a 14 - yard touchdown pass from Hostetler to Stephen Baker. The Giants received the second - half kickoff and mounted a record - setting drive that ran for over nine minutes (a Super Bowl record) and culminated in a 1 - yard touchdown run by Ottis Anderson, giving the Giants a 17 -- 12 lead. On the first play of the 4th quarter, the Bills ' Thurman Thomas ran for a 31 - yard touchdown that put Buffalo back in front, 19 -- 17. On the ensuing possession, the Giants drove down to the Buffalo 4 - yard line, and Bahr made a 21 - yard field goal, which gave the Giants a 20 -- 19 lead. Both teams exchanged possessions before the Bills began one final drive, driving down to the Giants 29 - yard line to set up what would be a potential game - winning 47 - yard field goal attempt by Scott Norwood. In what would become the game 's signature moment, Norwood 's attempt missed wide right, and the Giants won their second Super Bowl, 20 -- 19. The Giants set a Super Bowl record for time of possession with a mark of 40:33, and Ottis Anderson was named MVP of the game after rushing for 102 yards and a touchdown. The 1990 season marked the end of an era. After the Super Bowl, defensive coordinator Bill Belichick left to become head coach of the Cleveland Browns. Parcells also decided to leave the Giants in the spring of 1991 to pursue a career in broadcasting. In addition, there was an ownership change in what had been one of the most stable front offices in professional sports. In February 1991, Tim Mara was diagnosed with cancer, and he sold his 50 % interest in the team to Bob Tisch for a reported $80 million. This marked the first time since their inception in 1925 that the Giants had not been wholly owned and controlled by the Mara family. Following the departure of Parcells and Belichick -- whom many people saw as the likely successor to Parcells -- the surprise choice as head coach was running backs coach Ray Handley. Handley, however, was a somewhat reluctant coach, whose approach stood in stark contrast to the passionate and emotional style employed by Parcells.
As with Parcells eight years previous, one of Handley 's first major decisions involved replacing Phil Simms as starting quarterback. Jeff Hostetler was named the team 's starter. Though the Giants won their opening game in an NFC Championship Game rematch against the 49ers, 16 -- 14, they lost three out of their next four games to drop to 2 -- 3. Though they rallied to finish the season 8 -- 8, and Simms reclaimed his starting job later in the year, the excitement that surrounded the Giants the previous year was gone. One of the few promising young players to emerge on the team was second - year running back Rodney Hampton, who led the Giants in rushing with 1,059 yards. Throughout the 1991 season, it was clear that the team 's core players on defense had aged quickly. This deterioration continued in 1992, when Lawrence Taylor ruptured his Achilles tendon in the team 's tenth game, and the Giants promptly lost six out of their last seven games to finish the year 6 -- 10. The defense continued its descent, finishing 26th in the league in points allowed after leading the league in that category in 1990. Handley, who had become unpopular with both players and fans, was fired after the end of the regular season. Handley was replaced by former Denver Broncos head coach Dan Reeves, who had led the Broncos to three Super Bowls in four years, one against the Giants. After his dismissal from the Broncos, Reeves took the unusual step of lobbying for the job. After being rebuffed by a number of candidates, George Young was pleased that someone with Reeves ' credentials wanted the job. Reeves ' impact was immediate. As Parcells had done in 1984, Reeves named Simms his starting quarterback. The defense returned to form, and allowed more than 20 points only once all season. With two regular season games left, the Giants were 11 -- 3 and appeared poised for a first - round playoff bye. They were upset 17 -- 6, in the next - to - last week of the season, by a Phoenix Cardinals team that came into the game with just five wins, setting up a winner - take - all contest against Dallas in the final regular season game. Though the Giants played well, it was Emmitt Smith 's memorable performance with a separated shoulder that led the Cowboys to a 16 -- 13 overtime win, giving the Cowboys a sweep of the season series and home - field advantage throughout the NFC Playoffs. Despite the loss, the Giants made the playoffs as a Wild Card and won their first - round game, 17 -- 10 over the Vikings. However, they were defeated by the San Francisco 49ers 44 -- 3 in the divisional playoffs. Simms played in all 16 games, completing nearly 62 % of his passes and throwing for over 3,000 yards and 15 touchdowns. Simms, Hampton, offensive lineman Jumbo Elliott, and center Bart Oates made the Pro Bowl, and Reeves was named Coach of the Year by the Associated Press. After the season, Lawrence Taylor and Phil Simms, the two biggest figures of the late 1980s and early 1990s Giants teams, retired. Before the 1994 season, Reeves named Dave Brown, who had been a # 1 supplemental draft choice in 1992, the Giants ' starting quarterback. Though Brown led the Giants to wins in their first three games, they lost their next seven. The Giants recovered to win their last six games of the season, but missed the playoffs. During the winning streak, they never allowed more than 20 points in a game. The Giants regressed to a 5 -- 11 record in 1995. Much of the blame for the Giants ' poor performance was placed on Brown.
He put up lackluster numbers for the second straight year. Though the Giants defense still played well, and young players like Michael Strahan and Jessie Armstead began to emerge, the Giants inspired tepid interest league - wide and sent no players to the Pro Bowl for the second straight year. The Giants had another losing season in 1996, finishing 6 -- 10. Though Brown again started every game for the Giants, he turned in one of the worst seasons of any starting quarterback in the NFL, throwing for just 12 touchdowns against 20 interceptions. The Giants ' offense was one of the worst in the NFL and, unlike in previous years, the defense was unable to carry the team. After missing the playoffs for three consecutive seasons, Reeves was fired. The Giants hired former Arizona Cardinals offensive coordinator Jim Fassel as their head coach before the 1997 season. With the team 's offense floundering once again and a 2 -- 3 record after five games, Fassel turned to inexperienced Danny Kanell as the starting quarterback over Dave Brown. The Giants experienced a resurgent season, finishing 10 -- 5 -- 1 and winning the NFC East. They hosted a first - round playoff game against the Minnesota Vikings. The Giants led the Vikings for most of the game, holding a 22 -- 13 lead in the 4th quarter, but following a muffed onside kick, the Vikings booted a last - second field goal to win 23 -- 22. Following the season, George Young left the Giants. He was replaced by Ernie Accorsi, a veteran general manager who had successful stints building the Baltimore Colts and Cleveland Browns. The Giants regressed to an 8 -- 8 record in 1998. The strength of the team during the season was their defense, which featured two Pro Bowlers in Armstead and Strahan. However, the offense continued to struggle. Dave Brown had been released before the season and replaced by Kanell and Kent Graham. However, neither quarterback provided Pro Bowl - caliber play. Before the 1999 season, the Giants signed quarterback Kerry Collins. Collins had been the first - ever draft choice of the Carolina Panthers, and in his second season led them to the NFC Championship game. However, problems with alcohol abuse, conflicts with his teammates, and questions about his character led to his release from the Panthers. Although many people questioned the wisdom of Accorsi and the Giants giving Collins a $16.9 million contract, Accorsi was confident in Collins ' abilities. In 1999, Tiki Barber emerged as a solid pass - catching running back, catching 66 passes. Wide receiver Amani Toomer also had a breakout season, accumulating over 1,100 yards receiving and six touchdowns, and Ike Hilliard finished just shy of 1,000 yards receiving. The defense rebounded, ranking 11th in the league, and Armstead and Strahan again were selected to the Pro Bowl. Though the Giants stood at 7 -- 6, poised for a playoff berth, they lost their final three games to miss the playoffs. The 2000 season was considered a make - or - break year for Fassel. The conventional wisdom was that Fassel needed to have a strong year and a playoff appearance to save his job. After back - to - back losses at home against the St. Louis Rams and Detroit Lions, the Giants fell to 7 -- 4, and their playoff prospects were in question. At a press conference following the loss to Detroit, Fassel guaranteed that "this team is going to the playoffs.
'' The Giants responded, winning the next week 's game against Arizona and the rest of their regular season games to finish the season 12 -- 4 and earn a bye and home - field advantage as the NFC 's top seed. The Giants won their first playoff game against the Eagles, 20 -- 10, and then defeated the Vikings 41 -- 0 in the NFC Championship game. They advanced to play the Baltimore Ravens in Super Bowl XXXV. Though the Giants kept the game close early and went into halftime down only 10 -- 0, the Ravens dominated the second half. The Ravens ' defense harassed Kerry Collins all game long, and he had one of the worst games in Super Bowl history. Collins completed only 15 of 39 passes for 112 yards with four interceptions, and the Ravens won the game, 34 -- 7. The Giants ' only score came on a Ron Dixon 97 - yard kickoff return for a touchdown late in the 3rd quarter. On the ensuing kickoff, the Ravens ' Jermaine Lewis scored a touchdown on an 84 - yard return. The Giants were unable to build on their Super Bowl appearance. They ended the 2001 season 7 -- 9 and out of the playoffs for the third time in four seasons. Collins continued his success as the team 's quarterback, throwing for over 3,700 yards and 19 touchdowns, and Strahan broke the NFL record by recording 22.5 sacks. In 2002, Collins had one of the best seasons of his career, throwing for over 4,000 yards, and Barber rushed for 1,386 yards and caught 69 passes for 597 yards. Rookie tight end Jeremy Shockey caught 74 passes for a total of 894 yards. The team started 6 -- 6, but made the playoffs as a wild card by winning their last four regular season games. In the wild card playoffs, the Giants built a 38 -- 14 3rd - quarter lead against San Francisco. However, the 49ers rallied, scoring a field goal and three touchdowns to take a 39 -- 38 lead with a minute left in the game. Collins then drove the Giants down to the 49ers 23 - yard line with six seconds left, setting up a potential game - winning 41 - yard field goal attempt for Matt Bryant. On the final play of the game, 40 - year - old long snapper Trey Junkin -- who had just been signed for this playoff game -- snapped the ball low, and punter Matt Allen could not spot the ball properly for the attempt. Allen picked the ball up and threw an unsuccessful pass downfield to offensive lineman Rich Seubert as time expired, and the Giants lost 39 -- 38. The Giants started the 2003 season 4 -- 4, but lost their final eight games. With two games remaining in the season, Jim Fassel requested a meeting with team management, and asked, if he was to be fired, that they do so now rather than wait until the end of the season. Management complied with his request, formally firing Fassel on (or around) December 17, 2003, but allowing him to coach the team 's final two games. After a brief search, Ernie Accorsi hired former Jacksonville Jaguars coach Tom Coughlin to be the Giants ' head coach. Coughlin was considered a disciplinarian, in contrast to the departed Fassel, whose lenient style was criticized in his final years with the club. Accorsi coveted quarterback Eli Manning, brother of Peyton and son of Archie, in the 2004 NFL Draft. Manning had indicated before the draft that he did not want to play for the San Diego Chargers, who held the top pick. The Chargers drafted him nonetheless, and then traded him to the Giants for their first round picks in 2004 and 2005. The Giants released Kerry Collins, who was unhappy with a backup role, and signed veteran quarterback Kurt Warner.
The plan was for Warner to be the starter while the team groomed Manning to ultimately take over the job. After losing to the Eagles in the 2004 season opener, the Giants, with Warner at quarterback, won five of their next six games, making them 5 -- 2. After losing two close games, to the Bears and Cardinals, to drop to 5 -- 4, Coughlin announced that Manning would start the rest of the season. Manning struggled, and the Giants did not score more than 14 points in their next four games. He performed better later in the season, but the Giants finished the season 6 -- 10. Barber established a career - high in rushing with 1,518 yards. He also had 52 catches and a total of 15 touchdowns. The Giants started 4 -- 2 in 2005. Then, on October 25, patriarch Wellington Mara died after a brief illness at the age of 89. Mara had been involved with the Giants since he was nine years old, when he worked for them as a ball boy. Except for a tour of duty in the military during World War II, Mara spent his entire adult life with the team. He was beloved by many of the players, and was noted for making an effort to get to know each of them. The Giants dedicated their next game to Mara, and defeated the Redskins 36 -- 0. Just twenty days after Mara 's death, on November 15, the Giants ' other executive officer, Bob Tisch, died at the age of 79. The Giants honored Tisch by defeating the Eagles 27 -- 17 in their next game. Barber set a new team single - game rushing record with 220 yards in a victory over the Kansas City Chiefs, and finished with a team single - season record of 1,860 yards. The Giants finished 11 -- 5 and hosted the Carolina Panthers in the wild card playoffs, but lost 23 -- 0. The Giants regressed to an 8 -- 8 record in 2006. The season was characterized by inconsistent play, criticism of the coaching by the media and players, and Manning 's struggles. They won five straight following a 1 -- 2 start, giving them a two - game lead in the NFC East, but they lost six of their last seven games, and the players publicly clashed with Coughlin. One of the team 's worst losses was a 24 -- 21 defeat to Tennessee, in which the team surrendered a 21 - point 4th - quarter lead. Following a season - ending win at Washington, the Giants made the playoffs as a wild card in spite of their record, but were defeated 23 -- 20 by Philadelphia. Barber led the Giants with 1,662 yards rushing and over 2,000 yards from scrimmage, Manning threw for 3,244 yards and 24 touchdowns, and Jeremy Shockey led the team in receptions. Defensively, the team struggled against the pass (28th in the league) and in generating a consistent pass rush (tied for 23rd in the league in sacks). In 2007, the Giants made the playoffs for the third consecutive season. In a September game against the Eagles, they tied the NFL record for most sacks in a game by sacking Philadelphia quarterback Donovan McNabb 12 times, with Osi Umenyiora recording six of those sacks. They became the third NFL franchise to win 600 games when they defeated the Atlanta Falcons 31 -- 10 in October. That same month, they also played in the NFL 's first regular season game outside of North America, in London 's Wembley Stadium, where they beat Miami 13 -- 10. They ended the regular season 10 -- 6 and defeated Tampa Bay 24 -- 14 in the first round of the playoffs, earning Manning and Coughlin their first playoff victories with the Giants. The next week, the Giants won their ninth consecutive road game by beating the top - seeded Dallas Cowboys 21 -- 17.
In the NFC championship game, Lawrence Tynes kicked an overtime field goal to give them a 23 -- 20 road victory over the Green Bay Packers. In Super Bowl XLII, the Giants defeated the previously unbeaten New England Patriots 17 -- 14. The signature play of the game came on a 3rd - and - 5, with the Giants on their own 44 - yard line, down 14 -- 10, and 1:15 remaining in the 4th quarter. Manning dropped back to pass, but was surrounded by New England pass rushers. Escaping three tackles, he threw a long pass to David Tyree, who caught the ball against his own helmet, while being covered by Patriots safety Rodney Harrison. Four plays later, Manning threw the game - winning touchdown pass to Plaxico Burress with 35 seconds left. Manning won the game 's MVP award by completing 19 of 34 passes for 255 yards and two touchdowns. The Giants ' win is considered one of the biggest upsets in Super Bowl history. Michael Strahan retired after the game as the team 's all - time leader in sacks. The Giants ' 12 -- 4 record in 2008 earned them a first - round bye in the playoffs. They won 11 of their first 12 games before stumbling to lose four of their last five, including a 23 -- 11 loss to the Eagles in the NFC Divisional Playoffs. Manning threw for 3,238 yards, 21 touchdowns, and 10 interceptions, and was named to the Pro Bowl after the season. Other standouts included Brandon Jacobs and Derrick Ward, who both rushed for 1,000 yards and helped the Giants lead the NFL in rushing yards; Justin Tuck, who led the team with 12 sacks; and Antonio Pierce, who was the team 's leading tackler. The Giants featured a balanced offense, with no receiver topping 600 receiving yards. The Giants won their first five games in 2009, but lost their next four. After beating the Atlanta Falcons in overtime, they lost badly to Denver on Thanksgiving. They defeated Dallas 31 -- 24 in Week 13, then lost 45 -- 38 to Philadelphia the next week. At 8 -- 6, they still had a chance to make the playoffs, but losses to Carolina and Minnesota to finish the season left them out of the playoffs at 8 -- 8. In the spring of 2010, construction on the New Meadowlands Stadium (now MetLife Stadium) was completed, and the Giants and Jets opened it in August with their annual preseason game. In the regular season, they won their home opener against Carolina, 31 -- 18, avenging their late - season loss from the previous year. They went on the road to play Indianapolis in the second "Manning Bowl '' in Week 2. Peyton outplayed Eli (who threw for just 161 yards) in a 38 -- 14 Colts victory. Discipline became a growing problem for the Giants during the season. In the Colts game, Jacobs threw his helmet into the stands, and in the next game, offensive tackle David Diehl ripped off the helmet of Tennessee Titans cornerback Cortland Finnegan. During the 2011 preseason, the Giants lost tight end Kevin Boss, wide receiver Steve Smith, guard Rich Seubert, linebacker Keith Bulluck, wide receiver Derek Hagan, and Pro Bowl center Shaun O'Hara to free agency. However, the 2011 season also saw the emergence of second - year wide receiver Victor Cruz and second - year tight end Jake Ballard. The Giants opened the season against the Washington Redskins on the 10th anniversary of the September 11th attacks, with both New York City and Washington having been targets of the attacks. The Redskins beat the Giants 28 -- 14, but the Giants won their next three games, against the Rams, Eagles, and Cardinals.
After a loss against the Seattle Seahawks, they went on another three - game winning streak. A key victory was a 24 -- 20 upset of the New England Patriots at Gillette Stadium, sealed by a touchdown pass from Manning to Ballard with 15 seconds left, which ended the Patriots ' NFL - record home winning streak. However, the Giants lost their next three games, before regaining their position atop the NFC East with a tightly contested 37 -- 34 win over the Dallas Cowboys on December 11. After splitting their next two games against the Redskins and New York Jets, a victory over the Cowboys in the last game of the regular season clinched a postseason appearance for the Giants. In the first round of the playoffs, the Giants defeated the Atlanta Falcons 24 -- 2. After giving up an early safety in the first half, Eli Manning threw three consecutive touchdown passes. Running backs Ahmad Bradshaw and Brandon Jacobs combined for 172 yards rushing, a season - high for the Giants. With the victory, the Giants advanced to the second round against the top - seeded Green Bay Packers. The following week, the Giants defeated the Packers 37 -- 20. Manning threw for 330 yards and three touchdowns, two of them to wide receiver Hakeem Nicks. This earned the Giants a spot in the NFC Championship Game against the San Francisco 49ers. They won that game in overtime, 20 -- 17, with Lawrence Tynes kicking the game - winning field goal as he had four years earlier in the NFC Championship Game against the Green Bay Packers. The New York Giants won Super Bowl XLVI against the New England Patriots by a score of 21 -- 17. The winning touchdown drive began with a 38 - yard reception by receiver Mario Manningham. As in Super Bowl XLII, Eli Manning was the game 's MVP, defeating Tom Brady for a second time in the Super Bowl. Despite winning two Super Bowl championships in five years, the Giants missed the playoffs three years in a row, going 9 -- 7 in 2012, 7 -- 9 in 2013, and 6 -- 10 in 2014. A bright spot of the 2014 season was rookie Odell Beckham Jr., who burst onto the scene, catching 91 passes on 132 targets for 1,305 yards and 12 touchdowns, and in doing so winning Offensive Rookie of the Year. The Giants struggled in 2015, finishing 6 -- 10 again and third in the NFC East, with the Giants defense blowing leads in the final minutes in six of their ten losses. Despite their defensive struggles, quarterback Eli Manning threw for a career - high 35 touchdown passes and also set career highs in attempts and completions. After the season, head coach Tom Coughlin resigned after 12 seasons. Under new head coach Ben McAdoo, the Giants got off to a rocky 2 -- 3 start after opening 2 -- 0. They rebounded with a six - game winning streak, their first since 2010, which lasted from Week 6 to Week 13 and marked a clear improvement on their previous two seasons. The Giants clinched a 10 - win season for the first time since 2010 with their Week 15 win over the Detroit Lions. Despite losing to the Philadelphia Eagles in Week 16, the Giants clinched a playoff berth when the Tampa Bay Buccaneers lost to the New Orleans Saints on Christmas Eve, ending the Giants ' five - year playoff drought.
who plays the reporter in the last post
The Last Post (TV series) - Wikipedia The Last Post is a British television drama series first broadcast in the United Kingdom on BBC One on 1 October 2017. It is set against the backdrop of the Aden Emergency and follows a unit of the Royal Military Police, depicting the conflict and the relationships of the men and their families with one another and with the local population. Margery Bone, executive producer at Bonafide Films, proposed a story about British army life on an army base. She approached Peter Moffat, who had a military family background. Moffat suggested setting the series in the 1960s, at the end of the British Empire, with military personnel posted with their wives to a strategically important place. This gave a way to explore twentieth - century British attitudes through a tightly knit group of people. The producers needed a location similar to Aden, with coast, mountains, desert, and a colonial architectural influence, settling on Cape Town. A disused British naval base overlooking Simon 's Town bay provided a scale which is often difficult to achieve for television.
where was the movie school of rock filmed
School of Rock - Wikipedia School of Rock is a 2003 musical comedy film directed by Richard Linklater, produced by Scott Rudin, and written by Mike White. The film stars Jack Black, Joan Cusack, Mike White, Sarah Silverman, Joey Gaydos Jr. and Miranda Cosgrove. Black plays struggling rock singer and guitarist Dewey Finn, who is kicked out of his band and subsequently disguises himself as a substitute teacher at a prestigious prep school. After witnessing the musical talent of his students in their music class, Dewey forms a band of fourth - graders in an attempt to win the upcoming Battle of the Bands and pay off his rent. School of Rock was released on October 3, 2003 by Paramount Pictures, grossing $131 million worldwide on a $35 million budget. It was the highest grossing musical comedy of all time until it was overtaken in 2015 by Pitch Perfect 2. A stage musical adaptation opened on Broadway in December 2015, and a television adaptation for Nickelodeon premiered on March 12, 2016. No Vacancy, a rock band, performs at a nightclub three weeks before auditioning for the Battle of the Bands. Guitarist Dewey Finn creates on - stage antics, including a stage dive that abruptly ends the performance. The next morning, Dewey wakes in the apartment he lives in with Ned Schneebly and Ned 's girlfriend, Patty Di Marco. They inform Dewey he must make up for his share of the rent, which is four months overdue. When Dewey meets No Vacancy at a rehearsal session, he finds out that he has been replaced by another guitarist named Spider. Later, while attempting to sell some of his equipment for rent money, Dewey answers a phone call from Rosalie Mullins, the principal of the Horace Green prep school, inquiring for Ned about a short - term position as a substitute teacher. Desperate for money, Dewey impersonates Ned and is hired. On his first day at the school, Dewey adopts the name "Mr. S '' and behaves erratically, much to the class ' confusion. The next day, Dewey overhears a music class and devises a plan to form the students into a new band to audition for the Battle of the Bands. He casts Zack Mooneyham as lead guitarist, Freddy Jones as drummer, cello player Katie on bass, Lawrence on keyboard, and himself as lead vocalist and guitarist. He assigns the rest of the class various roles as backup singers, groupies and roadies, with Summer as band manager. The project takes over normal lessons, but helps the students to embrace their talents and overcome their problems. He reassures Lawrence, who is worried about not being cool enough for the band; Zack, whose overbearing father disapproves of rock; and Tomika, an overweight girl who is too self - conscious to even audition for backup singer despite having an amazing voice. During one eloquent lesson, he teaches the kids that rock and roll is the way to "Stick it to the Man '' and stand up for themselves. Band "groupies '' Michelle and Eleni, with Summer 's approval, pitch the band name "The School of Rock. '' Two weeks into his hiring, Dewey sneaks his key band members out of school to audition for a spot in the competition, while the rest of the class stay behind to maintain cover. When Freddy wanders off, Dewey retrieves him, but the group is rejected because the bill is full. After Summer tricks the staff into thinking the kids have a terminal illness, the band is allowed to audition. The next day, Mullins decides to check on his teaching progress, forcing Dewey to teach the actual material.
Mullins explains that a parents ' night will take place at the school the day before the Battle of the Bands, leaving Dewey somewhat concerned. As Dewey prepares for the parents ' night, Ned receives a paycheck from the school in the mail and soon realizes that Dewey has been impersonating him. During the parents ' meeting, the parents question what Dewey has been teaching the kids until Ned and Patty arrive with the police to confront him. When Mullins bursts in to ask what is going on, Dewey reveals his true identity, admits that he is not a certified teacher, and flees to his apartment. Dewey and Patty argue and Ned intervenes; however, Ned informs Dewey that he should move out. The next morning, the parents confront Mullins in an uproar at her office, while the kids decide not to let their hard work go to waste. When the new substitute discovers that the kids are missing, she informs Mullins, and Mullins and the parents race to the competition. A school bus comes to pick up Dewey, who leads the kids to the Battle of the Bands and decides that they will play the song written by Zack earlier in the film. Initially dismissed as a gimmick, the band wins over the entire crowd. Much to Dewey 's dismay, No Vacancy wins, but the audience chants for School of Rock and demands an encore. Some time later, an after - school program known as the School of Rock has opened, where Dewey continues to coach the students he played with while Ned teaches beginners. Screenwriter Mike White 's concept for the film was inspired by the Langley Schools Music Project. Various aspects of the plot were recognized as being similar to the 1957 Broadway hit The Music Man. Jack Black once witnessed a stage dive gone wrong involving Ian Astbury of rock band The Cult, which made its way into the film; "I went to see a reunion, in Los Angeles, of The Cult... it was just a bunch of jaded Los Angelinos out there, and they didn't catch him and he plummeted straight to the ground. Later I thought it was so hilarious. So that was put into the script. '' Many scenes from the movie were shot around the New York City area. The school portrayed in School of Rock is actually Main Hall at Wagner College in Staten Island, New York. In the commentary, the kids say that all of the hallway scenes were shot in one hallway. One of the theaters used in many of the shots was at the Union County Performing Arts Center, located in Rahway, New Jersey. The eponymous album was released on September 30, 2003. Sammy James Jr. of the band The Mooney Suzuki penned the title track with screenwriter Mike White, and the band backed up Jack Black and the child musicians on the soundtrack recording of the song. The film 's director, Richard Linklater, scouted the country for talented 13 - year - old musicians to play the rock and roll music featured on the soundtrack and in the film. The soundtrack includes "Immigrant Song '' by Led Zeppelin, a band that has rarely granted permission for use of their songs in film and television. Richard Linklater came up with the idea to shoot a video on the stage used at the end of the film, with Jack Black begging the band for permission while the crowd extras cheered and chanted behind him. The video was sent directly to the living members of Led Zeppelin, and permission was granted for the song. The video is included on the DVD. School of Rock received an approval rating of 92 % on Rotten Tomatoes based on 192 reviews, with an average rating of 7.8 / 10.
The site 's critical consensus reads, "Black 's exuberant, gleeful performance turns School of Rock into a hilarious, rocking good time. '' On Metacritic, the film has a score of 82 out of 100, based on 41 critics, indicating "universal acclaim ''. School of Rock opened at # 1 with a weekend gross of $19,622,714 from 2,614 theaters, an average of $7,507 per venue. In its second weekend, the film declined just 21 percent, earning another $15,487,832 after expanding to 2,929 theaters, averaging $5,288 per venue and bringing the 10 - day gross to $39,671,396. In its third weekend, it dropped only 28 percent, making another $11,006,233 after expanding once again to 2,951 theaters, averaging $3,730 per venue and bringing the 17 - day gross to $54,898,025. It spent a total of six weeks among the top 10 films and eventually grossed $81,261,177 in the United States and Canada and another $50,015,772 in international territories, for a total gross of $131,282,949 worldwide, almost four times its $35 million budget. This made School of Rock the highest - grossing musical comedy of all time, until it was overtaken in 2015 by Pitch Perfect 2. The film was nominated for several awards; Black received a Golden Globe nomination for Best Actor -- Comedy or Musical (which he lost to Bill Murray for Lost in Translation) and won an MTV Movie Award for Best Comedic Performance. In 2008, Jack Black said that a sequel was being considered. It was later reported that director Richard Linklater and producer Scott Rudin would return, with Mike White returning as screenwriter for a sequel titled School of Rock 2: America Rocks, which would pick up with Finn leading a group of summer school students on a cross-country field trip delving into the history of rock ' n ' roll. In 2012, Black stated that he believed the sequel was unlikely. "I tried really hard to get all the pieces together, '' he said. "I wouldn't want to do it without the original writer and director, and we never all got together and saw eye - to - eye on what the script would be. It was not meant to be, unfortunately, '' but added, "never say never ''. On April 5, 2013, Andrew Lloyd Webber announced that he had bought the rights to adapt School of Rock into a stage musical. On December 18, 2014, the musical was officially confirmed, and it was announced that the show would receive its world premiere on Broadway in autumn 2015 at the Winter Garden Theatre. The musical has a book by Downton Abbey creator Julian Fellowes, and is directed by Laurence Connor, with choreography by JoAnn M. Hunter, set and costume design by Anna Louizos and lighting by Natasha Katz. The musical features an original score composed by Lloyd Webber, with lyrics by Glenn Slater and sound design by Mick Potter, in addition to music from the original film. School of Rock became Lloyd Webber 's first show to open on Broadway before London since Jesus Christ Superstar in 1971. On August 29, 2013, a 10 - year anniversary screening of the film was held in Austin, Texas at the Paramount Theatre. Those in attendance included director Richard Linklater, Jack Black, Mike White, Miranda Cosgrove and the rest of the young cast members except for Cole Hawkins (who played Leonard).
The event, hosted by the Austin Film Society and Cirrus Logic, included a red carpet, a full cast and crew Q&A after the screening, where the now - grown child stars discussed their current pursuits in life, and a VIP after - party performance by the School of Rock band during which "School of Rock '' and "It 's a Long Way to the Top (If You Wanna Rock ' n ' Roll) '' were played. On August 4, 2014, Nickelodeon announced that they were working with Paramount Television on a television show adaptation of the film. Production started in the fall and the series premiered in 2016. It stars Breanna Yde, Ricardo Hurtado, Jade Pettyjohn, Lance Lim, Aidan Miner, and Tony Cavalero.
is this the last season of game of thrones
Game of Thrones (season 8) - Wikipedia The eighth and final season of the fantasy drama television series Game of Thrones was announced by HBO in July 2016. Unlike the first six seasons, which each had ten episodes, and the seventh, which had seven, the eighth season will have only six episodes. Like the previous season, it will largely consist of original content not currently found in George R.R. Martin 's A Song of Ice and Fire series, and will instead adapt material Martin has revealed to the showrunners about the upcoming novels in the series, The Winds of Winter and A Dream of Spring. The season will be adapted for television by David Benioff and D.B. Weiss. Filming officially began on October 23, 2017. The season is scheduled to premiere in 2019. Series creators and executive producers David Benioff and D.B. Weiss will serve as showrunners for the eighth season. The directors for the eighth season were announced in September 2017. Miguel Sapochnik, who previously directed "The Gift '' and "Hardhome '' on season 5, as well as "Battle of the Bastards '' and "The Winds of Winter '' on season 6, will return as a director. He will divide direction of the first five episodes with David Nutter, who had directed two episodes in each of seasons two, three and five. The final episode of the show will be directed by Benioff and Weiss, who have previously directed one episode each. At the show 's South by Southwest panel on March 12, 2017, Benioff and Weiss announced that the writers for the season would be Dave Hill (episode 1) and Bryan Cogman (episode 2). The showrunners will then divide the screenplays for the remaining four episodes amongst themselves. Writing for the eighth season started with a 140 - page outline. Benioff said that the divvying - up process and deciding who should write what became more difficult because "this would be the last time that we would be doing this. '' In an interview with Entertainment Weekly, HBO programming president Casey Bloys stated that instead of the series finale being a feature film, the final season would be "six one - hour movies '' on television. He continued, "The show has proven that TV is every bit as impressive and in many cases more so, than film. What they 're doing is monumental. '' Co-creators David Benioff and D.B. Weiss have said that the seventh and eighth seasons would likely consist of fewer episodes, stating that after season six, they were "down to our final 13 episodes after this season. We 're heading into the final lap. '' Benioff and Weiss stated that they were unable to produce 10 episodes in the show 's usual 12 to 14 month time frame, as Weiss said, "It 's crossing out of a television schedule into more of a mid-range movie schedule. '' HBO confirmed in July 2016 that the seventh season would consist of seven episodes and would premiere later than usual, in mid-2017, because of the later filming schedule. Benioff and Weiss later confirmed that the eighth season will consist of six episodes and is expected to premiere later than usual for the same reason. Benioff and Weiss spoke about the end of the show, saying, "From the beginning we 've wanted to tell a 70 - hour movie. It will turn out to be a 73 - hour movie, but it 's stayed relatively the same of having the beginning, middle and now we 're coming to the end. It would have been really tough if we lost any core cast members along the way, I 'm very happy we 've kept everyone and we get to finish it the way we want to.
'' The season is set to air in 2019. Ramin Djawadi will return as the show 's composer for the eighth season.
ships in the battle of the coral sea
Battle of the Coral Sea - Wikipedia The Battle of the Coral Sea, fought from 4 to 8 May 1942, was a major naval battle between the Imperial Japanese Navy (IJN) and naval and air forces from the United States and Australia, taking place in the Pacific Theatre of World War II. The battle is historically significant as the first action in which aircraft carriers engaged each other, as well as the first in which neither side 's ships sighted or fired directly upon the other. In an attempt to strengthen their defensive position in the South Pacific, the Japanese decided to invade and occupy Port Moresby (in New Guinea) and Tulagi (in the southeastern Solomon Islands). The plan to accomplish this was called Operation MO, and involved several major units of Japan 's Combined Fleet. These included two fleet carriers and a light carrier to provide air cover for the invasion forces. It was under the overall command of Japanese Admiral Shigeyoshi Inoue. The U.S. learned of the Japanese plan through signals intelligence, and sent two United States Navy carrier task forces and a joint Australian - U.S. cruiser force to oppose the offensive. These were under the overall command of U.S. Admiral Frank J. Fletcher. On 3 -- 4 May, Japanese forces successfully invaded and occupied Tulagi, although several of their supporting warships were sunk or damaged in surprise attacks by aircraft from the U.S. fleet carrier Yorktown. Now aware of the presence of U.S. carriers in the area, the Japanese fleet carriers advanced towards the Coral Sea with the intention of locating and destroying the Allied naval forces. Beginning on 7 May, the carrier forces from the two sides engaged in airstrikes over two consecutive days. On the first day, the U.S. sank the Japanese light carrier Shōhō; meanwhile, the Japanese sank a U.S. destroyer and heavily damaged a fleet oiler (which was later scuttled). The next day, the Japanese fleet carrier Shōkaku was heavily damaged, the U.S. fleet carrier Lexington critically damaged (and later scuttled), and Yorktown damaged. With both sides having suffered heavy losses in aircraft and carriers damaged or sunk, the two forces disengaged and retired from the battle area. Because of the loss of carrier air cover, Inoue recalled the Port Moresby invasion fleet, intending to try again later. Although a tactical victory for the Japanese in terms of ships sunk, the battle would prove to be a strategic victory for the Allies for several reasons. The battle marked the first time since the start of the war that a major Japanese advance had been checked by the Allies. More importantly, the Japanese fleet carriers Shōkaku and Zuikaku -- the former damaged and the latter with a depleted aircraft complement -- were unable to participate in the Battle of Midway the following month, while Yorktown did participate, ensuring a rough parity in aircraft between the two adversaries and contributing significantly to the U.S. victory in that battle. The severe losses in carriers at Midway prevented the Japanese from reattempting to invade Port Moresby from the ocean and helped prompt their ill - fated land offensive over the Kokoda trail.
Two months later, the Allies took advantage of Japan 's resulting strategic vulnerability in the South Pacific and launched the Guadalcanal Campaign; this, along with the New Guinea Campaign, eventually broke Japanese defenses in the South Pacific and was a significant contributing factor to Japan 's ultimate surrender in World War II. On 7 December 1941, using aircraft carriers, Japan attacked the U.S. Pacific Fleet at Pearl Harbor, Hawaii. The attack destroyed or crippled most of the Pacific Fleet 's battleships and brought the United States into the war. In launching this war, Japanese leaders sought to neutralize the U.S. fleet, seize territory rich in natural resources, and obtain strategic military bases to defend their far - flung empire. At the same time that they were attacking Pearl Harbor, the Japanese attacked Malaya, causing the United Kingdom, Australia, and New Zealand to join the United States in the war against Japan. In the words of the Imperial Japanese Navy (IJN) Combined Fleet 's "Secret Order Number One '', dated 1 November 1941, the goals of the initial Japanese campaigns in the impending war were to "(eject) British and American strength from the Netherlands Indies and the Philippines, (and) to establish a policy of autonomous self - sufficiency and economic independence. '' To support these goals, during the first few months of 1942, besides Malaya, Japanese forces attacked and successfully took control of the Philippines, Singapore, the Dutch East Indies, Wake Island, New Britain, the Gilbert Islands and Guam, inflicting heavy losses on opposing Allied land, naval and air forces. Japan planned to use these conquered territories to establish a perimeter defense for its empire from which it expected to employ attritional tactics to defeat or exhaust any Allied counterattacks. Shortly after the war began, Japan 's Naval General Staff recommended an invasion of Northern Australia to prevent Australia from being used as a base to threaten Japan 's perimeter defences in the South Pacific. The Imperial Japanese Army (IJA) rejected the recommendation, stating that it did not have the forces or shipping capacity available to conduct such an operation. At the same time, Vice Admiral Shigeyoshi Inoue, commander of the IJN 's Fourth Fleet (also called the South Seas Force) which consisted of most of the naval units in the South Pacific area, advocated the occupation of Tulagi in the southeastern Solomon Islands and Port Moresby in New Guinea, which would put Northern Australia within range of Japanese land - based aircraft. Inoue believed the capture and control of these locations would provide greater security and defensive depth for the major Japanese base at Rabaul on New Britain. The navy 's general staff and the IJA accepted Inoue 's proposal and promoted further operations, using these locations as supporting bases, to seize New Caledonia, Fiji, and Samoa and thereby cut the supply and communication lines between Australia and the United States. In April 1942, the army and navy developed a plan that was titled Operation MO. The plan called for Port Moresby to be invaded from the sea and secured by 10 May. The plan also included the seizure of Tulagi on 2 -- 3 May, where the navy would establish a seaplane base for potential air operations against Allied territories and forces in the South Pacific and to provide a base for reconnaissance aircraft. 
Upon the completion of MO, the navy planned to initiate Operation RY, using ships released from MO, to seize Nauru and Ocean Island for their phosphate deposits on 15 May. Further operations against Fiji, Samoa and New Caledonia (Operation FS) were to be planned once MO and RY were completed. Because of a damaging air attack by Allied land - and carrier - based aircraft on Japanese naval forces invading the Lae - Salamaua area in New Guinea in March, Inoue requested Japan 's Combined Fleet send carriers to provide air cover for MO. Inoue was especially worried about Allied bombers stationed at air bases in Townsville and Cooktown, Australia, beyond the range of his own bombers, based at Rabaul and Lae. Admiral Isoroku Yamamoto, commander of the Combined Fleet, was concurrently planning an operation for June that he hoped would lure the U.S. Navy 's carriers, none of which had been damaged in the Pearl Harbor attack, into a decisive showdown in the central Pacific near Midway Atoll. In the meantime Yamamoto detached some of his large warships, including two fleet carriers, a light carrier, a cruiser division, and two destroyer divisions, to support MO, and placed Inoue in charge of the naval portion of the operation. Unknown to the Japanese, the U.S. Navy, led by the Communication Security Section of the Office of Naval Communications, had for several years enjoyed some success with penetrating Japanese communication ciphers and codes. By March 1942, the U.S. was able to decipher up to 15 % of the IJN 's Ro or Naval Codebook D code (called "JN - 25B '' by the U.S.), which was used by the IJN for approximately half of its communications. By the end of April, the U.S. was reading up to 85 % of the signals broadcast in the Ro code. In March 1942, the U.S. first noticed mention of the MO operation in intercepted messages. On 5 April, the U.S. intercepted an IJN message directing a carrier and other large warships to proceed to Inoue 's area of operations. On 13 April, the British deciphered an IJN message informing Inoue that the Fifth Carrier Division, consisting of the fleet carriers Shōkaku and Zuikaku, was en route to his command from Formosa via the main IJN base at Truk. The British passed the message to the U.S., along with their conclusion that Port Moresby was the likely target of MO. Admiral Chester W. Nimitz, the new commander of U.S. forces in the Central Pacific, and his staff discussed the deciphered messages and agreed that the Japanese were likely initiating a major operation in the Southwest Pacific in early May with Port Moresby as the probable target. The Allies regarded Port Moresby as a key base for a planned counteroffensive, under General Douglas MacArthur, against Japanese forces in the South West Pacific area. Nimitz 's staff also concluded that the Japanese operation might include carrier raids on Allied bases in Samoa and at Suva. Nimitz, after consultation with Admiral Ernest King, Commander in Chief of the United States Fleet, decided to contest the Japanese operation by sending all four of the Pacific Fleet 's available aircraft carriers to the Coral Sea. By 27 April, further signals intelligence confirmed most of the details and targets of the MO and RY plans. On 29 April, Nimitz issued orders that sent his four carriers and their supporting warships towards the Coral Sea. 
Task Force 17 (TF 17), commanded by Rear Admiral Fletcher and consisting of the carrier Yorktown, escorted by three cruisers and four destroyers and supported by a replenishment group of two oilers and two destroyers, was already in the South Pacific, having departed Tongatabu on 27 April en route to the Coral Sea. TF 11, commanded by Rear Admiral Aubrey Fitch and consisting of the carrier Lexington with two cruisers and five destroyers, was between Fiji and New Caledonia. TF 16, commanded by Vice Admiral William F. Halsey and including the carriers Enterprise and Hornet, had just returned to Pearl Harbor from the Doolittle Raid in the central Pacific. TF16 immediately departed but would not reach the South Pacific in time to participate in the battle. Nimitz placed Fletcher in command of Allied naval forces in the South Pacific area until Halsey arrived with TF 16. Although the Coral Sea area was under MacArthur 's command, Fletcher and Halsey were directed to continue to report to Nimitz while in the Coral Sea area, not to MacArthur. Based on intercepted radio traffic from TF 16 as it returned to Pearl Harbor, the Japanese assumed that all but one of the U.S. Navy 's carriers were in the central Pacific. The Japanese did not know the location of the remaining carrier, but did not expect a U.S. carrier response to MO until the operation was well under way. During late April, the Japanese submarines Ro - 33 and Ro - 34 reconnoitered the area where landings were planned. The submarines investigated Rossel Island and the Deboyne Group anchorage in the Louisiade Archipelago, Jomard Channel, and the route to Port Moresby from the east. They did not sight any Allied ships in the area and returned to Rabaul on 23 and 24 April respectively. The Japanese Port Moresby Invasion Force, commanded by Rear Admiral Kōsō Abe, included 11 transport ships carrying about 5,000 soldiers from the IJA 's South Seas Detachment plus approximately 500 troops from the 3rd Kure Special Naval Landing Force (SNLF). Escorting the transports was the Port Moresby Attack Force with one light cruiser and six destroyers under the command of Rear Admiral Sadamichi Kajioka. Abe 's ships departed Rabaul for the 840 nmi (970 mi; 1,560 km) trip to Port Moresby on 4 May and were joined by Kajioka 's force the next day. The ships, proceeding at 8 kn (9.2 mph; 15 km / h), planned to transit the Jomard Channel in the Louisiades to pass around the southern tip of New Guinea to arrive at Port Moresby by 10 May. The Allied garrison at Port Moresby numbered around 5,333 men, but only half of these were infantry and all were badly equipped and undertrained. Leading the invasion of Tulagi was the Tulagi Invasion Force, commanded by Rear Admiral Kiyohide Shima, consisting of two minelayers, two destroyers, six minesweepers, two subchasers and a transport ship carrying about 400 troops from the 3rd Kure SNLF. Supporting the Tulagi force was the Covering Group with the light carrier Shōhō, four heavy cruisers, and one destroyer, commanded by Rear Admiral Aritomo Gotō. A separate Cover Force (sometimes referred to as the Support Group), commanded by Rear Admiral Kuninori Marumo and consisting of two light cruisers, the seaplane tender Kamikawa Maru and three gunboats, joined the Covering Group in providing distant protection for the Tulagi invasion. Once Tulagi was secured on 3 or 4 May, the Covering Group and Cover Force were to reposition to help screen the Port Moresby invasion. 
Inoue directed the MO operation from the cruiser Kashima, aboard which he arrived at Rabaul from Truk on 4 May. Gotō 's force left Truk on 28 April, cut through the Solomons between Bougainville and Choiseul and took station near New Georgia Island. Marumo 's support group sortied from New Ireland on 29 April, headed for Thousand Ships Bay, Santa Isabel Island, to establish a seaplane base on 2 May to support the Tulagi assault. Shima 's invasion force departed Rabaul on 30 April. The Carrier Strike Force, with the carriers Zuikaku and Shōkaku, two heavy cruisers, and six destroyers, sortied from Truk on 1 May. The strike force was commanded by Vice Admiral Takeo Takagi (flag on cruiser Myōkō), with Rear Admiral Chūichi Hara, on Zuikaku, in tactical command of the carrier air forces. The Carrier Strike Force was to proceed down the eastern side of the Solomon Islands and enter the Coral Sea south of Guadalcanal. Once in the Coral Sea, the carriers were to provide air cover for the invasion forces, eliminate Allied air power at Port Moresby, and intercept and destroy any Allied naval forces which entered the Coral Sea in response. En route to the Coral Sea, Takagi 's carriers were to deliver nine Zero fighter aircraft to Rabaul. Bad weather during two attempts to make the delivery on 2 -- 3 May compelled the aircraft to return to the carriers, stationed 240 nmi (280 mi; 440 km) from Rabaul, and one of the Zeros was forced to ditch in the sea. In order to try to keep to the MO timetable, Takagi was forced to abandon the delivery mission after the second attempt and direct his force towards the Solomon Islands to refuel. To give advance warning of the approach of any Allied naval forces, the Japanese sent submarines I - 22, I - 24, I - 28 and I - 29 to form a scouting line in the ocean about 450 nmi (520 mi; 830 km) southwest of Guadalcanal. Fletcher 's forces had entered the Coral Sea area before the submarines took station, and the Japanese were therefore unaware of their presence. Another submarine, I - 21, which was sent to scout around Nouméa, was attacked by Yorktown aircraft on 2 May. The submarine took no damage and apparently did not realize that it had been attacked by carrier aircraft. Ro - 33 and Ro - 34 were also deployed in an attempt to blockade Port Moresby, arriving off the town on 5 May. Neither submarine engaged any ships during the battle. On the morning of 1 May, TF 17 and TF 11 united about 300 nmi (350 mi; 560 km) northwest of New Caledonia (16 ° 16 ′ S 162 ° 20 ′ E). Fletcher immediately detached TF 11 to refuel from the oiler Tippecanoe, while TF 17 refueled from Neosho. TF 17 completed refueling the next day, but TF 11 reported that they would not be finished fueling until 4 May. Fletcher elected to take TF 17 northwest towards the Louisiades and ordered TF 11 to meet TF 44, which was en route from Sydney and Nouméa, on 4 May once refueling was complete. TF 44 was a joint Australia -- U.S. warship force under MacArthur 's command, led by Australian Rear Admiral John Crace and made up of the cruisers HMAS Australia, Hobart, and USS Chicago, along with three destroyers. Once it completed refueling TF 11, Tippecanoe departed the Coral Sea to deliver its remaining fuel to Allied ships at Efate. Early on 3 May, Shima 's force arrived off Tulagi and began disembarking the naval troops to occupy the island.
Tulagi was undefended: the small garrison of Australian commandos and a Royal Australian Air Force reconnaissance unit evacuated just before Shima 's arrival. The Japanese forces immediately began construction of a seaplane and communications base. Aircraft from Shōhō covered the landings until early afternoon, when Gotō 's force turned towards Bougainville to refuel in preparation to support the landings at Port Moresby. At 17: 00 on 3 May, Fletcher was notified that the Japanese Tulagi invasion force had been sighted the day before, approaching the southern Solomons. Unknown to Fletcher, TF 11 completed refueling that morning ahead of schedule and was only 60 nmi (69 mi; 110 km) east of TF 17, but was unable to communicate its status because of Fletcher 's orders to maintain radio silence. TF 17 changed course and proceeded at 27 kn (31 mph; 50 km / h) towards Guadalcanal to launch airstrikes against the Japanese forces at Tulagi the next morning. On 4 May, from a position 100 nmi (120 mi; 190 km) south of Guadalcanal (11 ° 10 ′ S 158 ° 49 ′ E), a total of 60 aircraft from TF 17 launched three consecutive strikes against Shima 's forces off Tulagi. Yorktown 's aircraft surprised Shima 's ships and sank the destroyer Kikuzuki (09 ° 07 ′ S 160 ° 12 ′ E) and three of the minesweepers, damaged four other ships, and destroyed four seaplanes which were supporting the landings. The U.S. lost one torpedo bomber and two fighters in the strikes, but all of the aircrew were eventually rescued. After recovering its aircraft late in the evening of 4 May, TF 17 retired towards the south. In spite of the damage suffered in the carrier strikes, the Japanese continued construction of the seaplane base and began flying reconnaissance missions from Tulagi by 6 May. Takagi 's Carrier Striking Force was refueling 350 nmi (400 mi; 650 km) north of Tulagi when it received word of Fletcher 's strike on 4 May. Takagi terminated refueling, headed southeast, and sent scout planes to search east of the Solomons, believing that the U.S. carriers were in that area. Since no Allied ships were in that area, the search planes found nothing. At 08: 16 on 5 May, TF 17 rendezvoused with TF 11 and TF 44 at a predetermined point 320 nmi (370 mi; 590 km) south of Guadalcanal (15 ° S 160 ° E). At about the same time, four Grumman F4F Wildcat fighters from Yorktown intercepted a Kawanishi H6K reconnaissance flying boat from the Yokohama Air Group of the 25th Air Flotilla based at the Shortland Islands and shot it down 11 nmi (13 mi; 20 km) from TF 11. The aircraft failed to send a report before it crashed, but when it didn't return to base, the Japanese correctly assumed that it had been shot down by carrier aircraft. A message from Pearl Harbor notified Fletcher that radio intelligence deduced the Japanese planned to land their troops at Port Moresby on 10 May and their fleet carriers would likely be operating close to the invasion convoy. Armed with this information, Fletcher directed TF 17 to refuel from Neosho. After the refueling was completed on 6 May, he planned to take his forces north towards the Louisiades and do battle on 7 May.
In the meantime, Takagi 's carrier force steamed down the east side of the Solomons throughout the day on 5 May, turned west to pass south of San Cristobal (Makira), and entered the Coral Sea after transiting between Guadalcanal and Rennell Island in the early morning hours of 6 May. Takagi commenced refueling his ships 180 nmi (210 mi; 330 km) west of Tulagi in preparation for the carrier battle he expected would take place the next day. On 6 May, Fletcher absorbed TF 11 and TF 44 into TF 17. Believing the Japanese carriers were still well to the north near Bougainville, Fletcher continued to refuel. Reconnaissance patrols conducted from the U.S. carriers throughout the day failed to locate any of the Japanese naval forces, because they were located just beyond scouting range. At 10: 00, a Kawanishi reconnaissance flying boat from Tulagi sighted TF 17 and notified its headquarters. Takagi received the report at 10: 50. At that time, Takagi 's force was about 300 nmi (350 mi; 560 km) north of Fletcher, near the maximum range for his carrier aircraft. Takagi, whose ships were still refueling, was not yet ready to engage in battle. He concluded, based on the sighting report, that TF 17 was heading south and increasing the range. Furthermore, Fletcher 's ships were under a large, low - hanging overcast which Takagi and Hara felt would make it difficult for their aircraft to find the U.S. carriers. Takagi detached his two carriers with two destroyers under Hara 's command to head towards TF 17 at 20 kn (23 mph; 37 km / h) in order to be in position to attack at first light the next day while the rest of his ships completed refueling. U.S. B - 17 bombers based in Australia and staging through Port Moresby attacked the approaching Port Moresby invasion forces, including Gotō 's warships, several times during the day on 6 May without success. MacArthur 's headquarters radioed Fletcher with reports of the attacks and the locations of the Japanese invasion forces. MacArthur 's fliers ' reports of seeing a carrier (Shōhō) about 425 nmi (489 mi; 787 km) northwest of TF 17 further convinced Fletcher fleet carriers were accompanying the invasion force. At 18: 00, TF 17 completed fueling and Fletcher detached Neosho with a destroyer, Sims, to take station further south at a prearranged rendezvous (16 ° S 158 ° E). TF 17 then turned to head northwest towards Rossel Island in the Louisiades. Unbeknownst to the two adversaries, their carriers were only 70 nmi (130 km) away from each other by 20: 00 that night. At 20: 00 (13 ° 20 ′ S 157 ° 40 ′ E), Hara reversed course to meet Takagi, who had completed refueling and was now heading in Hara 's direction. Late on 6 May or early on 7 May, Kamikawa Maru set up a seaplane base in the Deboyne Islands in order to help provide air support for the invasion forces as they approached Port Moresby. The rest of Marumo 's Cover Force then took station near the D'Entrecasteaux Islands to help screen Abe 's oncoming convoy. At 06: 25 on 7 May, TF 17 was 115 nmi (132 mi; 213 km) south of Rossel Island (13 ° 20 ′ S 154 ° 21 ′ E). At this time, Fletcher sent Crace 's cruiser force, now designated Task Group 17.3 (TG 17.3), to block the Jomard Passage. Fletcher understood that Crace would be operating without air cover since TF 17 's carriers would be busy trying to locate and attack the Japanese carriers.
Detaching Crace reduced the anti-aircraft defenses for Fletcher 's carriers. Nevertheless, Fletcher decided the risk was necessary to ensure the Japanese invasion forces could not slip through to Port Moresby while he engaged the carriers. Believing Takagi 's carrier force was somewhere north of him, in the vicinity of the Louisiades, beginning at 06: 19, Fletcher directed Yorktown to send 10 Douglas SBD Dauntless dive bombers as scouts to search that area. Hara in turn believed Fletcher was south of him and advised Takagi to send the aircraft to search that area. Takagi, approximately 300 nmi (350 mi; 560 km) east of Fletcher (13 ° 12 ′ S 158 ° 05 ′ E), launched 12 Nakajima B5Ns at 06: 00 to scout for TF 17. Around the same time, Gotō 's cruisers Kinugasa and Furutaka launched four Kawanishi E7K2 Type 94 floatplanes to search southeast of the Louisiades. Augmenting their search were several floatplanes from Deboyne, four Kawanishi H6Ks from Tulagi, and three Mitsubishi G4M bombers from Rabaul. Each side readied the rest of its carrier attack aircraft to launch immediately once the enemy was located. At 07: 22 one of Takagi 's carrier scouts, from Shōkaku, reported U.S. ships bearing 182 ° (just west of due south), 163 nmi (188 mi; 302 km) from Takagi. At 07: 45, the scout confirmed that it had located "one carrier, one cruiser, and three destroyers ''. Another Shōkaku scout aircraft quickly confirmed the sighting. The Shōkaku aircraft actually sighted and misidentified the oiler Neosho and destroyer Sims, which had earlier been detailed away from the fleet to a southern rendezvous point. Believing that he had located the U.S. carriers, Hara, with Takagi 's concurrence, immediately launched all of his available aircraft. A total of 78 aircraft -- 18 Zero fighters, 36 Aichi D3A dive bombers, and 24 torpedo aircraft -- began launching from Shōkaku and Zuikaku at 08: 00 and were on their way by 08: 15 towards the reported sighting. At 08: 20, one of the Furutaka aircraft found Fletcher 's carriers and immediately reported it to Inoue 's headquarters at Rabaul, which passed the report on to Takagi. The sighting was confirmed by a Kinugasa floatplane at 08: 30. Takagi and Hara, confused by the conflicting sighting reports they were receiving, decided to continue with the strike on the ships to their south, but turned their carriers towards the northwest to close the distance with Furutaka 's reported contact. Takagi and Hara considered that the conflicting reports might mean that the U.S. carrier forces were operating in two separate groups. At 08: 15, a Yorktown SBD piloted by John L. Nielsen sighted Gotō 's force screening the invasion convoy. Nielsen, making an error in his coded message, reported the sighting as "two carriers and four heavy cruisers '' at 10 ° 3 ′ S 152 ° 27 ′ E, 225 nmi (259 mi; 417 km) northwest of TF 17. Fletcher concluded that the Japanese main carrier force was located and ordered the launch of all available carrier aircraft to attack. By 10: 13, the U.S. strike of 93 aircraft -- 18 Grumman F4F Wildcats, 53 Douglas SBD Dauntless dive bombers, and 22 Douglas TBD Devastator torpedo bombers -- was on its way. At 10: 19, Nielsen landed and discovered his coding error. Although Gotō 's force included the light carrier Shōhō, what Nielsen had actually seen were two cruisers and four destroyers, not the main carrier fleet.
At 10: 12, Fletcher received a report of an aircraft carrier, ten transports, and 16 warships 30 nmi (35 mi; 56 km) south of Nielsen 's sighting, at 10 ° 35 ′ S 152 ° 36 ′ E. The B - 17s actually saw the same thing as Nielsen: Shōhō, Gotō 's cruisers, plus the Port Moresby Invasion Force. Believing that the B - 17s ' sighting was the main Japanese carrier force (which was in fact well to the east), Fletcher directed the airborne strike force towards this target. At 09: 15, Takagi 's strike force reached its target area, sighted Neosho and Sims, and searched in vain for the U.S. carriers. Finally, at 10: 51 Shōkaku scout aircrews realized they were mistaken in their identification of the oiler and destroyer as aircraft carriers. Takagi now realized the U.S. carriers were between him and the invasion convoy, placing the invasion forces in extreme danger. Takagi ordered his aircraft to immediately attack Neosho and Sims and then return to their carriers as quickly as possible. At 11: 15, the torpedo bombers and fighters abandoned the mission and headed back towards the carriers with their ordnance while the 36 dive bombers attacked the two U.S. ships. Four dive bombers attacked Sims and the rest dived on Neosho. The destroyer was hit by three bombs, broke in half, and sank immediately, killing all but 14 of her 192 - man crew. Neosho was hit by seven bombs. One of the dive bombers, hit by anti-aircraft fire, crashed into the oiler. Heavily damaged and without power, Neosho was left drifting and slowly sinking (16 ° 09 ′ S 158 ° 03 ′ E). Before losing power, Neosho was able to notify Fletcher by radio that she was under attack and in trouble, but garbled any further details as to just who or what was attacking her and gave wrong coordinates (16 ° 25 ′ S 157 ° 31 ′ E) for her position. The U.S. strike aircraft sighted Shōhō a short distance northeast of Misima Island at 10: 40 and deployed to attack. The Japanese carrier was protected by six Zeros and two Mitsubishi A5M fighters flying combat air patrol (CAP), as the rest of the carrier 's aircraft were being prepared below decks for a strike against the U.S. carriers. Gotō 's cruisers surrounded the carrier in a diamond formation, 3,000 -- 5,000 yd (2,700 -- 4,600 m) off each of Shōhō 's corners. Attacking first, Lexington 's air group, led by Commander William B. Ault, hit Shōhō with two 1,000 lb (450 kg) bombs and five torpedoes, causing severe damage. At 11: 00, Yorktown 's air group attacked the burning and now almost stationary carrier, scoring with up to 11 more 1,000 lb (450 kg) bombs and at least two torpedoes. Torn apart, Shōhō sank at 11: 35 (10 ° 29 ′ S 152 ° 55 ′ E). Fearing more air attacks, Gotō withdrew his warships to the north, but sent the destroyer Sazanami back at 14: 00 to rescue survivors. Only 203 of the carrier 's 834 - man crew were recovered. Three U.S. aircraft were lost in the attack: two SBDs from Lexington and one from Yorktown. All 18 aircraft of Shōhō 's complement were lost, but three of the CAP fighter pilots were able to ditch at Deboyne and survived. At 12: 10, using a prearranged message to signal TF 17 on the success of the mission, Lexington SBD pilot and squadron commander Robert E. Dixon radioed "Scratch one flat top! Signed Bob. '' The U.S. aircraft returned and landed on their carriers by 13: 38.
By 14: 20, the aircraft were rearmed and ready to launch against the Port Moresby Invasion Force or Gotō 's cruisers. Fletcher was concerned that the locations of the rest of the Japanese fleet carriers were still unknown. He was informed that Allied intelligence sources believed that up to four Japanese carriers might be supporting the MO operation. Fletcher concluded that by the time his scout aircraft found the remaining carriers it would be too late in the day to mount a strike. Thus, Fletcher decided to hold off on another strike this day and remain concealed under the thick overcast with fighters ready in defense. Fletcher turned TF 17 southwest. Apprised of the loss of Shōhō, Inoue ordered the invasion convoy to temporarily withdraw to the north and ordered Takagi, at this time located 225 nmi (259 mi; 417 km) east of TF 17, to destroy the U.S. carrier forces. As the invasion convoy reversed course, it was bombed by eight U.S. Army B - 17s, but was not damaged. Gotō and Kajioka were told to assemble their ships south of Rossel Island for a night surface battle if the U.S. ships came within range. At 12: 40, a Deboyne - based seaplane sighted and reported Crace 's detached cruiser and destroyer force on a bearing of 175 °, 78 nmi (90 mi; 144 km) from Deboyne. At 13: 15, an aircraft from Rabaul sighted Crace 's force but submitted an erroneous report, stating the force contained two carriers and was located, bearing 205 °, 115 nmi (213 km) from Deboyne. Based on these reports, Takagi, who was still awaiting the return of all of his aircraft from attacking Neosho, turned his carriers due west at 13: 30 and advised Inoue at 15: 00 that the U.S. carriers were at least 430 nmi (490 mi; 800 km) west of his location and that he would therefore be unable to attack them that day. Inoue 's staff directed two groups of attack aircraft from Rabaul, already airborne since that morning, towards Crace 's reported position. The first group included 12 torpedo - armed G4M bombers and the second group comprised 19 Mitsubishi G3M land attack aircraft armed with bombs. Both groups found and attacked Crace 's ships at 14: 30 and claimed to have sunk a "California - type '' battleship and damaged another battleship and cruiser. In reality, Crace 's ships were undamaged and shot down four G4Ms. A short time later, three U.S. Army B - 17s mistakenly bombed Crace, but caused no damage. Crace at 15: 26 radioed Fletcher he could not complete his mission without air support. Crace retired southward to a position about 220 nmi (250 mi; 410 km) southeast of Port Moresby to increase the range from Japanese carrier - or land - based aircraft while remaining close enough to intercept any Japanese naval forces advancing beyond the Louisiades through either the Jomard Passage or the China Strait. Crace 's ships were low on fuel, and as Fletcher was maintaining radio silence (and had not informed him in advance), Crace had no idea of Fletcher 's location, status, or intentions. Shortly after 15: 00, Zuikaku monitored a message from a Deboyne - based reconnaissance aircraft reporting (incorrectly) Crace 's force altered course to 120 ° true (southeast). Takagi 's staff assumed the aircraft was shadowing Fletcher 's carriers and determined if the Allied ships held that course, they would be within striking range shortly before nightfall. Takagi and Hara were determined to attack immediately with a select group of aircraft, minus fighter escort, even though it meant the strike would return after dark. 
To try to confirm the location of the U.S. carriers, at 15: 15 Hara sent a flight of eight torpedo bombers as scouts to sweep 200 nmi (230 mi; 370 km) westward. About that same time, the dive bombers returned from their attack on Neosho and landed. Six of the weary dive bomber pilots were told they would be immediately departing on another mission. Choosing his most experienced crews, at 16: 15 Hara launched 12 dive bombers and 15 torpedo planes with orders to fly on a heading of 277 °, out to 280 nmi (320 mi; 520 km). The eight scout aircraft reached the end of their 200 nmi (230 mi; 370 km) search leg and turned back without seeing Fletcher 's ships. At 17: 47, TF 17 -- operating under thick overcast 200 nmi (230 mi; 370 km) west of Takagi -- detected the Japanese strike on radar heading in their direction, turned southeast into the wind, and vectored 11 CAP Wildcats, including one piloted by James H. Flatley, to intercept. Taking the Japanese formation by surprise, the Wildcats shot down seven torpedo bombers and one dive bomber, and heavily damaged another torpedo bomber (which later crashed), at a cost of three Wildcats lost. Having taken heavy losses in the attack, which also scattered their formations, the Japanese strike leaders canceled the mission after conferring by radio. The Japanese aircraft all jettisoned their ordnance and reversed course to return to their carriers. The sun set at 18: 30. Several of the Japanese dive bombers encountered the U.S. carriers in the darkness, around 19: 00, and briefly confused as to their identity, circled in preparation for landing before anti-aircraft fire from TF 17 's destroyers drove them away. By 20: 00, TF 17 and Takagi were about 100 nmi (120 mi; 190 km) apart. Takagi turned on his warships ' searchlights to help guide the 18 surviving aircraft back and all were recovered by 22: 00. In the meantime, at 15: 18 and 17: 18 Neosho was able to radio TF 17 that she was drifting northwest in a sinking condition. Neosho 's 17: 18 report gave wrong coordinates, which hampered subsequent U.S. rescue efforts to locate the oiler. More significantly, the news informed Fletcher his only nearby available fuel supply was gone. As nightfall ended aircraft operations for the day, Fletcher ordered TF 17 to head west and prepared to launch a 360 ° search at first light. Crace also turned west to stay within striking range of the Louisiades. Inoue directed Takagi to make sure he destroyed the U.S. carriers the next day, and postponed the Port Moresby landings to 12 May. Takagi elected to take his carriers 120 nmi (140 mi; 220 km) north during the night so he could concentrate his morning search to the west and south and ensure that his carriers could provide better protection for the invasion convoy. Gotō and Kajioka were unable to position and coordinate their ships in time to attempt a night attack on the Allied warships. Both sides expected to find each other early the next day, and spent the night preparing their strike aircraft for the anticipated battle as their exhausted aircrews attempted to get a few hours ' sleep. In 1972, U.S. Vice Admiral H.S. Duckworth, after reading Japanese records of the battle, commented, "Without a doubt, May 7, 1942, vicinity of Coral Sea, was the most confused battle area in world history. '' Hara later told Yamamoto 's chief of staff, Admiral Matome Ugaki, he was so frustrated with the "poor luck '' the Japanese experienced on 7 May that he felt like quitting the navy.
At 06: 15 on 8 May, from a position 100 nmi (120 mi; 190 km) east of Rossel Island (10 ° 25 ′ S 154 ° 5 ′ E), Hara launched seven torpedo bombers to search the area bearing 140 -- 230 °, out to 250 nmi (290 mi; 460 km) from the Japanese carriers. Assisting in the search were three Kawanishi H6Ks from Tulagi and four G4M bombers from Rabaul. At 07: 00, the carrier striking force turned to the southwest and was joined by two of Gotō 's cruisers, Kinugasa and Furutaka, for additional screening support. The invasion convoy, Gotō, and Kajioka steered towards a rendezvous point 40 nmi (46 mi; 74 km) east of Woodlark Island to await the outcome of the carrier battle. During the night, the warm frontal zone with low clouds which had helped hide the U.S. carriers on 7 May moved north and east and now covered the Japanese carriers, limiting visibility to between 2 and 15 nmi (2.3 and 17.3 mi; 3.7 and 27.8 km). At 06: 35, TF 17 -- operating under Fitch 's tactical control and positioned 180 nmi (210 mi; 330 km) southeast of the Louisiades -- launched 18 SBDs to conduct a 360 ° search out to 200 nmi (230 mi; 370 km). The skies over the U.S. carriers were mostly clear, with 17 nmi (20 mi; 31 km) visibility. At 08: 20, a Lexington SBD piloted by Joseph G. Smith spotted the Japanese carriers through a hole in the clouds and notified TF 17. Two minutes later, a Shōkaku search plane commanded by Kenzō Kanno sighted TF 17 and notified Hara. The two forces were about 210 nmi (240 mi; 390 km) apart. Both sides raced to launch their strike aircraft. At 09: 15, the Japanese carriers launched a combined strike of 18 fighters, 33 dive bombers, and 18 torpedo planes, commanded by Lieutenant Commander Kakuichi Takahashi. The U.S. carriers each launched a separate strike. Yorktown 's group consisted of six fighters, 24 dive bombers, and nine torpedo planes and was on its way by 09: 15. Lexington 's group of nine fighters, 15 dive bombers, and 12 torpedo planes was off at 09: 25. Both the U.S. and Japanese carrier warship forces turned to head directly for each other 's location at high speed in order to shorten the distance their aircraft would have to fly on their return legs. Yorktown 's dive bombers, led by William O. Burch, reached the Japanese carriers at 10: 32, and paused to allow the slower torpedo squadron to arrive so that they could conduct a simultaneous attack. At this time, Shōkaku and Zuikaku were about 10,000 yd (9,100 m) apart, with Zuikaku hidden under a rain squall of low - hanging clouds. The two carriers were protected by 16 CAP Zero fighters. The Yorktown dive bombers commenced their attacks at 10: 57 on Shōkaku and hit the radically maneuvering carrier with two 1,000 lb (450 kg) bombs, tearing open the forecastle and causing heavy damage to the carrier 's flight and hangar decks. The Yorktown torpedo planes missed with all of their ordnance. Two U.S. dive bombers and two CAP Zeros were shot down during the attack. Lexington 's aircraft arrived and attacked at 11: 30. Two dive bombers attacked Shōkaku, hitting the carrier with one 1,000 lb (450 kg) bomb, causing further damage. Two other dive bombers dove on Zuikaku, missing with their bombs. The rest of Lexington 's dive bombers were unable to find the Japanese carriers in the heavy clouds. Lexington 's TBDs missed Shōkaku with all 11 of their torpedoes. The 13 CAP Zeros on patrol at this time shot down three Wildcats.
With her flight deck heavily damaged and 223 of her crew killed or wounded, Shōkaku was unable to conduct further aircraft operations. Her captain, Takatsugu Jōjima, requested permission from Takagi and Hara to withdraw from the battle, to which Takagi agreed. At 12: 10, Shōkaku, accompanied by two destroyers, retired to the northeast. At 10: 55, Lexington 's CXAM - 1 radar detected the inbound Japanese aircraft at a range of 68 nmi (78 mi; 126 km) and vectored nine Wildcats to intercept. Expecting the Japanese torpedo bombers to be at a much lower altitude than they actually were, six of the Wildcats were stationed too low, and thus missed the Japanese aircraft as they passed by overhead. Because of the heavy losses in aircraft suffered the night before, the Japanese could not execute a full torpedo attack on both carriers. Lieutenant Commander Shigekazu Shimazaki, commanding the Japanese torpedo planes, sent 14 to attack Lexington and four to attack Yorktown. A Wildcat shot down one, and eight patrolling Yorktown SBDs destroyed three more as the Japanese torpedo planes descended to take attack position. Four SBDs were shot down by Zeros escorting the torpedo planes. The Japanese attack began at 11: 13 as the carriers, stationed 3,000 yd (2,700 m) apart, and their escorts opened fire with anti-aircraft guns. The four torpedo planes which attacked Yorktown all missed. The remaining torpedo planes successfully employed a pincer attack on Lexington, which had a much larger turning radius than Yorktown, and, at 11: 20, hit her with two Type 91 torpedoes. The first torpedo buckled the port aviation gasoline stowage tanks. Undetected, gasoline vapors spread into surrounding compartments. The second torpedo ruptured the port water main, reducing water pressure to the three forward firerooms and forcing the associated boilers to be shut down. The ship could still make 24 kn (28 mph; 44 km / h) with her remaining boilers. Four of the Japanese torpedo planes were shot down by anti-aircraft fire. The 33 Japanese dive bombers circled to attack from upwind, and thus did not begin their dives from 14,000 ft (4,300 m) until three to four minutes after the torpedo planes began their attacks. The 19 Shōkaku dive bombers, under Takahashi, lined up on Lexington while the remaining 14, directed by Tamotsu Ema, targeted Yorktown. Escorting Zeros shielded Takahashi 's aircraft from four Lexington CAP Wildcats which attempted to intervene, but two Wildcats circling above Yorktown were able to disrupt Ema 's formation. Takahashi 's bombers damaged Lexington with two bomb hits and several near misses, causing fires which were contained by 12: 33. At 11: 27, Yorktown was hit in the centre of her flight deck by a single 250 kg (550 lb), semi-armour - piercing bomb which penetrated four decks before exploding, causing severe structural damage to an aviation storage room and killing or seriously wounding 66 men. Up to 12 near misses damaged Yorktown 's hull below the waterline. Two of the dive bombers were shot down by a CAP Wildcat during the attack. As the Japanese aircraft completed their attacks and began to withdraw, believing that they inflicted fatal damage to both carriers, they ran a gauntlet of CAP Wildcats and SBDs. In the ensuing aerial duels, three SBDs and three Wildcats for the U.S., and three torpedo bombers, one dive bomber, and one Zero for the Japanese were downed. By 12: 00, the U.S. and Japanese strike groups were on their way back to their respective carriers.
During their return, aircraft from the two adversaries passed each other in the air, resulting in more air - to - air engagements. Kanno 's and Takahashi 's aircraft were shot down, killing both of them. The strike forces, with many damaged aircraft, reached and landed on their respective carriers between 12: 50 and 14: 30. In spite of the damage, Yorktown and Lexington were both able to recover aircraft from their returning air groups. During recovery operations, for various reasons the U.S. lost an additional five SBDs, two TBDs, and a Wildcat, and the Japanese lost two Zeros, five dive bombers, and one torpedo plane. Forty - six of the original 69 aircraft from the Japanese strike force returned from the mission and landed on Zuikaku. Of these, three more Zeros, four dive bombers and five torpedo planes were judged damaged beyond repair and were immediately jettisoned into the sea. As TF 17 recovered its aircraft, Fletcher assessed the situation. The returning aviators reported they had heavily damaged one carrier, but that another had escaped damage. Fletcher noted that both his carriers were hurt and that his air groups had suffered high fighter losses. Fuel was also a concern due to the loss of Neosho. At 14: 22, Fitch notified Fletcher that he had reports of two undamaged Japanese carriers and that this was supported by radio intercepts. Believing that he faced overwhelming Japanese carrier superiority, Fletcher elected to withdraw TF 17 from the battle. Fletcher radioed MacArthur the approximate position of the Japanese carriers and suggested that he attack with his land - based bombers. Around 14: 30, Hara informed Takagi that only 24 Zeros, eight dive bombers, and four torpedo planes from the carriers were currently operational. Takagi was worried about his ships ' fuel levels; his cruisers were at 50 % and some of his destroyers were as low as 20 %. At 15: 00, Takagi notified Inoue that his fliers had sunk two U.S. carriers -- Yorktown and a "Saratoga - class '' -- but that heavy losses in aircraft meant he could not continue to provide air cover for the invasion. Inoue, whose reconnaissance aircraft had sighted Crace 's ships earlier that day, recalled the invasion convoy to Rabaul, postponed MO to 3 July, and ordered his forces to assemble northeast of the Solomons to begin the RY operation. Zuikaku and her escorts turned towards Rabaul while Shōkaku headed for Japan. Aboard Lexington, damage control parties put out the fires and restored her to operational condition, but at 12: 47, sparks from unattended electric motors ignited gasoline fumes near the ship 's central control station. The resulting explosion killed 25 men and started a large fire. Around 14: 42, another large explosion occurred, starting a second severe fire. A third explosion occurred at 15: 25 and at 15: 38 the ship 's crew reported the fires as uncontrollable. Lexington 's crew began abandoning ship at 17: 07. After the carrier 's survivors were rescued, including Admiral Fitch and the ship 's captain, Frederick C. Sherman, at 19: 15 the destroyer Phelps fired five torpedoes into the burning ship, which sank in 2,400 fathoms at 19: 52 (15 ° 15 ′ S 155 ° 35 ′ E). Two hundred and sixteen of the carrier 's 2,951 - man crew went down with the ship, along with 36 aircraft. Phelps and the other assisting warships left immediately to rejoin Yorktown and her escorts, which departed at 16: 01, and TF 17 retired to the southwest.
Later that evening, MacArthur informed Fletcher that eight of his B - 17s had attacked the invasion convoy and that it was retiring to the northwest. That evening, Crace detached Hobart, which was critically low on fuel, and the destroyer Walke, which was having engine trouble, to proceed to Townsville. Crace overheard radio reports saying the enemy invasion convoy had turned back, but, unaware that Fletcher had withdrawn, he remained on patrol with the rest of TG 17.3 in the Coral Sea in case the Japanese invasion force resumed its advance towards Port Moresby. On 9 May, TF 17 altered course to the east and proceeded out of the Coral Sea via a route south of New Caledonia. Nimitz ordered Fletcher to return Yorktown to Pearl Harbor as soon as possible after refueling at Tongatabu. During the day, U.S. Army bombers attacked Deboyne and Kamikawa Maru, inflicting unknown damage. In the meantime, having heard nothing from Fletcher, Crace deduced that TF 17 had departed the area. At 01: 00 on 10 May, hearing no further reports of Japanese ships advancing towards Port Moresby, Crace turned towards Australia and arrived at Cid Harbor, 130 nmi (150 mi; 240 km) south of Townsville, on 11 May. At 22: 00 on 8 May, Yamamoto ordered Inoue to turn his forces around, destroy the remaining Allied warships, and complete the invasion of Port Moresby. Inoue did not cancel the recall of the invasion convoy, but ordered Takagi and Gotō to pursue the remaining Allied warship forces in the Coral Sea. Critically low on fuel, Takagi 's warships spent most of 9 May refueling from the fleet oiler Tōhō Maru. Late in the evening of 9 May, Takagi and Gotō headed southeast, then southwest into the Coral Sea. Seaplanes from Deboyne assisted Takagi in searching for TF 17 on the morning of 10 May. Fletcher and Crace were already well on their way out of the area. At 13: 00 on 10 May, Takagi concluded that the enemy was gone and decided to turn back towards Rabaul. Yamamoto concurred with Takagi 's decision and ordered Zuikaku to return to Japan to replenish her air groups. At the same time, Kamikawa Maru packed up and departed Deboyne. At noon on 11 May, a U.S. Navy PBY on patrol from Nouméa sighted the drifting Neosho (15 ° 35 ′ S 155 ° 36 ′ E). The U.S. destroyer Henley responded and rescued 109 Neosho and 14 Sims survivors later that day, then scuttled the tanker with gunfire. On 10 May, Operation RY commenced. After the operation 's flagship, the minelayer Okinoshima, was sunk by the U.S. submarine S - 42 on 12 May (05 ° 06 ′ S 153 ° 48 ′ E), the landings were postponed until 17 May. In the meantime, Halsey 's TF 16 reached the South Pacific near Efate and, on 13 May, headed north to contest the Japanese approach to Nauru and Ocean Island. On 14 May, Nimitz, having obtained intelligence concerning the Combined Fleet 's upcoming operation against Midway, ordered Halsey to make sure that Japanese scout aircraft sighted his ships the next day, after which he was to return to Pearl Harbor immediately. At 10: 15 on 15 May, a Kawanishi reconnaissance aircraft from Tulagi sighted TF 16, 445 nmi (512 mi; 824 km) east of the Solomons. Halsey 's feint worked. Fearing a carrier air attack on his exposed invasion forces, Inoue immediately canceled RY and ordered his ships back to Rabaul and Truk. On 19 May, TF 16 -- which had returned to the Efate area to refuel -- turned towards Pearl Harbor and arrived there on 26 May.
Yorktown reached Pearl Harbor the following day. Shōkaku reached Kure, Japan, on 17 May, almost capsizing en route during a storm due to her battle damage. Zuikaku arrived at Kure on 21 May, having made a brief stop at Truk on 15 May. Acting on signals intelligence, the U.S. placed eight submarines along the projected route of the carriers ' return paths to Japan, but the submarines were not able to make any attacks. Japan 's Naval General Staff estimated that it would take two to three months to repair Shōkaku and replenish the carriers ' air groups. Thus, both carriers would be unable to participate in Yamamoto 's upcoming Midway operation. The two carriers rejoined the Combined Fleet on 14 July and were key participants in subsequent carrier battles against U.S. forces. The five I - class submarines supporting the MO operation were retasked to support an attack on Sydney Harbour three weeks later as part of a campaign to disrupt Allied supply lines. En route to Truk, the submarine I - 28 was torpedoed on 17 May by the U.S. submarine Tautog and sunk with all hands. The battle was the first naval engagement in history in which the participating ships never sighted or fired directly at each other. Instead, manned aircraft acted as the offensive artillery for the ships involved. Thus, the respective commanders were participating in a new type of warfare, carrier - versus - carrier, with which neither had any experience. In H.P. Willmott 's words, the commanders "had to contend with uncertain and poor communications in situations in which the area of battle had grown far beyond that prescribed by past experience but in which speeds had increased to an even greater extent, thereby compressing decision - making time. '' Because of the greater speed with which decisions were required, the Japanese were at a disadvantage, as Inoue was too far away at Rabaul to effectively direct his naval forces in real time, in contrast to Fletcher, who was on - scene with his carriers. The Japanese admirals involved were often slow to communicate important information to one another. Research has examined how commanders ' choices affected the battle 's outcome. Two studies used mathematical models to estimate the impact of various alternatives. For example, suppose the U.S. carriers had chosen to sail separately (though still nearby), rather than together. The models indicated the Americans would have suffered slightly less total damage, with one ship sunk but the other unharmed. However, the battle 's overall outcome would have been similar. By contrast, suppose one side had located its opponent early enough to launch a first strike, so that only the opponent 's survivors could have struck back. The modeling suggested striking first would have provided a decisive advantage, even more beneficial than having an extra carrier (a toy sketch of this kind of model appears below). The experienced Japanese carrier aircrews performed better than those of the U.S., achieving greater results with an equivalent number of aircraft. The Japanese attack on the U.S. carriers on 8 May was better coordinated than the U.S. attack on the Japanese carriers. The Japanese suffered much higher losses among their carrier aircrews, with ninety aircrew killed in the battle compared with thirty - five for the U.S. side. Japan 's cadre of highly skilled carrier aircrews with which it began the war was, in effect, irreplaceable because of an institutionalised limitation in its training programs and the absence of a pool of experienced reserves or advanced training programs for new airmen.
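The published studies ' actual equations are not reproduced here; purely to illustrate the kind of reasoning involved, the following is a minimal, hypothetical salvo - style sketch. The force sizes and the 0.9 "strike fraction '' are assumptions chosen for illustration, not historical data.

```python
# Illustrative toy salvo-exchange model -- NOT the published studies' equations.
# Each side's strike destroys a fraction of the target's remaining combat power.

def exchange(a, b, a_strike=0.9, b_strike=0.9, first_striker=None):
    """Return surviving combat power (a, b) after one strike exchange.

    a, b          -- initial combat power of each side (e.g. carrier-equivalents)
    a_strike etc. -- fraction of the target destroyed per unit of attacking power
                     (0.9 is an assumed, deliberately high carrier-war lethality)
    first_striker -- None for a simultaneous exchange; 'a' or 'b' if that side
                     launches first, so only the target's survivors can reply
    """
    if first_striker == 'a':
        b_after = max(0.0, b - a_strike * a)        # b absorbs the full strike...
        a_after = max(0.0, a - b_strike * b_after)  # ...and only b's survivors reply
    elif first_striker == 'b':
        a_after = max(0.0, a - b_strike * b)
        b_after = max(0.0, b - a_strike * a_after)
    else:  # simultaneous: both strikes are computed from the initial strengths
        a_after = max(0.0, a - b_strike * b)
        b_after = max(0.0, b - a_strike * a)
    return a_after, b_after

print(exchange(2, 2))                     # simultaneous 2 v 2 -> (0.2, 0.2): mutual ruin
print(exchange(2, 2, first_striker='a'))  # first strike       -> (1.82, 0.2): attacker nearly intact
print(exchange(3, 2))                     # extra carrier      -> (1.2, 0.0): winner still loses 60% of force
```

With strikes this lethal, landing the first blow preserves most of the attacker 's force -- more than the extra carrier does in the third case -- which is the intuition behind the studies ' conclusion that attacking effectively first outweighed raw numbers.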
Coral Sea started a trend which resulted in the irreparable attrition of Japan 's veteran carrier aircrews by the end of October 1942. U.S. forces did not perform as well as expected, but they learned from their mistakes in the battle and made improvements to their carrier tactics and equipment, including fighter tactics, strike coordination, torpedo bombers and defensive strategies, such as anti-aircraft artillery, which contributed to better results in later battles. Radar gave the U.S. a limited advantage in this battle, but its value to the U.S. Navy increased over time as the technology improved and the Allies learned how to employ it more effectively. Following the loss of Lexington, improved methods for containing aviation fuel and better damage control procedures were implemented by the U.S. Coordination between the Allied land - based air forces and the U.S. Navy was poor during this battle, but this too would improve over time. Japanese and U.S. carriers faced off against each other again in the battles of Midway, the Eastern Solomons, and the Santa Cruz Islands in 1942, and the Philippine Sea in 1944. Each of these battles was strategically significant, to varying degrees, in deciding the course and ultimate outcome of the Pacific War. Both sides publicly claimed victory after the battle. In terms of ships lost, the Japanese won a tactical victory by sinking a U.S. fleet carrier, an oiler, and a destroyer -- 41,826 long tons (42,497 t) -- versus a light carrier, a destroyer, and several smaller warships -- 19,000 long tons (19,000 t) -- sunk by the U.S. side. Lexington represented, at that time, 25 % of U.S. carrier strength in the Pacific. The Japanese public was informed of the victory with overstatement of the U.S. damage and understatement of their own. In strategic terms, the Allies won because the seaborne invasion of Port Moresby was averted, lessening the threat to the supply lines between the U.S. and Australia. Although the withdrawal of Yorktown from the Coral Sea conceded the field, the Japanese were forced to abandon the operation that had initiated the Battle of the Coral Sea in the first place. The battle marked the first time that a Japanese invasion force was turned back without achieving its objective, which greatly lifted the morale of the Allies after a series of defeats by the Japanese during the initial six months of the Pacific Theatre. Port Moresby was vital to Allied strategy and its garrison could well have been overwhelmed by the experienced Japanese invasion troops. The U.S. Navy also exaggerated the damage it inflicted, which was to cause the press to treat its reports of Midway with more caution. The results of the battle had a substantial effect on the strategic planning of both sides. Without a hold in New Guinea, the subsequent Allied advance, arduous though it was, would have been more difficult. For the Japanese, who focused on the tactical results, the battle was seen as merely a temporary setback. The results of the battle confirmed the low opinion held by the Japanese of U.S. fighting capability and supported their overconfident belief that future carrier operations against the U.S. were assured of success. One of the most significant effects of the Coral Sea battle was the loss of Shōkaku and Zuikaku to Yamamoto for his planned battle with the U.S. carriers at Midway (Shōhō was to have been employed at Midway in a tactical role supporting the Japanese invasion ground forces).
The Japanese believed that they had sunk two carriers in the Coral Sea, but this still left at least two more U.S. Navy carriers, Enterprise and Hornet, which could help defend Midway. The aircraft complement of the U.S. carriers was larger than that of their Japanese counterparts, which, when combined with the land - based aircraft at Midway, meant that the Combined Fleet no longer enjoyed a significant numerical aircraft superiority over the U.S. Navy for the impending battle. In fact, the U.S. would have three carriers to oppose Yamamoto at Midway, because, despite the damage the ship suffered during the Coral Sea battle, Yorktown was able to return to Hawaii. Although estimates were that the damage would take two weeks to repair, Yorktown put to sea only 48 hours after entering drydock at Pearl Harbor, which meant that she was available for the next confrontation with the Japanese. At Midway, Yorktown 's aircraft played crucial roles in sinking two Japanese fleet carriers. Yorktown also absorbed both Japanese aerial counterattacks at Midway which otherwise would have been directed at Enterprise and Hornet. In contrast to the strenuous efforts by the U.S. to employ the maximum forces available for Midway, the Japanese apparently did not even consider trying to include Zuikaku in the operation. No effort appears to have been made to combine the surviving Shōkaku aircrews with Zuikaku 's air groups or to quickly provide Zuikaku with replacement aircraft so she could participate with the rest of the Combined Fleet at Midway. Shōkaku herself was unable to conduct further aircraft operations, with her flight deck heavily damaged, and she required almost three months of repair in Japan. Historians H.P. Willmott, Jonathan Parshall, and Anthony Tully believe Yamamoto made a significant strategic error in his decision to support MO with strategic assets. Since Yamamoto had decided the decisive battle with the U.S. was to take place at Midway, he should not have diverted any of his important assets, especially fleet carriers, to a secondary operation like MO. Yamamoto 's decision meant Japanese naval forces were weakened just enough at both the Coral Sea and Midway battles to allow the Allies to defeat them in detail. Willmott adds that if either operation was important enough to commit fleet carriers, then all of the Japanese carriers should have been committed to each in order to ensure success. By committing crucial assets to MO, Yamamoto made the more important Midway operation dependent on the secondary operation 's success. Moreover, Yamamoto apparently missed the other implications of the Coral Sea battle: the unexpected appearance of U.S. carriers in exactly the right place and time to effectively contest the Japanese, and U.S. Navy carrier aircrews demonstrating sufficient skill and determination to do significant damage to the Japanese carrier forces. These would be repeated at Midway, and as a result, Japan lost four fleet carriers, the core of her naval offensive forces, and thereby lost the strategic initiative in the Pacific War. Parshall and Tully point out that, due to U.S. industrial strength, once Japan lost its numerical superiority in carrier forces as a result of Midway, Japan could never regain it. Parshall and Tully add, "The Battle of the Coral Sea had provided the first hints that the Japanese high - water mark had been reached, but it was the Battle of Midway that put up the sign for all to see. '' The Australians and U.S.
forces in Australia were initially disappointed with the outcome of the Battle of the Coral Sea, fearing that the MO operation was the precursor to an invasion of the Australian mainland and that the setback to Japan was only temporary. In a meeting held in late May, the Australian Advisory War Council described the battle 's result as "rather disappointing '' given that the Allies had advance notice of Japanese intentions. General MacArthur provided Australian Prime Minister John Curtin with his assessment of the battle, stating that "all the elements that have produced disaster in the Western Pacific since the beginning of the war '' were still present as Japanese forces could strike anywhere if supported by major elements of the IJN. Because of the severe losses in carriers at Midway, the Japanese were unable to support another attempt to invade Port Moresby from the sea, forcing Japan to try to take Port Moresby by land. Japan began its land offensive towards Port Moresby along the Kokoda Track on 21 July from Buna and Gona. By then, the Allies had reinforced New Guinea with additional troops (primarily Australian), starting with the Australian 14th Brigade, which embarked at Townsville on 15 May. The added forces slowed, then eventually halted the Japanese advance towards Port Moresby in September 1942, and defeated an attempt by the Japanese to overpower an Allied base at Milne Bay. In the meantime, the Allies learned in July that the Japanese had begun building an airfield on Guadalcanal. Operating from this base, the Japanese would threaten the shipping supply routes to Australia. To prevent this from occurring, the U.S. chose Tulagi and nearby Guadalcanal as the target of their first offensive. The failure of the Japanese to take Port Moresby, and their defeat at Midway, left their base at Tulagi and Guadalcanal exposed, without effective protection from other Japanese bases. Tulagi and Guadalcanal were four hours ' flying time from Rabaul, the nearest large Japanese base. Three months later, on 7 August 1942, 11,000 United States Marines landed on Guadalcanal, and 3,000 U.S. Marines landed on Tulagi and nearby islands. The Japanese troops on Tulagi and nearby islands were outnumbered and killed almost to the last man in the Battle of Tulagi and Gavutu -- Tanambogo, and the U.S. Marines on Guadalcanal captured an airfield under construction by the Japanese. Thus began the Guadalcanal and Solomon Islands campaigns, which resulted in a series of attritional, combined - arms battles between Allied and Japanese forces over the next year and, in tandem with the New Guinea campaign, eventually neutralized Japanese defenses in the South Pacific, inflicted irreparable losses on the Japanese military -- especially its navy -- and contributed significantly to the Allies ' eventual victory over Japan. The delay in the advance of Japanese forces also allowed the Marine Corps to land on Funafuti on 2 October 1942, with a Naval Construction Battalion (Seabees) building airfields on three of the atolls of Tuvalu, from which USAAF B - 24 Liberator bombers of the Seventh Air Force operated. The atolls of Tuvalu acted as a staging post during the preparation for the Battle of Tarawa and the Battle of Makin, which commenced on 20 November 1943 as the implementation of Operation Galvanic.
when did ant and dec star in byker grove
Ant & Dec - Wikipedia Anthony McPartlin OBE (born 18 November 1975) and Declan Donnelly OBE (born 25 September 1975), known collectively as Ant & Dec, are an English entertainment duo from Newcastle upon Tyne who work as television presenters, producers, comedy performers and actors, and are former musicians. The duo met as actors on the children 's television show Byker Grove, during which, and in their subsequent pop career, they were known as PJ & Duncan -- the names of the characters they played on the show. As Ant & Dec, they have had a successful career as television presenters, presenting shows such as SMTV Live, CD: UK, Friends Like These, Pop Idol, Ant & Dec 's Saturday Night Takeaway, I 'm a Celebrity... Get Me Out of Here!, PokerFace, Push the Button, Britain 's Got Talent, Red or Black?, and Text Santa. In 2006, they returned to acting with the film Alien Autopsy. The duo presented the annual Brit Awards in 2001, 2015 and 2016. Ant is the taller of the two at 5 ft 8 in (1.73 m); Dec is two inches shorter at 5 ft 6 in (1.68 m). To assist with identification, they follow the 180 - degree rule -- Ant appears on the left -- with the exception of some early publicity shots. They have their own production company, Mitre Television, through which they produce their shows. In a 2004 poll for the BBC, Ant & Dec were named the eighteenth most influential people in British culture. McPartlin and Donnelly met while working on the BBC children 's drama Byker Grove in 1989. After a shaky start, they soon became best friends. They have achieved such popularity as a duo that they are hardly seen apart on screen. They are reportedly each insured against the other 's death, for an amount of around £ 1 million. Although Ant had gained some television experience with a brief stint on the children 's television series Why Do n't You?, which was broadcast on the BBC, Dec was the first of the two to acquire his place on the BBC children 's drama Byker Grove. He joined in 1989, playing Duncan. A year later, Ant joined the cast to play PJ. Their friendship began when their storylines collided, creating a bond off - screen as well as on. Dec also played a stable boy in the adaptation of the novel The Cinder Path in his teenage years. They also went on to co-star in the 2006 sci - fi comedy movie Alien Autopsy. After leaving Byker Grove, the duo turned their hand to pop music. Their first single was a song they performed while part of Byker Grove, entitled Tonight I 'm Free. The single had some success, and the duo went on to record two albums under their character names of PJ & Duncan. Their most famous hit during this period was the BRIT Award nominated Let 's Get Ready To Rhumble, for which the video and moves were choreographed by Mark Short, who had previously worked with Tina Turner and Peter Andre. For their third album, the duo reinvented themselves under their real names of Ant & Dec. The album featured their signature single "Shout ''. During their music career, the pair released sixteen singles and three studio albums; however, none of their releases managed to reach number one, with their highest UK chart position being number three. The duo did, however, reach the top ten in Germany and Japan, and even had a number - one single in Germany, with their cover of the Everly Brothers ' All I Have to Do Is Dream. They also found success in other European countries. The duo had a short - lived revival in the music industry, releasing a song for the 2002 FIFA World Cup, entitled We 're on the Ball. The track peaked at No.
3, being beaten by Will Young and Gareth Gates. On 23 March 2013, Ant and Dec performed Let 's Get Ready To Rhumble as part of their show Ant and Dec 's Saturday Night Takeaway, which powered the song to number one on the UK iTunes chart, and on Sunday 31 March 2013 the track was revealed as the official UK number 1 single on The Official Chart on BBC Radio 1. All money made from the re-release was donated to charity. Ant & Dec got their first presenting job in 1994, while they were still releasing music under the alias of PJ & Duncan. They co-presented a Saturday - morning children 's show entitled Gimme 5, which was broadcast on CITV. The show only lasted two series before being dropped from the airwaves. In 1995, the duo were once again offered a job on CBBC, this time presenting their own series, entitled The Ant & Dec Show. The series was broadcast from 1995 to 1997, and in 1996, Ant & Dec won two BAFTA Awards, one for ' Best Children 's Show ' and one for ' Best Sketch Comedy Show '. In 1997, a VHS release, entitled The Ant & Dec Show -- Confidential, was made available in shops, and featured an hour of the best bits from three years of the programme, as well as specially recorded sketches and music videos. In 1998, the duo switched to Channel 4, presenting an early - evening children 's show entitled Ant & Dec Unzipped. This show also won a BAFTA, but was dropped from the airwaves after just one series. ITV signed the duo in August 1998, and within weeks they were assigned to present ITV1 's Saturday morning programmes SMTV Live and CD: UK, alongside old friend Cat Deeley. The duo presented the shows alongside Deeley for three years, during which SMTV Live became the most popular ITV Saturday morning show. The programme 's success was down to the mix of games such as Eat My Goal, Wonkey Donkey and Challenge Ant, sketches such as "Dec Says '' and the "Secret of My Success '', and the chemistry between Ant, Dec and Cat. Two SMTV VHS releases, compiling the best bits from both shows, were released in 2000 and 2001 respectively. Ant & Dec also starred in the children 's TV series Engie Benjy during their time on SMTV. Ant & Dec made their permanent departure from children 's television in 2001 after trying out formats like Friends Like These for BBC One in 2000 and Slap Bang with Ant & Dec for ITV in 2001 (essentially SMTV in the evening, even playing Challenge Ant against adults). They have since said that the main reason they left SMTV was that the Pop Idol live finals were due to begin on Saturday nights on ITV in December 2001. Ant & Dec 's first primetime presenting job came in the form of BBC Saturday - night game show Friends Like These, which was first broadcast in 1999 and made them known as presenters -- a marked shift from their acting days. The duo presented four series of the programme between 1999 and 2001. In 2001, the duo 's contract with ITV was renewed for a further three years, following their appearances on SMTV Live and CD: UK, and they received their first primetime presenting job on the station, fronting the brand new Saturday night reality series Pop Idol; owing to this success, they had to leave SMTV behind. Pop Idol was broadcast for only two series before it was replaced in 2004 by The X Factor, for which former Smash Hits editor Kate Thornton was assigned presenting duties. In 2005, as part of ITV 's 50th birthday celebrations, they were back on television fronting Ant & Dec 's Gameshow Marathon, a celebration of some of ITV 's most enduring gameshows from the past 50 years.
They hosted The Price Is Right, Family Fortunes, Play Your Cards Right, Bullseye, Take Your Pick!, The Golden Shot and Sale of the Century. In 2002, Ant & Dec created and presented their own show, entitled Ant & Dec 's Saturday Night Takeaway. The show has so far run for fifteen series, with the latest, including the 100th episode, airing in early 2018. The first series was not an overall success, but with the introduction of "Ant & Dec Undercover '', "What 's Next? '', "Ant v Dec '' and "Little Ant and Dec '', the show became a hit. During the fourth series, Dec broke his arm and thumb and suffered a concussion whilst completing a challenge for the ' Ant vs. Dec ' segment of the show. The incident involved learning how to ride a motorbike and jumping through a ring of fire. During the challenge, Dec failed to pull hard enough on the throttle of the bike, causing it to topple over and sending him flying through the air. The accident caused the pair to miss the Comic Relief charity telethon of 2005. In 2006, the first episode of series five saw the duo abseil down the side of the 22 - storey high London Studios, where the show was filmed. Two DVDs, a best - bits book and a board game of the series were released during 2004. The show was rested after 2009, as Ant & Dec said they were running out of ideas and it had become stale after many of the popular features such as "Little Ant & Dec '' and "Undercover '' were dropped. Saturday Night Takeaway returned in 2013 and was a massive success; Ant & Dec resurrected previous hit features such as "Undercover '', "Little Ant and Dec '' (albeit with a new Little Ant and Dec), Win the Ads and Ant v Dec, with new host Ashley Roberts. They also brought in new features such as the Supercomputer, Vegas or Bust, the ' End of the Show Show ', where Ant and Dec perform with an act such as Riverdance or an orchestra, and "I 'm a Celebrity, Get Out of My Ear! '', where they have an earpiece in a celebrity 's ear and tell them what to do while they are filmed by secret cameras. The series was such a success that ITV recommissioned it for 2014 even before the 2013 series ended. In August 2002, Ant & Dec fronted I 'm a Celebrity... Get Me Out of Here!. They drew their highest viewing figures to date in February 2004: nearly 15 million tuned in to watch the third series. In May 2006, they were assigned to present coverage of the charity football match Soccer Aid. They were then invited back to present coverage of the second match in September 2008, but were replaced by Dermot O'Leary from 2010 as the match clashed with Britain 's Got Talent. In June 2006, they announced they had created a new game - show format for ITV, entitled PokerFace. The show featured members of the public gambling high stakes of money in an attempt to win the ultimate prize. The first series began airing on 10 July 2006, and was aired for seven consecutive nights. The second series was broadcast in early 2007, and saw a move to a prime - time Saturday slot. Ratings for the series fell to below 3.5 million, and the series was subsequently axed in March 2007. A board game of the format was released in 2008. In April 2007, the duo signed a two - year golden handcuffs deal with ITV, reportedly worth £ 40 million, securing their career at the station until the end of 2009. In June 2007, they were offered the job of presenters on the new ITV reality show Britain 's Got Talent by Simon Cowell.
The series features contestants aiming to win £ 100,000 and a spot on the bill at the Royal Variety Performance, while performing and being judged by Cowell, actress Amanda Holden and former Daily Mirror editor Piers Morgan. The series was highly successful, drawing in nearly 12 million viewers. A second series was broadcast in May 2008, and drew in nearly 14.5 million viewers. They have since returned to present ten series of the show, and have also had a regular feature on the ITV2 spin - off show Britain 's Got More Talent. The pair filmed a series of six episodes for a new American game show, Wanna Bet?, in November 2007. The episodes were broadcast in 2008, but failed to attract enough interest for a second series to be commissioned. What You Wrote, another format created by the duo, was due to air in Autumn 2008 but was reportedly axed by ITV. In 2010, the duo debuted a replacement for Ant & Dec 's Saturday Night Takeaway, entitled Ant & Dec 's Push the Button. The series was a success, albeit not on the same scale as Saturday Night Takeaway, and a second run of the programme was broadcast in 2011, but Ant and Dec later dropped the show in favour of reviving Saturday Night Takeaway. Ant & Dec have also presented the game show Red or Black?, a creation of Cowell 's, airing live on ITV in 2011 with a second series in 2012, but this was not a ratings success and was cancelled after the second series. On 24 December 2011, they presented ITV 's charity initiative Text Santa with Holly Willoughby. Text Santa returned in 2012, 2013 and 2014 with Ant & Dec co-hosting alongside Christine Bleakley, Phillip Schofield, Holly Willoughby, Alesha Dixon and Paddy McGuinness. In January 2016, Ant and Dec presented When Ant and Dec Met The Prince: 40 Years of The Prince 's Trust, a one - off documentary for ITV. In 2016, they also presented The Queen 's 90th Birthday Celebration, broadcast live on ITV. In November 2016 the pair signed a new three - year deal with ITV estimated to be worth £ 30 million. In 2006, a celebration of the show Spitting Image saw Ant and Dec have their own puppets made. They have also been made into cartoon characters on the comedy show 2DTV, and face masks in Avid Merrion 's Bo Selecta. Waxworks of the duo could once be found in London 's Madame Tussauds. In April 2008, it was reported that Ant & Dec 's production company, Gallowgate Productions, had purchased the rights to Byker Grove and SMTV Live, after the production companies that made them, Zenith Entertainment and Blaze Television, had both gone bankrupt in 2007. According to reports, the duo decided to purchase the rights to stop digital channels showing repeats of the programmes. On 28 September 2008, it was reported that the pair had been attacked by the Taliban whilst in Afghanistan to present a Pride of Britain Award. In December 2008, the duo starred in a seasonal advert, their first in seven years, for the supermarket chain Sainsbury 's. The duo appeared alongside chef Jamie Oliver. In March 2009, the duo filmed a short film for Comic Relief, which documented their visit to a community centre for young carers in the North East. In September 2009, the duo released their official autobiography, entitled "Ooh! What a Lovely Pair. Our Story ''. In October 2010, the duo appeared in several Nintendo adverts playing both the Wii and Nintendo DS. In 2011 and 2014, they both appeared on the ITV2 comedy panel show Celebrity Juice.
From February 2013 to March 2015 they appeared in adverts for the supermarket Morrisons. Between February 2016 and March 2018, they appeared in adverts for the car company Suzuki. In 2015, the pair made a cameo appearance on the U.S. adaptation of Saturday Night Takeaway, NBC 's Best Time Ever with Neil Patrick Harris. The duo are also executive producers on the show. On 10 June 2016 it was announced that the duo would be awarded OBE status by Her Majesty Queen Elizabeth II at an investiture later that year. The pair said they were both "shocked, but incredibly honoured. '' Anthony McPartlin and Declan Donnelly collected their awards for services to broadcasting and entertainment at Buckingham Palace from Prince Charles on 27 January 2017, giving a brief statement after the ceremony. 2018 was notable in the pair 's career for an incident on 18 March 2018, when McPartlin was involved in a road traffic accident in London following a live episode of Saturday Night Takeaway and was arrested on suspicion of drink - driving. The following day he suspended his presenting duties for the rest of Saturday Night Takeaway and for Britain 's Got Talent. Declan Donnelly therefore hosted the remaining two episodes of Takeaway and the Britain 's Got Talent live episodes by himself -- the first time in 30 years he had presented without McPartlin, who had returned to rehab. However, McPartlin did still appear in a pre-recorded segment of Takeaway and in the audition episodes of Britain 's Got Talent (series 12), as these had already been filmed in January and February that year. Law firm Olswang were commissioned to investigate the 2005 British Comedy Awards after the producers overturned the voting public 's first choice, The Catherine Tate Show, in favour of Ant and Dec 's Saturday Night Takeaway for the People 's Choice Award. The incident was also the subject of an investigation by media regulator Ofcom. Following allegations of fraud in 2007, an investigation by auditors Deloitte Touche Tohmatsu discovered that two shows, Ant & Dec 's Gameshow Marathon and Ant & Dec 's Saturday Night Takeaway, had defrauded viewers participating in phone - ins. The programmes were subject to a further investigation by Ofcom, which found that Ant & Dec 's Saturday Night Takeaway (between January 2003 and October 2006) and Ant & Dec 's Gameshow Marathon (between September and October 2005) had misled viewers taking part in premium - rate phone - ins. The pair were ridiculed for their alleged participation in the fraud on the front cover of the satirical magazine Private Eye. On 10 September 2008, Ant & Dec announced that the frauds "will never happen again '', insisting that a "high - tech system '' and strict rules would ensure viewers could not lose out through poorly monitored premium - rate phone lines. On 30 September 2008, it was reported that Ant & Dec were being sued for US $ 30 million by the Greek American stand - up comedian and actor ANT for using the name ' Ant ' in the United States. The lawsuit, among other things, alleged trademark infringement and fraud. The suit was dismissed in May 2010. The pair have had a UK registered trademark for ' Ant & Dec ' in the category of ' Entertainment services ' since 2003. They have, albeit infrequently, returned to acting. They played themselves in the film Love Actually (in which Bill Nighy 's character addressed Dec as "Ant or Dec '').
They returned to their Geordie roots in a one - off tribute to The Likely Lads, and also went back to Byker Grove for Geoff 's funeral. In 1998, the pair starred in the pantomime Snow White and the Seven Dwarfs at Sunderland 's Empire Theatre alongside Donnelly 's partner at the time, Clare Buckfield. The show was financially unsuccessful, making £ 20,000 less than it cost to stage, with the duo footing a large share of the shortfall. Ant & Dec 's most recent acting appearance was in the film Alien Autopsy, released in April 2006. The film gained positive reviews, with critics praising the pair 's acting performance, but lost more than half of its budget at the box office. It has since gained a cult following from fans of the pair. In 2013, they reprised their roles as PJ and Duncan on Ant & Dec 's Saturday Night Takeaway.
where does the superior vena cava carry blood to
Superior vena cava - Wikipedia The superior vena cava (SVC) is the superior of the two venae cavae, the great venous trunks that return deoxygenated blood from the systemic circulation to the right atrium of the heart. It is a large - diameter (24 mm), yet short, vein that receives venous return from the upper half of the body, above the diaphragm. (Venous return from the lower half, below the diaphragm, flows through the inferior vena cava.) The SVC is located in the anterior right superior mediastinum. It is the typical site of central venous access (CVA) via a central venous catheter or a peripherally inserted central catheter. Mentions of "the cava '' without further specification usually refer to the SVC. It is formed, behind the lower border of the first right costal cartilage, by the union of the left and right brachiocephalic veins (also referred to as the innominate veins), which receive blood from the upper limbs, eyes and neck. It passes vertically downwards behind the first intercostal space and receives the azygos vein just before it pierces the fibrous pericardium opposite the right second costal cartilage; its lower part is intrapericardial. It then ends in the upper and posterior part of the sinus venarum of the right atrium, at the upper right front portion of the heart. It is also known as the cranial vena cava in other animals. No valve divides the superior vena cava from the right atrium. As a result, the (right) atrial and (right) ventricular contractions are conducted up into the internal jugular vein and, through the sternocleidomastoid muscle, can be seen as the jugular venous pressure. In tricuspid valve regurgitation, these pulsations are very strong. Superior vena cava obstruction refers to a partial or complete obstruction of the superior vena cava, typically in the context of cancer such as a cancer of the lung, metastatic cancer, or lymphoma. Obstruction can lead to enlarged veins in the head and neck, and may also cause breathlessness, cough, chest pain, and difficulty swallowing. Pemberton 's sign may be positive. Tumours causing obstruction may be treated with chemotherapy and / or radiotherapy to reduce their effects, and corticosteroids may also be given. (Accompanying image captions: diagram showing completion of development of the parietal veins; front view of heart and lungs; heart seen from above; transverse section of thorax, showing relations of the pulmonary artery; the arch of the aorta and its branches; the brachiocephalic veins, superior vena cava, inferior vena cava, azygos vein, and their tributaries.)
who sang that the way god planned it
That 's the Way God Planned It - Wikipedia That 's the Way God Planned It is the fourth studio album by American musician Billy Preston, released in August 1969 on Apple Records. The album followed Preston 's collaboration with the Beatles on their "Get Back '' single and was produced by George Harrison. The title track became a hit in the UK when issued as a single. Aside from Harrison, other contributors to the album include Keith Richards, Eric Clapton and Doris Troy. Derek Taylor 's sleevenotes to the original Apple release praised Preston as a wonderful new signing: "Billy Preston is the best thing to happen to Apple this year. He 's young and beautiful and kind and he sings and plays like the son of God. '' Preston himself also contributed a passage to the notes. Record Collector 's reviewer writes that "(The album reveals) the organist to be an accomplished, spiritually engaging singer - songwriter. '' In his preview of Apple Records ' 2010 reissues for Rolling Stone, David Fricke lists That 's the Way God Planned It among his top five non-Beatle Apple albums. Fricke writes of the song "That 's the Way God Planned It '': "(Preston) would have bigger hits in the Seventies but never make a better one than this album 's rapturous title track... The rest of the album is solid church - infused soul, with Preston covering both Bob Dylan and W.C. Handy. '' Reviewing the album for Blues & Soul magazine, Sharon Davis writes that "this is an extremely worthy release; reminding us of Billy 's enormous and irreplaceable contribution to music. '' All songs by Billy Preston, except where noted.
what is the triplets birthday in this is us
This Is Us (TV series) - Wikipedia This Is Us is an American television drama series created by Dan Fogelman that premiered on NBC on September 20, 2016. The series stars an ensemble cast featuring Milo Ventimiglia, Mandy Moore, Sterling K. Brown, Chrissy Metz, Justin Hartley, Susan Kelechi Watson, Chris Sullivan, Ron Cephas Jones, Jon Huertas, Alexandra Breckenridge, Niles Fitch, Logan Shroyer, Hannah Zeile, Mackenzie Hancsicsak, Parker Bates, Lonnie Chavis, Eris Baker, and Faithe Herman. The program details the lives and families of two parents and their three children, who were born on the same day as their father 's birthday. This Is Us is filmed in Los Angeles. The series has received positive reviews and has been nominated for Best Television Series -- Drama at the 74th Golden Globe Awards and Best Drama Series at the 7th Critics ' Choice Awards, as well as being chosen as a Top Television Program by the American Film Institute. Sterling K. Brown has received an Emmy, a Golden Globe, a Critics ' Choice Award, and an NAACP Image Award for his acting in the series. Mandy Moore and Chrissy Metz received Golden Globe nominations for Best Supporting Actress. In 2017, the series received ten Emmy nominations, including Outstanding Drama Series, with Brown winning for Outstanding Lead Actor in a Drama Series. On September 27, 2016, NBC picked up the series for a full season of 18 episodes. In January 2017, NBC renewed the series for two additional seasons of 18 episodes each. The second season premiered on September 26, 2017. The series follows the lives of siblings Kevin, Kate, and Randall (known as the "Big Three ''), and their parents Jack and Rebecca Pearson. It takes place in the present and, using flashbacks, at various times in the past. Kevin and Kate are the two surviving members of a triplet pregnancy, born six weeks premature on Jack 's 36th birthday in 1980; their brother is stillborn. Believing they were meant to have three children, Jack and Rebecca, who are white, decide to adopt Randall, a black child born the same day and brought to the same hospital after his biological father abandoned him at a fire station. Jack dies when his children are 17. Most episodes feature a storyline taking place in the present (2016 -- 2018, contemporaneous with airing) and a storyline taking place at a set time in the past; but some episodes are set in one time period or use multiple flashback time periods. Flashbacks often focus on Jack and Rebecca c. 1980, both before and after their babies ' birth, or on the family when the Big Three are children (ages 8 -- 10) or adolescents; these scenes usually take place in Pittsburgh, where the Big Three are born and raised. Various other time periods and locations have also served as settings. As adults, Kate lives in Los Angeles, Randall and his family are in New Jersey, and Kevin relocates from Los Angeles to New York City. In May 2017, Hulu acquired the SVOD rights to new and past episodes of the series to air exclusively on Hulu, in addition to NBC.com and the NBC app. The review aggregation website Rotten Tomatoes reported an 89 % approval rating for the first season with an average rating of 7.72 / 10 based on 63 reviews. The website 's critical consensus reads, "Featuring full - tilt heartstring - tugging family drama, This Is Us will provide a suitable surrogate for those who have felt a void in their lives since Parenthood went off the air.
'' Metacritic, which uses a weighted average, assigned the season a score of 76 out of 100 based on 34 reviews, indicating "generally favorable reviews ''. Season 2 received a 94 % approval rating from Rotten Tomatoes based on 17 reviews. Entertainment Weekly gave the first few episodes of This Is Us a rating of B, calling it "a refreshing respite from the relational violence and pessimism that marks the other buzz soaps that have bubbled forth from a culture of divisiveness ''. Moreover, they praised all the actors, specifically Sterling K. Brown, for being able to navigate "his scenes with such intelligence, authenticity, and charisma ''.
who was green bay packers quarterback before brett favre
List of Green Bay Packers starting quarterbacks - Wikipedia The Green Bay Packers are a professional American football team based in Green Bay, Wisconsin. They are members of the North Division of the National Football Conference (NFC) and are the third - oldest franchise in the National Football League (NFL). The club was founded in 1919 by coach, player, and future Hall of Fame inductee Curly Lambeau and sports and telegraph editor George Whitney Calhoun. The Packers competed against local teams for two seasons before entering the NFL in 1921. The Packers have had 46 starting quarterbacks (QB) in the history of their franchise. The Packers ' past starting quarterbacks include Pro Football Hall of Fame inductees Curly Lambeau, Tony Canadeo, Arnie Herber, Bart Starr and Brett Favre. The team 's first starting quarterback was Norm Barry, while the longest - serving was Brett Favre. The Packers ' starting quarterback for the 2016 season was Aaron Rodgers, who was playing in his 12th season in the NFL. The quarterbacks are listed in order of the date of each player 's first start at quarterback for the Packers.
who did the romans learn the technology of shipbuilding from
Roman navy - Wikipedia The Roman navy (Latin: Classis, lit. "fleet '') comprised the naval forces of the Ancient Roman state. The navy was instrumental in the Roman conquest of the Mediterranean basin, but it never enjoyed the prestige of the Roman legions. Throughout their history, the Romans remained a primarily land - based people and relied partially on their more nautically inclined subjects, such as the Greeks and the Egyptians, to build their ships. Because of that, the navy was never completely embraced by the Roman state and was deemed somewhat "un-Roman ''. In Antiquity, navies and trading fleets did not have the logistical autonomy that modern ships and fleets possess. Unlike modern naval forces, the Roman navy, even at its height, never existed as an autonomous service but operated as an adjunct to the Roman army. During the course of the First Punic War, the Roman navy was massively expanded and played a vital role in the Roman victory and the Roman Republic 's eventual ascension to hegemony in the Mediterranean Sea. In the course of the first half of the 2nd century BC, Rome went on to destroy Carthage and subdue the Hellenistic kingdoms of the eastern Mediterranean, achieving complete mastery of the inland sea, which they called Mare Nostrum. The Roman fleets were again prominent in the 1st century BC in the wars against the pirates, and in the civil wars that brought down the Republic, whose campaigns ranged across the Mediterranean. In 31 BC, the great naval Battle of Actium ended the civil wars, culminating in the final victory of Augustus and the establishment of the Roman Empire. During the Imperial period, the Mediterranean became largely a peaceful "Roman lake ''. In the absence of a maritime enemy, the navy was reduced mostly to patrol, anti-piracy and transport duties. The navy also manned and maintained craft on major frontier rivers such as the Rhine and the Danube for supplying the army. On the fringes of the Empire, in new conquests or, increasingly, in defense against barbarian invasions, the Roman fleets were still engaged in open warfare. The decline of the Empire in the 3rd century took a heavy toll on the navy, which was reduced to a shadow of its former self, both in size and in combat ability. As successive waves of the Völkerwanderung crashed on the land frontiers of the battered Empire, the navy could only play a secondary role. In the early 5th century, the Roman frontiers were breached, and barbarian kingdoms appeared on the shores of the western Mediterranean. One of them, the Vandal Kingdom, raised a navy of its own and raided the shores of the Mediterranean, even sacking Rome, while the diminished Roman fleets were incapable of offering any resistance. The Western Roman Empire collapsed in the late 5th century. The navy of the surviving eastern Roman Empire is known as the Byzantine navy. The exact origins of the Roman fleet are obscure. A traditionally agricultural and land - based society, the Romans rarely ventured out to sea, unlike their Etruscan neighbours. There is evidence of Roman warships in the early 4th century BC, such as mention of a warship that carried an embassy to Delphi in 394 BC, but at any rate, the Roman fleet, if it existed, was negligible. The traditional birth date of the Roman navy is set at ca. 311 BC, when, after the conquest of Campania, two new officials, the duumviri navales classis ornandae reficiendaeque causa, were tasked with the maintenance of a fleet.
As a result, the Republic acquired its first fleet, consisting of 20 ships, most likely triremes, with each duumvir commanding a squadron of 10 ships. However, the Republic continued to rely mostly on her legions for expansion in Italy; the navy was most likely geared towards combating piracy and lacked experience in naval warfare, being easily defeated in 282 BC by the Tarentines. This situation continued until the First Punic War: the main task of the Roman fleet was patrolling along the Italian coast and rivers, protecting seaborne trade from piracy. Whenever larger tasks had to be undertaken, such as the naval blockade of a besieged city, the Romans called on the allied Greek cities of southern Italy, the socii navales, to provide ships and crews. It is possible that the supervision of these maritime allies was one of the duties of the four new quaestores classici, who were established in 267 BC. The first Roman expedition outside mainland Italy was against the island of Sicily in 265 BC. This led to the outbreak of hostilities with Carthage, which would last until 241 BC. At the time, the Punic city was the unchallenged master of the western Mediterranean, possessing a long maritime and naval experience and a large fleet. Although Rome had relied on her legions for the conquest of Italy, operations in Sicily had to be supported by a fleet, and the ships made available by Rome 's allies were insufficient. Thus in 261 BC, the Roman Senate set out to construct a fleet of 100 quinqueremes and 20 triremes. According to Polybius, the Romans seized a shipwrecked Carthaginian quinquereme, and used it as a blueprint for their own ships. The new fleets were commanded by the annually elected Roman magistrates, but naval expertise rested with the lower officers, who continued to be drawn from the socii, mostly Greeks. This practice was continued until well into the Empire, something also attested by the direct adoption of numerous Greek naval terms. Despite the massive buildup, the Roman crews remained inferior in naval experience to the Carthaginians, and could not hope to match them in naval tactics, which required great maneuverability and experience. They therefore employed a novel weapon which transformed sea warfare to their advantage. They equipped their ships with the corvus, possibly developed earlier by the Syracusans against the Athenians. This was a long plank with a spike for hooking onto enemy ships. Using it as a boarding bridge, marines were able to board an enemy ship, transforming sea combat into a version of land combat, where the Roman legionaries had the upper hand. However, it is believed that the corvus ' weight made the ships unstable and could cause them to capsize in rough seas. Although the first sea engagement of the war, the Battle of the Lipari Islands in 260 BC, was a defeat for Rome, the forces involved were relatively small. Through the use of the corvus, the fledgling Roman navy under Gaius Duilius won its first major engagement later that year at the Battle of Mylae. During the course of the war, Rome continued to be victorious at sea: victories at Sulci (258 BC) and Tyndaris (257 BC) were followed by the massive Battle of Cape Ecnomus, where the Roman fleet under the consuls Marcus Atilius Regulus and Lucius Manlius inflicted a severe defeat on the Carthaginians. This string of successes allowed Rome to push the war further across the sea to Africa and Carthage itself.
Continued Roman success also meant that their navy gained significant experience, although it also suffered a number of catastrophic losses due to storms, while conversely, the Carthaginian navy suffered from attrition. The Battle of Drepana in 249 BC resulted in the only major Carthaginian sea victory, forcing the Romans to equip a new fleet from donations by private citizens. In the last battle of the war, at the Aegates Islands in 241 BC, the Romans under Gaius Lutatius Catulus displayed superior seamanship to the Carthaginians, notably using their rams rather than the now - abandoned corvus to achieve victory. After the Roman victory, the balance of naval power in the Western Mediterranean had shifted from Carthage to Rome. This ensured Carthaginian acquiescence to the conquest of Sardinia and Corsica, and also enabled Rome to deal decisively with the threat posed by the Illyrian pirates in the Adriatic. The Illyrian Wars marked Rome 's first involvement with the affairs of the Balkan peninsula. Initially, in 229 BC, a fleet of 200 warships was sent against Queen Teuta, and swiftly expelled the Illyrian garrisons from the Greek coastal cities of modern - day Albania. Ten years later, the Romans sent another expedition in the area against Demetrius of Pharos, who had rebuilt the Illyrian navy and engaged in piracy as far as the Aegean. Demetrius was supported by Philip V of Macedon, who had grown anxious at the expansion of Roman power in Illyria. The Romans were again quickly victorious and expanded their Illyrian protectorate, but the beginning of the Second Punic War (218 -- 201 BC) forced them to divert their resources westwards for the next decades. Due to Rome 's command of the seas, Hannibal, Carthage 's great general, was forced to eschew a sea - borne invasion, instead choosing to bring the war over land to the Italian peninsula. Unlike the first war, the navy played little role on either side in this war. The only naval encounters occurred in the first years of the war, at Lilybaeum (218 BC) and the Ebro River (217 BC), both resulting in Roman victories. Despite an overall numerical parity, for the remainder of the war the Carthaginians did not seriously challenge Roman supremacy. The Roman fleet was hence engaged primarily with raiding the shores of Africa and guarding Italy, a task which included the interception of Carthaginian convoys of supplies and reinforcements for Hannibal 's army, as well as keeping an eye on a potential intervention by Carthage 's ally, Philip V. The only major action in which the Roman fleet was involved was the siege of Syracuse in 214 - 212 BC with 130 ships under Marcus Claudius Marcellus. The siege is remembered for the ingenious inventions of Archimedes, such as mirrors that burned ships or the so - called "Claw of Archimedes '', which kept the besieging army at bay for two years. A fleet of 160 vessels was assembled to support Scipio Africanus ' army in Africa in 202 BC and, should his expedition fail, to evacuate his men. In the event, Scipio achieved a decisive victory at Zama, and the subsequent peace stripped Carthage of its fleet. Rome was now the undisputed master of the Western Mediterranean, and turned her gaze from defeated Carthage to the Hellenistic world. Small Roman forces had already been engaged in the First Macedonian War, when, in 214 BC, a fleet under Marcus Valerius Laevinus had successfully prevented Philip V from invading Illyria with his newly built fleet.
The rest of the war was carried out mostly by Rome 's allies, the Aetolian League and later the Kingdom of Pergamon, but a combined Roman - Pergamene fleet of ca. 60 ships patrolled the Aegean until the war 's end in 205 BC. In this conflict, Rome, still embroiled in the Punic War, was not interested in expanding her possessions, but rather in thwarting the growth of Philip 's power in Greece. The war ended in an effective stalemate, and was renewed in 201 BC, when Philip V invaded Asia Minor. A naval battle off Chios ended in a costly victory for the Pergamene - Rhodian alliance, in which the Macedonian fleet lost many warships, including its flagship, a deceres. Soon after, Pergamon and Rhodes appealed to Rome for help, and the Republic was drawn into the Second Macedonian War. In view of the massive Roman naval superiority, the war was fought on land, with the Macedonian fleet, already weakened at Chios, not daring to venture out of its anchorage at Demetrias. After the crushing Roman victory at Cynoscephalae, the terms imposed on Macedon were harsh, and included the complete disbandment of her navy. Almost immediately following the defeat of Macedon, Rome became embroiled in a war with the Seleucid Empire. This war too was decided mainly on land, although the combined Roman - Rhodian navy also achieved victories over the Seleucids at Myonessus and Eurymedon. These wars invariably concluded with the imposition of peace treaties that prohibited the maintenance of anything but token naval forces, and they spelled the disappearance of the Hellenistic royal navies, leaving Rome and her allies unchallenged at sea. Coupled with the final destruction of Carthage and the end of Macedon 's independence, by the latter half of the 2nd century BC Roman control over all of what was later to be dubbed mare nostrum ("our sea '') had been established. Subsequently, the Roman navy was drastically reduced, depending on its socii navales. In the absence of a strong naval presence however, piracy flourished throughout the Mediterranean, especially in Cilicia, but also in Crete and other places, further reinforced by money and warships supplied by King Mithridates VI of Pontus, who hoped to enlist their aid in his wars against Rome. In the First Mithridatic War (89 -- 85 BC), Sulla had to requisition ships wherever he could find them to counter Mithridates ' fleet. Despite the makeshift nature of the Roman fleet, however, in 86 BC Lucullus defeated the Pontic navy at Tenedos. Immediately after the end of the war, a permanent force of ca. 100 vessels was established in the Aegean from the contributions of Rome 's allied maritime states. Although sufficient to guard against Mithridates, this force was totally inadequate against the pirates, whose power grew rapidly. Over the next decade, the pirates defeated several Roman commanders, and raided unhindered even to the shores of Italy, reaching Rome 's harbor, Ostia. According to the account of Plutarch, "the ships of the pirates numbered more than a thousand, and the cities captured by them four hundred. '' Their activity posed a growing threat for the Roman economy, and a challenge to Roman power: several prominent Romans, including two praetors with their retinue and the young Julius Caesar, were captured and held for ransom. Perhaps most important of all, the pirates disrupted Rome 's vital lifeline, namely the massive shipments of grain and other produce from Africa and Egypt that were needed to sustain the city 's population.
The resulting grain shortages were a major political issue, and popular discontent threatened to become explosive. In 74 BC, with the outbreak of the Third Mithridatic War, Marcus Antonius (the father of Mark Antony) was appointed praetor with extraordinary imperium against the pirate threat, but signally failed in his task: he was defeated off Crete in 72 BC, and died shortly after. Finally, in 67 BC the Lex Gabinia was passed in the Plebeian Council, vesting Pompey with unprecedented powers and authorizing him to move against the pirates. In a massive and concerted campaign, Pompey cleared the seas of the pirates in only three months. Afterwards, the fleet was reduced again to policing duties against intermittent piracy. In 56 BC, for the first time a Roman fleet engaged in battle outside the Mediterranean. This occurred during Julius Caesar 's Gallic Wars, when the maritime tribe of the Veneti rebelled against Rome. Against the Veneti, the Romans were at a disadvantage, since they did not know the coast, and were inexperienced in fighting in the open sea with its tides and currents. Furthermore, the Veneti ships were superior to the light Roman galleys: they were built of oak, making them more resistant to ramming, and carried no oars. In addition, their greater height gave them an advantage in both missile exchanges and boarding actions. In the event, when the two fleets encountered each other in Quiberon Bay, Caesar 's navy, under the command of D. Brutus, resorted to the use of hooks on long poles, which cut the halyards supporting the Veneti sails. Immobile, the Veneti ships were easy prey for the legionaries who boarded them, and fleeing Veneti ships were taken when they were becalmed by a sudden lack of wind. Having thus established his control of the English Channel, in the next years Caesar used this newly built fleet to carry out two invasions of Britain. The last major campaigns of the Roman navy in the Mediterranean until the late 3rd century AD would be in the civil wars that ended the Republic. In the East, the Republican faction quickly established its control, and Rhodes, the last independent maritime power in the Aegean, was subdued by Gaius Cassius Longinus in 43 BC, after its fleet was defeated off Kos. In the West, against the triumvirs stood Sextus Pompeius, who had been given command of the Italian fleet by the Senate in 43 BC. He took control of Sicily and made it his base, blockading Italy and stopping the politically crucial supply of grain from Africa to Rome. After suffering a defeat at the hands of Sextus in 42 BC, Octavian initiated massive naval armaments, aided by his closest associate, Marcus Agrippa: ships were built at Ravenna and Ostia, the new artificial harbor of Portus Julius was built at Cumae, and soldiers and rowers were levied, including over 20,000 manumitted slaves. Finally, Octavian and Agrippa defeated Sextus in the Battle of Naulochus in 36 BC, putting an end to all Pompeian resistance. Octavian 's power was further enhanced after his victory against the combined fleets of Mark Antony and Cleopatra, Queen of Egypt, in the Battle of Actium in 31 BC, where Antony had assembled 500 ships against Octavian 's 400 ships. This last naval battle of the Roman Republic definitively established Octavian as the sole ruler over Rome and the Mediterranean world. In the aftermath of his victory, he formalized the fleet 's structure, establishing several key harbors in the Mediterranean (see below).
The duties of the now fully professional navy consisted mainly of protecting against piracy, escorting troops, and patrolling the river frontiers of Europe. It remained however engaged in active warfare in the periphery of the Empire. Under Augustus, after the conquest of Egypt, there were increasing demands from the Roman economy to extend the trade lanes to India. Arabian control of all sea routes to India was an obstacle. One of the first naval operations under princeps Augustus was therefore the preparation for a campaign on the Arabian Peninsula. Aelius Gallus, the prefect of Egypt, ordered the construction of 130 transports and subsequently carried 10,000 soldiers to Arabia. But the following march through the desert towards Yemen failed, and the plans for control of the Arabian peninsula had to be abandoned. At the other end of the Empire, in Germania, the navy played an important role in the supply and transport of the legions. In 15 BC an independent fleet was installed on Lake Constance. Later, the generals Drusus and Tiberius used the navy extensively when they tried to extend the Roman frontier to the Elbe. In 12 BC Drusus ordered the construction of a fleet of 1,000 ships and sailed them along the Rhine into the North Sea. The Frisii and Chauci had nothing with which to oppose the superior numbers, tactics and technology of the Romans. When the Romans entered the mouths of the Weser and Ems, the local tribes had to surrender. In 5 BC, Roman knowledge of the North Sea and the Baltic was considerably extended during a campaign by Tiberius, which reached as far as the Elbe: Pliny describes how Roman naval formations came past Heligoland and set sail for the north - eastern coast of Denmark, and Augustus himself boasts in his Res Gestae: "My fleet sailed from the mouth of the Rhine eastward as far as the lands of the Cimbri to which, up to that time, no Roman had ever penetrated either by land or by sea... ''. The naval operations north of Germania had to be abandoned after the Battle of the Teutoburg Forest in 9 AD. In the years 15 and 16, Germanicus carried out several fleet operations along the rivers Rhine and Ems, without permanent results due to grim Germanic resistance and a disastrous storm. By 28 AD, the Romans had lost control of the Rhine mouth in a succession of Frisian insurgencies. From 43 to 85, the Roman navy played an important role in the Roman conquest of Britain, with the Classis Germanica rendering outstanding service in numerous landing operations. In 46, a naval expedition made a push deep into the Black Sea region and even travelled up the Tanais. In 47 a revolt by the Chauci, who had taken to piratical activities along the Gallic coast, was subdued by Gnaeus Domitius Corbulo. By 57 an expeditionary corps reached Chersonesos (see Charax, Crimea). It seems that under Nero the navy obtained strategically important positions for trading with India, but there was no known fleet in the Red Sea; possibly, parts of the Alexandrian fleet operated as escorts for the Indian trade. During the Jewish revolt, from 66 to 70, the Romans were forced to fight Jewish ships operating from a harbor in the area of modern Tel Aviv, on Israel 's Mediterranean coast, while several flotilla engagements took place on the Sea of Galilee. In 68, as his reign became increasingly insecure, Nero raised legio I Adiutrix from sailors of the praetorian fleets.
After Nero 's overthrow, in 69, the "Year of the Four Emperors '', the praetorian fleets supported Emperor Otho against the usurper Vitellius, and after Vespasian 's eventual victory, he formed another legion, legio II Adiutrix, from their ranks. Only in the Pontus did Anicetus, the commander of the Classis Pontica, support Vitellius. He burned the fleet, and sought refuge with the Iberian tribes, engaging in piracy. After a new fleet was built, this revolt was subdued. During the Batavian rebellion of Gaius Julius Civilis (69 -- 70), the rebels got hold of a squadron of the Rhine fleet by treachery, and the conflict featured frequent use of the Roman Rhine flotilla. In the last phase of the war, the British fleet and legio XIV were brought in from Britain to attack the Batavian coast, but the Cananefates, allies of the Batavians, were able to destroy or capture a large part of the fleet. In the meantime, the new Roman commander, Quintus Petillius Cerialis, advanced north and constructed a new fleet. Civilis risked only a brief encounter with his own fleet, and could not hinder the superior Roman force from landing and ravaging the island of the Batavians; a peace was negotiated soon after. In the years 82 to 85, the Romans under Gnaeus Julius Agricola launched a campaign against the Caledonians in modern Scotland. In this campaign the Roman navy significantly escalated its activities on the eastern Scottish coast, launching multiple expeditions and reconnaissance missions; in their course, the Romans captured the Orkney Islands (Orcades) for a short period of time and obtained information about the Shetland Islands. There is some speculation about a Roman landing in Ireland, based on Tacitus ' reports of Agricola contemplating the island 's conquest, but no conclusive evidence to support this theory has been found. Under the Five Good Emperors the navy operated mainly on the rivers: it played an important role during Trajan 's conquest of Dacia, and an independent fleet for the Euphrates and Tigris rivers was temporarily established. During the wars against the Marcomanni confederation under Marcus Aurelius, several engagements also took place on the Danube and the Tisza. Under the Severan dynasty, the only known military operations of the navy were carried out under Septimius Severus, who used naval assistance on his campaigns along the Euphrates and Tigris, as well as in Scotland; Roman ships thereby reached, among other places, the Persian Gulf and the northern reaches of the British Isles. As the 3rd century dawned, the Roman Empire was at its peak. In the Mediterranean, peace had reigned for over two centuries: piracy had been wiped out, and no outside naval threats had arisen. As a result, complacency had set in: naval tactics and technology were neglected, and the Roman naval system had become moribund. After 230, however, and for the next fifty years, the situation changed dramatically. The so - called "Crisis of the Third Century '' ushered in a period of internal turmoil, and the same period saw a renewed series of seaborne assaults, which the imperial fleets proved unable to stem. In the West, Picts and Irish ships raided Britain, while the Saxons raided across the North Sea, forcing the Romans to abandon Frisia. In the East, the Goths and other tribes from modern Ukraine raided in great numbers across the Black Sea. These invasions began during the rule of Trebonianus Gallus, when for the first time Germanic tribes built up their own powerful fleet in the Black Sea.
In two surprise attacks in 256 on Roman naval bases in the Caucasus and near the Danube, numerous ships fell into the hands of the Germanic raiders, whereupon the raids were extended as far as the Aegean Sea; Byzantium, Athens, Sparta and other towns were plundered, and the provincial fleets responsible for the area were heavily debilitated. It was not until the attackers made a tactical error that their onrush could be stopped. In 267 -- 270 another, much fiercer series of attacks took place. A fleet composed of Heruli and other tribes raided the coasts of Thrace and the Pontus. Defeated off Byzantium by the general Venerianus, the barbarians fled into the Aegean, and ravaged many islands and coastal cities, including Athens and Corinth. As they retreated northwards over land, they were defeated by Emperor Gallienus at the Nestos. However, this was merely the prelude to an even larger invasion that was launched in 268 / 269: several tribes banded together (the Historia Augusta mentions Scythians, Greuthungi, Tervingi, Gepids, Peucini, Celts and Heruli) and, allegedly 2,000 ships and 325,000 men strong, raided the Thracian shore, attacked Byzantium and continued raiding the Aegean as far as Crete, while the main force approached Thessalonica. Emperor Claudius II, however, was able to defeat them at the Battle of Naissus, ending the Gothic threat for the time being. Barbarian raids also increased along the Rhine frontier and in the North Sea. Eutropius mentions that during the 280s, the sea along the coasts of the provinces of Belgica and Armorica was "infested with Franks and Saxons ''. To counter them, Maximian appointed Carausius as commander of the British Fleet. However, Carausius rose up in late 286 and seceded from the Empire with Britannia and parts of the northern Gallic coast. With a single blow, Roman control of the Channel and the North Sea was lost, and Maximian was forced to create a completely new northern fleet, which, for lack of training, was almost immediately destroyed in a storm. Only in 293, under the Caesar Constantius Chlorus, did Rome regain the Gallic coast. A new fleet was constructed in order to cross the Channel, and in 296, with a concentric attack on Londinium, the insurgent province was retaken. By the end of the 3rd century, the Roman navy had declined dramatically. Although Emperor Diocletian is held to have strengthened the navy, and increased its manpower from 46,000 to 64,000 men, the old standing fleets had all but vanished, and in the civil wars that ended the Tetrarchy, the opposing sides had to mobilize the resources and commandeer the ships of the Eastern Mediterranean port cities. These conflicts thus brought about a renewal of naval activity, culminating in the Battle of the Hellespont in 324 between the forces of Constantine I under Caesar Crispus and the fleet of Licinius, which was the only major naval confrontation of the 4th century. Vegetius, writing at the end of the 4th century, testifies to the disappearance of the old praetorian fleets in Italy, but comments on the continued activity of the Danube fleet. In the 5th century, only the eastern half of the Empire could field an effective fleet, as it could draw upon the maritime resources of Greece and the Levant. Although the Notitia Dignitatum still mentions several naval units for the Western Empire, these were apparently too depleted to be able to carry out much more than patrol duties.
At any rate, the rise of the naval power of the Vandal Kingdom under Geiseric in North Africa, and its raids in the Western Mediterranean, went practically uncontested. Although there is some evidence of West Roman naval activity in the first half of the 5th century, this is mostly confined to troop transports and minor landing operations. The historians Priscus and Sidonius Apollinaris affirm in their writings that by the mid-5th century, the Western Empire essentially lacked a war navy. Matters became even worse after the disastrous failure of the fleets mobilized against the Vandals in 460 and 468, under the emperors Majorian and Anthemius. For the West, there would be no recovery, as the last Western Emperor, Romulus Augustulus, was deposed in 476. In the East however, the classical naval tradition survived, and in the 6th century, a standing navy was reformed. The East Roman (Byzantine) navy would remain a formidable force in the Mediterranean until the 11th century. The bulk of a galley 's crew was formed by the rowers, the remiges (sing. remex) or, in Greek, the eretai (sing. eretēs). Despite popular perceptions, the Roman fleet, and ancient fleets in general, relied throughout their existence on rowers of free status, and not on galley slaves. Slaves were employed only in times of pressing manpower demands or extreme emergency, and even then, they were freed first. In Imperial times, non-citizen freeborn provincials (peregrini), chiefly from nations with a maritime background such as Greeks, Phoenicians, Syrians and Egyptians, formed the bulk of the fleets ' crews. During the early Principate, a ship 's crew, regardless of its size, was organized as a centuria. Crewmen could sign on as marines (called marinus), rowers / seamen, craftsmen and various other jobs, though all personnel serving in the imperial fleet were classed as milites ("soldiers ''), regardless of their function; only when differentiation from the army was required were the adjectives classiarius or classicus added. Along with several other instances of the prevalence of army terminology, this testifies to the lower social status of naval personnel, considered inferior to the auxiliaries and the legionaries. Emperor Claudius first gave legal privileges to the navy 's crewmen, enabling them to receive Roman citizenship after their period of service. This period was initially set at a minimum of 26 years (one year more than the legions), and was later expanded to 28. Upon honorable discharge (honesta missio), the sailors received a sizable cash payment as well. As in the army, the ship 's centuria was headed by a centurion with an optio as his deputy, while a beneficiarius supervised a small administrative staff. Among the crew were also a number of principales (junior officers) and immunes (specialists exempt from certain duties). Some of these positions, mostly administrative, were identical to those of the army auxiliaries, while some (mostly of Greek provenance) were peculiar to the fleet. An inscription from the island of Cos, dated to the First Mithridatic War, provides us with a list of a ship 's officers, the nautae: the gubernator (kybernētēs in Greek) was the helmsman or pilot, the celeusta (keleustēs in Greek) supervised the rowers, a proreta (prōreus in Greek) was the look - out stationed at the bow, a pentacontarchos was apparently a junior officer, and an iatros (Lat. medicus) was the ship 's doctor. Each ship was commanded by a trierarchus, whose exact relationship with the ship 's centurion is unclear.
Squadrons, most likely of ten ships each, were put under a nauarchus, who often appears to have risen from the ranks of the trierarchi. The post of nauarchus archigubernes or nauarchus princeps appeared later in the Imperial period, and functioned either as a commander of several squadrons or as an executive officer under a civilian admiral, equivalent to the legionary primus pilus. All these were professional officers, usually peregrini, who had a status equal to an auxiliary centurion (and were thus increasingly called centuriones (classiarii) after ca. 70 AD). Until the reign of Antoninus Pius, their careers were restricted to the fleet. Only in the 3rd century were these officers equated to the legionary centurions in status and pay, and could henceforth be transferred to a similar position in the legions. During the Republic, command of a fleet was given to a serving magistrate or promagistrate, usually of consular or praetorian rank. In the Punic Wars, for instance, one consul would usually command the fleet, and another the army. In the subsequent wars in the Eastern Mediterranean, praetors would assume the command of the fleet. However, since these men were political appointees, the actual handling of the fleets and of separate squadrons was entrusted to their more experienced legates and subordinates. It was therefore during the Punic Wars that the separate position of praefectus classis ("fleet prefect '') first appeared. Initially subordinate to the magistrate in command, after the fleet 's reorganization by Augustus, the praefectus classis became a procuratorial position in charge of each of the permanent fleets. These posts were initially filled either from among the equestrian class, or, especially under Claudius, from the Emperor 's freedmen, thus securing imperial control over the fleets. From the period of the Flavian emperors, the status of the praefectura was raised, and only equestrians with military experience who had gone through the militia equestris were appointed. Nevertheless, the prefects remained largely political appointees, and despite their military experience, usually in command of army auxiliary units, their knowledge of naval matters was minimal, forcing them to rely on their professional subordinates. The difference in importance of the fleets they commanded was also reflected by the rank and the corresponding pay of the commanders. The prefects of the two praetorian fleets were ranked procuratores ducenarii, meaning they earned 200,000 sesterces annually, the prefects of the Classis Germanica, the Classis Britannica and later the Classis Pontica were centenarii (i.e. earning 100,000 sesterces), while the other fleet prefects were sexagenarii (i.e. they received 60,000 sesterces). The generic Roman term for an oar - driven galley warship was "long ship '' (Latin: navis longa, Greek: naus makra), as opposed to the sail - driven navis oneraria (from onus, oneris: burden), a merchant vessel, or the minor craft (navigia minora) like the scapha. The navy consisted of a wide variety of different classes of warships, from heavy polyremes to light raiding and scouting vessels. Unlike the rich Hellenistic Successor kingdoms in the East however, the Romans did not rely on heavy warships, with quinqueremes (Gk. pentērēs), and to a lesser extent quadriremes (Gk. tetrērēs) and triremes (Gk. triērēs) providing the mainstay of the Roman fleets from the Punic Wars to the end of the Civil Wars.
The heaviest vessel mentioned in Roman fleets during this period was the hexareme, of which a few were used as flagships. Lighter vessels such as the liburnians and the hemiolia, both swift types invented by pirates, were also adopted as scouts and light transport vessels. During the final confrontation between Octavian and Mark Antony, Octavian 's fleet was composed of quinqueremes, together with some "sixes '' and many triremes and liburnians, while Antony, who had the resources of Ptolemaic Egypt to draw upon, fielded a fleet also mostly composed of quinqueremes, but with a sizeable complement of heavier warships, ranging from "sixes '' to "tens '' (Gk. dekērēs). Later historical tradition made much of the prevalence of lighter and swifter vessels in Octavian 's fleet, with Vegetius even explicitly ascribing Octavian 's victory to the liburnians. This prominence of lighter craft in the historical narrative is perhaps best explained in light of subsequent developments. After Actium, the operational landscape had changed: for the remainder of the Principate, no opponent existed to challenge Roman naval hegemony, and no massed naval confrontation was likely. The tasks at hand for the Roman navy were now the policing of the Mediterranean waterways and the border rivers, suppression of piracy, and escort duties for the grain shipments to Rome and for imperial army expeditions. Lighter ships were far better suited to these tasks, and after the reorganization of the fleet following Actium, the largest ship kept in service was a hexareme, the flagship of the Classis Misenensis. The bulk of the fleets was composed of the lighter triremes and liburnians (Latin: liburna, Greek: libyrnis), with the latter apparently providing the majority of the provincial fleets. In time, the term "liburnian '' came to mean "warship '' in a generic sense. In addition, there were smaller oared vessels, such as the navis actuaria, with 30 oars (15 on each bank), a ship primarily used for transport in coastal and fluvial operations, for which its shallow draught and flat keel were ideal. In late Antiquity, it was succeeded in this role by the navis lusoria ("playful ship ''), which was extensively used for patrols and raids by the legionary flotillas on the Rhine and Danube frontiers. Roman ships were commonly named after gods (Mars, Iuppiter, Minerva, Isis), mythological heroes (Hercules), geographical maritime features such as Rhenus or Oceanus, concepts such as Harmony, Peace, Loyalty, Victory (Concordia, Pax, Fides, Victoria) or after important events (Dacicus for Trajan 's Dacian Wars or Salamina for the Battle of Salamis). They were distinguished by their figurehead (insigne or parasemum), and, during the Civil Wars at least, by the paint schemes on their turrets, which varied according to each fleet. In Classical Antiquity, a ship 's main weapon was the ram (Latin rostrum, hence the name navis rostrata for a warship), which was used to sink or immobilize an enemy ship by holing its hull. Its use, however, required a skilled and experienced crew and a fast and agile ship like a trireme or quinquereme. In the Hellenistic period, the larger navies came instead to rely on greater vessels. This had several advantages: the heavier and sturdier construction lessened the effects of ramming, and the greater space and stability of the vessels allowed the transport not only of more marines, but also the placement of deck - mounted ballistae and catapults.
Although the ram continued to be a standard feature of all warships and ramming the standard mode of attack, these developments transformed the role of a warship: from the old "manned missile '', designed to sink enemy ships, they became mobile artillery platforms, which engaged in missile exchange and boarding actions. The Romans in particular, being initially inexperienced at sea combat, relied upon boarding actions through the use of the corvus. Although it brought them several decisive victories, it was discontinued because it tended to unbalance the quinqueremes in high seas; two Roman fleets are recorded to have been lost during storms in the First Punic War. During the Civil Wars, a number of technical innovations, which are attributed to Agrippa, took place: the harpax, a catapult - fired grappling hook, which was used to clamp onto an enemy ship, reel it in and board it, in a much more efficient way than with the old corvus, and the use of collapsible fighting towers placed one each at bow and stern, which were used to provide the boarders with supporting fire. After the end of the civil wars, Augustus reduced and reorganized the Roman armed forces, including the navy. A large part of the fleet of Mark Antony was burned, and the rest was withdrawn to a new base at Forum Iulii (modern Fréjus), which remained operative until the reign of Claudius. However, the bulk of the fleet was soon subdivided into two praetorian fleets at Misenum and Ravenna, supplemented by a growing number of minor ones in the provinces, which were often created on an ad hoc basis for specific campaigns. This organizational structure was maintained almost unchanged until the 4th century. The two major fleets were stationed in Italy and acted as a central naval reserve, directly available to the Emperor (hence the designation "praetorian ''). In the absence of any naval threat, their duties mostly involved patrolling and transport, and were not confined to the waters around Italy but ranged throughout the Mediterranean; there is epigraphic evidence for the presence of sailors of the two praetorian fleets at Piraeus and in Syria. These two fleets were the Classis Misenensis, based at Misenum, and the Classis Ravennas, based at Ravenna. The various provincial fleets were smaller than the praetorian fleets and composed mostly of lighter vessels. Nevertheless, it was these fleets that saw action, in full campaigns or raids on the periphery of the Empire. In addition, there is significant archaeological evidence for naval activity by certain legions, which in all likelihood operated their own squadrons: legio XXII Primigenia on the Upper Rhine and Main rivers, legio X Fretensis on the Jordan River and the Sea of Galilee, and several legionary squadrons on the Danube frontier. Our main source for the structure of the late Roman military is the Notitia Dignitatum, which corresponds to the situation of the 390s for the Eastern Empire and the 420s for the Western Empire. Notable in the Notitia is the large number of smaller squadrons that had been created, most of them fluvial and with a local operational role. The Classis Pannonica and the Classis Moesica were broken up into several smaller squadrons, collectively termed Classis Histrica, under the authority of the frontier commanders (duces), with bases at Mursa in Pannonia II, Florentia in Pannonia Valeria, Arruntum in Pannonia I, Viminacium in Moesia I and Aegetae in Dacia ripensis.
Smaller fleets are also attested on the tributaries of the Danube: the Classis Arlapensis et Maginensis (based at Arelape and Comagena) and the Classis Lauriacensis (based at Lauriacum) in Pannonia I, the Classis Stradensis et Germensis, based at Margo in Moesia I, and the Classis Ratianensis, in Dacia ripensis. The naval units were complemented by port garrisons and marine units drawn from the army, particularly along the Danube frontier. In the West, and in particular in Gaul, several fluvial fleets had been established, which came under the command of the magister peditum of the West. It is notable that, with the exception of the praetorian fleets (whose retention in the list does not necessarily signify an active status), the old fleets of the Principate are missing. The Classis Britannica vanishes under that name after the mid-3rd century; its remnants were later subsumed in the Saxon Shore system. By the time of the Notitia Dignitatum, the Classis Germanica had ceased to exist (it is last mentioned under Julian in 359), most probably due to the collapse of the Rhine frontier after the Crossing of the Rhine by the barbarians in the winter of 405 -- 406, and the Mauretanian and African fleets had been disbanded or taken over by the Vandals. As far as the East is concerned, we know from legal sources that the Classis Alexandrina and the Classis Seleucena continued to operate, and that in ca. 400 a Classis Carpathia was detached from the Syrian fleet and based at the Aegean island of Karpathos. A fleet is known to have been stationed at Constantinople itself, but no further details are known about it. Major Roman naval ports included Misenum and Ravenna, the bases of the two praetorian fleets.
where do heads of state stay in washington
President 's Guest House - Wikipedia The President 's Guest House, commonly known as Blair House, is a complex of four formerly separate buildings -- Blair House, Lee House, Peter Parker House, and 704 Jackson Place -- located in Washington, D.C., the capital of the United States. A major interior renovation of these 19th century residences between the 1950s and 1980s resulted in their reconstitution as a single facility. The President 's Guest House is one of several residences owned by the United States government for use by the President and Vice President of the United States; other such residences include the White House, Camp David, One Observatory Circle, the Presidential Townhouse, and Trowbridge House. The President 's Guest House has been called "the world 's most exclusive hotel '' because it is primarily used to host visiting dignitaries and other guests of the president. It is larger than the White House and closed to the public. Strictly speaking, "Blair House '' refers to one of four existing structures that were merged to form a single building. The U.S. State Department generally uses the name Blair House to refer to the entire facility, saying, "Blair House is the building officially known as The President 's Guest House. '' The General Services Administration refers to the entire complex as the "President 's Guest House '' and uses the name Blair House to denote the historic Blair House portion of the facility. Blair House was constructed in 1824; it is the oldest of the four structures that comprise the President 's Guest House. The original brick house was built as a private home for Joseph Lovell, eighth Surgeon General of the United States Army. It was acquired in 1836 by Francis Preston Blair, a newspaper publisher and influential advisor to President Andrew Jackson, and remained in his family for the following century. Francis Blair 's son Montgomery Blair, who served as Postmaster General in Abraham Lincoln 's administration, succeeded his father as resident of Blair House. At a conference at Blair House in 1861, it was decided that Admiral David Farragut would command an assault on New Orleans during the American Civil War. In 1939, a commemorative marker was placed at Blair House by the United States Department of the Interior, making it the first building to acquire a federally recognized landmark designation; prior landmarks had been monuments and historic sites other than buildings. It would be formally designated a National Historic Landmark in 1973. In 1942, the Blair family began leasing the property to the U.S. government for use by visiting dignitaries; the government purchased the property outright the following December. The move was prompted in part by a request from Eleanor Roosevelt, who found the casual familiarity Winston Churchill displayed during his visits to the White House off - putting. On one occasion, Churchill tried to enter Franklin Roosevelt 's private apartments at 3: 00 a.m. to wake the president for a conversation. For much of Harry Truman 's presidency, Blair House served as his temporary residence while the interior of the White House was being renovated. On November 1, 1950, Puerto Rican nationalists Griselio Torresola and Oscar Collazo attempted to assassinate President Truman in Blair House. The attempt was foiled in part by White House policeman Leslie Coffelt, who killed Torresola but was mortally wounded by him.
In 1859, Francis Preston Blair built a house next to Blair House for his daughter Elizabeth Blair Lee and son - in - law Samuel Phillips Lee; the property became known as Lee House. The Peter Parker House located at 700 Jackson Place and an adjacent home at 704 Jackson Place were constructed in 1860. Peter Parker House is so named because it was originally the home of physician Peter Parker. The U.S. government acquired both properties between 1969 and 1970, after having rented them for office space. Peter Parker House previously served as the headquarters of the Civil War Centennial Commission and the Carnegie Endowment for International Peace, and is, like Blair House, a National Historic Landmark. During a renovation in the early 1950s, Blair House and Lee House were joined into a single facility that was informally known as Blair -- Lee House. In the early 1980s, Congress appropriated $9.7 million for the property 's further renovation and improvement. Federally appropriated funds were augmented with $5 million in private donations. The Jackson Place properties were internally combined into a single building and then merged with Blair -- Lee House by way of a connecting structure occupying the alleyway that had separated them. The renovation and merger of the four properties resulted in their closure from 1982 through 1988. Notable guests who have stayed at the President 's Guest House or the formerly separate Blair House include Vyacheslav Molotov, Emperor Akihito, Queen Elizabeth II, Charles de Gaulle, François Mitterrand, Vladimir Putin, Boris Yeltsin, Hosni Mubarak, Margaret Thatcher, Javier Perez de Cuellar, Nambaryn Enkhbayar, Aung San Suu Kyi, Narendra Modi, Lee Hsien Loong, Hamid Karzai, Benjamin Netanyahu and Justin Trudeau. In addition to foreign dignitaries, the President 's Guest House has traditionally been made available by the outgoing President of the United States to the President - elect in the five days prior to his inauguration. In 1992, Bill Clinton chose to stay at the Hay -- Adams Hotel instead of the guest house and, in 2009, a request by President - elect Barack Obama to take up residence at the President 's Guest House two weeks early was rejected because of its prior commitment to former Australian prime minister John Howard. During the state funeral of a former President of the United States, the former president 's family customarily resides in the guest house for the duration of the observances. The President 's Guest House is located at the intersection of Pennsylvania Avenue and Jackson Place. Its southern side faces the Eisenhower Executive Office Building, while its eastern side faces Lafayette Square. To its western side along Pennsylvania Avenue, it is adjacent to the Renwick Gallery. Its northern side along Jackson Place abuts Trowbridge House, a separate presidential residence. Immediately behind the gardens of the President 's Guest House is the New Executive Office Building. The Ross Garden is an enclosed garden at the rear of the property; it is named after Arthur Ross, who established an endowment to maintain the grounds in perpetuity. The residence consists of 119 rooms, including 14 bedrooms and 35 bathrooms. At nearly 70,000 sq ft (6,500 m²), the President 's Guest House is -- by floor area -- larger than the White House. The Coffelt Memorial Room is located in the basement of the property; it is named after police officer Leslie Coffelt, who was killed while defending Blair House against an attack by Puerto Rican separatists in 1950.
The room is used as a day room by the United States Secret Service Uniformed Division detachment assigned to the property. It was dedicated in 1990 and contains a portrait of Coffelt and his framed medals, which were donated by his step - daughter. The Dillon Drawing Room, which was originally known as the Lee Drawing Room, was renamed in honor of former U.S. Secretary of the Treasury C. Douglas Dillon, who donated its unique wallpaper, a Chinese print from 1770. Dillon 's wife Phyllis purchased the wallpaper on the recommendation of interior designer Eleanor Brown in 1964. The wallpaper was removed and refurbished between 1982 and 1988. The room is furnished with 18th century English pieces, along with Chinese vases from the Ming and Qing (Kangxi reign) dynasties. The Dillon Drawing Room is used by resident heads of state and chiefs of government to formally receive visitors. The head - of - state suite comprises the apartments designated for use by the principal resident. It consists of a sitting room, two bedrooms with adjoining dressing rooms, two bathrooms, and a powder room. It is furnished with 18th - century English antiques, which were valued at more than $1 million in 1987. The small library in the Blair House wing is stocked with approximately 1,500 books. Guests staying at the house traditionally present a book to deposit in the library. A portrait of Francis Blair hangs over the library 's fireplace mantel. The centerpiece of the Lincoln Room is a portrait of Abraham Lincoln painted by 19th - century American portraitist Edward Dalton Marchant; it is one of a number of drawings, paintings, and photographs of Lincoln used to decorate this room. The sitting room in the Blair House wing of the complex was originally used by the Blair family to receive U.S. presidents. In this room in 1861, Francis Preston Blair, acting on Lincoln 's behalf, offered the command of the Union Army to Robert E. Lee, an offer that Lee declined. In the Truman Study is a fireplace mantel that was originally installed in the White House. It was removed to Blair House during Truman 's occupancy, when he used this room as his personal office. In 1987, the mantel was refinished in white enamel with gold - leaf accents. In 2004, before the state funeral of Ronald Reagan, Nancy Reagan used the Truman Study to receive visitors. The centerpiece of the Treaty Room in the former Peter Parker House is a 22 - seat mahogany table that sits on an 1890 Sarouk rug. A photographic portrait of Empress Dowager Cixi that was presented as a diplomatic gift to the United States by Qing Dynasty China in 1905 hangs in the room. The Lee Dining Room is used for formal banquets. It is lit by an 1825 Irish crystal chandelier. One hundred place settings of fine china and 150 place settings of sterling silver flatware were acquired from Tiffany & Co. in 1988 for use in the dining room. The President 's Guest House is owned by the U.S. government and is managed by the office of the Chief of Protocol of the United States in cooperation with the Diplomatic Security Service, the Department of State 's Bureau of Administration, and the Department of State 's Office of Fine Arts. Maintenance and operation of the facility are paid for by the U.S. government. The Blair House Restoration Fund, a private organization, finances the preservation of historic furnishings and art. The board of trustees of the Blair House Restoration Fund is chaired by Selwa Roosevelt.
The house is operated by full - time staff who do not reside on the premises but customarily live in during periods of occupancy by a visiting dignitary. In 2001, the staff included a general manager, an assistant general manager, two butlers, a doorman, four housekeepers, two chefs, a launderer, a curator, and several maintenance workers. Security for the facility is provided by the United States Secret Service during periods of occupancy by foreign heads of state and chiefs of government. During visits by other guests such as foreign ministers, the Diplomatic Security Service assumes the leading role. When a visiting foreign dignitary is in residence at the President 's Guest House, the dignitary 's official standard is displayed on the building 's flagpole. In cases where a dignitary has no official standard, the national flag is displayed instead. If two or more foreign visitors of equal rank are visiting Washington, none of them is invited to stay at the President 's Guest House, to avoid the perception of favoritism.
who won the last season of face off all stars
Face Off (season 11) - Wikipedia The eleventh season of the Syfy reality television series Face Off (styled as Face Off All Stars) premiered on January 24, 2017. This season featured returning contestants ("All - Stars '') from previous seasons. Unlike previous seasons, the contestants did not initially compete individually, but were instead paired in teams of two; each team competed to win immunity during one week, while eliminations took place the following week. Under this layout, no one was eliminated in an immunity week, and the winners carried immunity into the next week 's challenge. From episode 9 forward, the competition became individual, with one contestant eliminated each week. Cig Neutron was declared the winner in the season 's finale. Prizes for the season included a Hyundai Veloster and $100,000. In one challenge, unlike the others, the judges were not present; instead, a focus group made up of Monster High fans judged the artists ' make - ups, with guests Natasha Berling (Vice President of Design) and project designers Rebecca Shipman and Natalie Villegas. Guest judges over the season included John Landis, Paul W.S. Anderson, Suzanne Todd, Marcus Nispel and Lois Burwell.
now my troubles will have trouble with me
I Had Trouble in Getting to Solla Sollew - Wikipedia I Had Trouble in Getting to Solla Sollew is a 1965 children 's book by Dr. Seuss. The story features classic Seuss rhymes and drawings in his distinctive pen and ink style. The book is a first - person narrative told by a young narrator who experiences troubles in his life (mostly aggressive small animals that bite and sting) and wishes to escape them. He sets out for the city of Solla Sollew ("where they never have troubles / at least very few '') and learns that he must face his problems instead of running away from them. He then goes back home to deal with his "troubles '', arming himself with a big bat and resolving that "Now my troubles are going to have troubles with me! '' The journey includes several fantastic, and troublesome, encounters. In one instance, the protagonist is forced to haul a wagon for a bossy companion. ("' This is called teamwork. I furnish the brains. You furnish the muscles, the aches and the pains. ' '') In another scene, he is drafted into the army under the command of the fearsome (and, ultimately, cowardly) General Genghis Kahn Schmitz, who abandons him at a critical moment. As the story opens, the young protagonist (resembling a cat or dog) lives a happy and carefree life in the Valley of Vung, but one day, all that changes when he goes out for a stroll to look at daisies and hurts himself by tripping over a rock, which sets off the troubles he will soon face. The protagonist vows to be more careful, but a green - headed Quilligan Quail bites his tail from behind ("I learned there are troubles of more than one kind; Some come from ahead and some come from behind ''). Worse still, a Skritz dives to sting his neck and a Skrink bites his toe, proving that troubles can come from all directions. As the protagonist tries to fight off his troubles, a man on a One Wheel Wubble drawn by a camel comes up and explains that, like the protagonist, he too is experiencing a troubled life and has decided to escape his troubles by going to Solla Sollew, a city on the beautiful banks of the river Wah - Hoo, and known to never have troubles (at least very few). He invites the protagonist to come along with him. Eager to escape his troubles, the protagonist joins the wubble driver, but after a long night of traveling, the camel gets sick and starts to bubble. At first, the driver and protagonist pull him on the wubble, but for the rest of the day, the driver acts lazy and has the protagonist do all the hard work. The next day they thankfully discover a camel doctor, Dr. Sam Snell, who diagnoses their camel with a bad case of gleeks and orders him to go to bed for at least twenty weeks. The driver makes it up to the protagonist by telling him he can catch the 4: 42 bus to Solla Sollew at the nearest bus stop, but when the protagonist gets to the bus stop he learns from a sign tacked on a stick that the Solla Sollew - bound bus is out of service (the driver, Butch Myers, apparently destroyed all his tires by accidentally running over four nails), leaving him to hike for one hundred miles. Soon, the poor protagonist is caught in a storm. A kindly stranger tells him that the storm is the infamous "Midwinter Jicker '' and allows the protagonist to take shelter in his house, where a family of mice and a family of owls are also taking shelter. After many a sleepless night and dreaming of sleeping in Solla Sollew, the protagonist awakens to find that the flood - waters have washed the house over a cliff, with him still inside.
He spends twelve days in the flood - waters, until somebody rescues him by throwing down a rope. The protagonist climbs the rope, only to discover that his savior is General Genghis Kahn Schmitz, who immediately drafts him into his army for an upcoming battle against the Perilous Poozer of Pompelmoose Pass, a lion - like creature. At the pass, the General discovers he and his army are outnumbered by too many Poozers and orders an immediate retreat without fighting, leaving the protagonist to face the Poozers alone, armed only with "a shooter and one little bean ''. The protagonist manages to escape the Poozers by diving down an air vent marked "Vent No. 5 '' but has to spend the next three days trying to find his way through a network of tunnels inhabited by birds, all going in the wrong direction. Close to the end of the third day, he finally finds a door and discovers he 's come out at the beautiful banks of the river Wah - Hoo. Realizing he 's reached his goal, the protagonist rushes out to Solla Sollew. At the gates of Solla Sollew, the protagonist is greeted by a friendly doorman. The doorman explains to the protagonist the only trouble the city has: a key - slapping slippard has moved into the lock of the door, which happens to be the only way into Solla Sollew, and bugs the doorman by continually slapping the key out of the keyhole whenever he tries to insert it, thereby preventing anyone from entering the city. Because killing a slippard is considered an omen of bad luck, the doorman can not get rid of the pest that way. Instead, he decides to leave Solla Sollew for the city of Boola Boo Ball, on the banks of the beautiful river Woo - Wall, where troubles are completely nonexistent ("No troubles at all! ''), and invites the protagonist to come along. At first, the protagonist considers joining the doorman, but realizing that he 's come all this way for nothing, he instead decides to go back home to the Valley of Vung and deal with his troubles. He recognizes that he will have at least some troubles for the rest of his life, but at least now, he 's ready to face them. Armed with a bat, the protagonist now gives the rocks, quail, skritz, and skrink troubles of their own ("But I 've bought a big bat. I 'm all ready, you see. Now my troubles are going to have troubles with me! ''). In Seussical, General Genghis Kahn Schmitz appears as a secondary character. He introduces JoJo to the military school in song. This sets up a subplot concerning JoJo in which he is thought to be lost in battle. The character of Schmitz in the play is a cross between the Schmitz seen in the book and the unnamed generals in the Butter Battle Book. Solla Sollew is the subject of a song in which the main characters yearn for a happy resolution to their problems. It is referred to as "a faraway land, so the stories all tell / somewhere beyond the horizon. '' It is said that "troubles there are few '' and that "maybe it 's something like heaven. '' Solla Sollew is believed to be a place of hope and wonder, where "breezes are warm '' and "people are kind. '' It is a dream of the characters to find this incredible place, where they will find each other and be happy once and for all. However, they can not ever find it, saying in the song "when I get close, it disappears ''.
names of hobbits from lord of the rings
List of hobbits - Wikipedia In J.R.R. Tolkien 's legendarium, Hobbits are a fictional race related to Men. They first appear in The Hobbit and play an important role in The Lord of the Rings. This is a list of hobbits that are mentioned by name in Tolkien 's works. They are ordered alphabetically by first name. In cases where a hobbit 's family name was changed, usually through marriage, their original family name is given in parentheses. Nicknames are given in quotation marks. Note that the years are given in years of the Third Age (unless otherwise noted), and not according to Shire Reckoning. Bilbo 's farewell party, which is frequently referred to, occurred in T.A. 3001. Jolly Cotton (born 2984): the second of Tolman Cotton 's four sons. Wilcome ' Jolly ' Cotton had been a childhood friend of Sam Gamgee. During the War of the Ring, he helped defend his father 's farm against Sharkey 's Men, and played his part in helping free the Shire. "I think the simple ' rustic ' love of Sam and his Rosie (nowhere elaborated) is absolutely essential to the study of his (the chief hero 's) character, and to the theme of the relation of ordinary life (breathing, eating, working, begetting) and quests, sacrifice, causes, and the ' longing for Elves ', and sheer beauty. '' - J.R.R. Tolkien, letter dated 1951
who was the oldest elected president of the united states of america
List of presidents of the United States by age - Wikipedia This is a list of presidents of the United States by age. The first table charts the age of each United States president at the time of presidential inauguration (the first inauguration for presidents elected to multiple consecutive terms), upon leaving office, and at the time of death. Each president 's lifespan (age at death) is measured in two ways, to allow for the differing number of leap days that each experienced. The first figure is the number of days between the president 's date of birth and date of death, allowing for leap days; in parentheses, the same period is given in years and days, with the years being the number of whole years that the president lived, and the days being the number of days since the last birthday. Where the president is still living, their lifespan is calculated up to August 27, 2018. The second table includes those presidents who had the distinction among their peers of being the oldest living president, and charts both when they became and when they ceased to be the oldest living president. The median age upon accession to the presidency is 55 years and 3 months. This is how old Lyndon B. Johnson was at the time of his inauguration. The youngest person to assume the office was Theodore Roosevelt, who became president at the age of 42 years, 322 days, following William McKinley 's assassination; the oldest was Donald Trump, who was 70 years, 220 days old at his inauguration. The youngest person to be elected president was John F. Kennedy, at 43 years, 163 days of age on election day; the oldest was Ronald Reagan, who was 73 years, 274 days old at the time of his election to a second term. Assassinated in the third year of his term, John F. Kennedy was the youngest at the time of leaving office (46 years, 177 days); the youngest president to leave office at the conclusion of a normal transition was Theodore Roosevelt (50 years, 128 days). The oldest at the time of leaving office was Ronald Reagan (77 years, 349 days). Born on May 29, 1917, John F. Kennedy was younger than four of his successors, the greatest number to date: Lyndon Johnson (8 years, 9 months, and 2 days); Ronald Reagan (6 years, 3 months, and 23 days); Richard Nixon (4 years, 4 months, and 16 days); and Gerald Ford (3 years, 10 months, and 15 days). Born on February 6, 1911, Ronald Reagan was older than four of his predecessors, the greatest number to date: Richard Nixon (1 year, 11 months, and 7 days); Gerald Ford (2 years, 5 months, and 8 days); John F. Kennedy (6 years, 3 months, and 23 days); and Jimmy Carter (13 years, 7 months, and 25 days). Three presidents -- Donald Trump, George W. Bush, and Bill Clinton -- were born in 1946 (all within the span of 9 weeks). This is the only calendar year in which three presidents have been born. Two presidents -- James K. Polk and Warren G. Harding -- were born on November 2 (70 years apart). This is the only day of the year having the birthday of multiple presidents. The oldest living U.S. president is George H.W. Bush, born June 12, 1924 (age 94 years, 76 days). On November 25, 2017, he also became the longest - lived president, surpassing the lifespan of Gerald Ford, who died at the age of 93 years, 165 days. The second oldest living president, Jimmy Carter, has the distinction of having the longest post-presidency in U.S. history, currently at 37 years, 219 days. He surpassed the previous record, held by Herbert Hoover (31 years, 230 days), on September 7, 2012.
The youngest living president is Barack Obama, born August 4, 1961 (age 57 years, 23 days). The shortest - lived president to have died by natural causes (thereby excluding John F. Kennedy and James A. Garfield, who were both assassinated) was James K. Polk, who died of cholera at the age of 53 years, 225 days, only 103 days after leaving office. Six U.S. presidents have lived into their 90s, with two of them currently alive. They are (in order of birth): John Adams (aged 90 years, 247 days); Herbert Hoover (aged 90 years, 71 days); Ronald Reagan (aged 93 years, 120 days); Gerald Ford (aged 93 years, 165 days); George H.W. Bush (age 94 years, 76 days); and Jimmy Carter (age 93 years, 330 days). Of the 44 persons who have served as president, 24 have become the oldest such individual of their time. Herbert Hoover held this distinction for the longest period of any, from the death of Calvin Coolidge in January 1933 until his own death 31 years later. Lyndon B. Johnson held it for the shortest, from the death of Harry S. Truman in December 1972 until his own death only 27 days later. On three occasions the oldest living president lost this distinction not by his death, but by the inauguration of a president who was older. Theodore Roosevelt (born 1858) lost this distinction when William Taft (born 1857) was inaugurated, then four years later Taft lost it when Woodrow Wilson (born 1856) was inaugurated. More recently, Richard Nixon (born 1913) ceased being the oldest living president when Ronald Reagan (born 1911) was inaugurated. Furthermore, although Theodore Roosevelt was the youngest ever to become both the oldest living president (at age 49) and a former president (at age 50), he was the only living president or former president by the end of his term. Consequently, Taft was the oldest living president twice: first during his presidency (having succeeded the younger Roosevelt), and a second time after Wilson (his successor as president but an older man) died. Gerald Ford was the oldest individual to acquire this distinction, at the age of 90. On seven occasions, the oldest living president has acquired this distinction during his term in office. In the cases of John Adams, Ulysses S. Grant, Theodore Roosevelt, Herbert Hoover and Richard Nixon, this happened at the same time as their becoming the only living president; in the cases of Andrew Jackson and Benjamin Harrison, the only other living president at the time was a younger predecessor (John Quincy Adams and Grover Cleveland respectively). By contrast, the president who acquired this distinction furthest from his time in office was Gerald Ford, who had been retired for 27 years, 137 days.
the financial crisis in the u.s. in 2009 was triggered by
Financial crisis of 2007 -- 2008 - wikipedia The financial crisis of 2007 -- 2008, also known as the global financial crisis and the 2008 financial crisis, is considered by many economists to have been the worst financial crisis since the Great Depression of the 1930s. It began in 2007 with a crisis in the subprime mortgage market in the United States, and developed into a full - blown international banking crisis with the collapse of the investment bank Lehman Brothers on September 15, 2008. Excessive risk - taking by banks such as Lehman Brothers helped to magnify the financial impact globally. Massive bail - outs of financial institutions and other palliative monetary and fiscal policies were employed to prevent a possible collapse of the world financial system. The crisis was nonetheless followed by a global economic downturn, the Great Recession. The European debt crisis, a crisis in the banking system of the European countries using the euro, followed later. The Dodd -- Frank Wall Street Reform and Consumer Protection Act of 2010 was enacted in the US in the aftermath of the crisis to "promote the financial stability of the United States ''. The Basel III capital and liquidity standards were adopted by countries around the world. The precipitating factor for the financial crisis of 2007 -- 2008 was a high default rate in the United States subprime home mortgage sector -- the bursting of the "subprime bubble ''. While the causes of the bubble are disputed, some or all of the following factors are thought to have contributed. The accumulation and subsequent high default rate of these subprime mortgages led to the financial crisis and the consequent damage to the world economy. High mortgage approval rates led to a large pool of homebuyers, which drove up housing prices. This appreciation in value led large numbers of homeowners (subprime or not) to borrow against their homes as an apparent windfall. This "bubble '' would be burst by a rising single - family residential mortgage delinquency rate, beginning in August 2006 and peaking in the first quarter of 2010. The high delinquency rates led to a rapid devaluation of financial instruments (mortgage - backed securities including bundled loan portfolios, derivatives and credit default swaps). As the value of these assets plummeted, the market (buyers) for these securities evaporated and banks that were heavily invested in these assets began to experience a liquidity crisis. Freddie Mac and Fannie Mae were taken over by the federal government on September 7, 2008. Lehman Brothers filed for bankruptcy on September 15, 2008. Merrill Lynch, AIG, HBOS, Royal Bank of Scotland, Bradford & Bingley, Fortis, Hypo Real Estate, and Alliance & Leicester were all expected to follow -- with a US federal bailout announced the following day, beginning with $85 billion to AIG. In spite of the trillions paid out by the US federal government, it became much more difficult to borrow money. The resulting decrease in buyers caused housing prices to plummet. While the collapse of large financial institutions was prevented by the bailout of banks by national governments, stock markets still dropped worldwide. In many areas, the housing market also suffered, resulting in evictions, foreclosures, and prolonged unemployment.
The crisis played a significant role in the failure of key businesses, declines in consumer wealth estimated in trillions of US dollars, and a downturn in economic activity leading to the Great Recession of 2008 -- 2012 and contributing to the European sovereign - debt crisis. The active phase of the crisis, which manifested as a liquidity crisis, can be dated from August 9, 2007, when the French bank BNP Paribas terminated withdrawals from three hedge funds citing "a complete evaporation of liquidity ''. The bursting of the US housing bubble, which peaked at the end of 2006, caused the values of securities tied to US real estate pricing to plummet, damaging financial institutions globally. The financial crisis was triggered by a complex interplay of policies that encouraged home ownership, providing easier access to loans for subprime borrowers; overvaluation of bundled subprime mortgages based on the theory that housing prices would continue to escalate; questionable trading practices on behalf of both buyers and sellers; compensation structures that prioritized short - term deal flow over long - term value creation; and a lack of adequate capital holdings from banks and insurance companies to back the financial commitments they were making. Questions regarding bank solvency, declines in credit availability, and damaged investor confidence affected global stock markets, where securities suffered large losses during 2008 and early 2009. Economies worldwide slowed during this period, as credit tightened and international trade declined. Governments and central banks responded with unprecedented fiscal stimulus, monetary policy expansion and institutional bailouts. In the US, Congress passed the American Recovery and Reinvestment Act of 2009. Many causes for the financial crisis have been suggested, with varying weight assigned by experts. The immediate cause or trigger of the crisis was the bursting of the US housing bubble, which peaked in 2006 / 2007. Already - rising default rates on "subprime '' and adjustable - rate mortgages (ARM) began to increase quickly thereafter. Easy availability of credit in the US, fueled by large inflows of foreign funds after the Russian debt crisis and Asian financial crisis of the 1997 -- 1998 period, led to a housing construction boom and facilitated debt - financed consumer spending. As banks began to give out more loans to potential home owners, housing prices began to rise. Lax lending standards and rising real estate prices also contributed to the real estate bubble. Loans of various types (e.g., mortgage, credit card, and auto) were easy to obtain and consumers assumed an unprecedented debt load. As part of the housing and credit booms, the number of financial agreements called mortgage - backed securities (MBS) and collateralized debt obligations (CDO), which derived their value from mortgage payments and housing prices, greatly increased. Such financial innovation enabled institutions and investors around the world to invest in the US housing market. As housing prices declined, major global financial institutions that had borrowed and invested heavily in subprime MBS reported significant losses. Falling prices also resulted in homes worth less than the mortgage loan, leaving borrowers with a financial incentive to default and enter foreclosure. The ongoing foreclosure epidemic that began in late 2006 in the US did not subside to historical levels until early 2014, and it drained significant wealth from consumers -- up to $4.2 trillion in wealth from home equity.
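The "underwater '' mechanic mentioned above reduces to one subtraction: once the outstanding loan balance exceeds the home 's market value, equity is negative and default can cost the borrower less than continuing to pay. A hypothetical illustration -- all figures are invented for the example; only the 20 % price decline echoes the peak - to - 2008 drop cited later in the article:

```python
# Hypothetical figures, chosen only to illustrate negative equity.
purchase_price = 300_000                             # home bought near the peak
loan_balance = 285_000                               # 95 % financed (5 % down)
price_decline = 0.20                                 # a 20 % fall from the purchase price

market_value = purchase_price * (1 - price_decline)  # 240,000.0
equity = market_value - loan_balance                 # -45,000.0: the home is "underwater"
print(f"equity = {equity:,.0f}")                     # equity = -45,000
```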
Defaults and losses on other loan types also increased significantly as the crisis expanded from the housing market to other parts of the economy. Total losses are estimated in the trillions of US dollars globally. While the housing and credit bubbles were building, a series of factors caused the financial system to both expand and become increasingly fragile, a process called financialization. US government policy from the 1970s onward has emphasized deregulation to encourage business, which resulted in less oversight of activities and less disclosure of information about new activities undertaken by banks and other evolving financial institutions. Thus, policymakers did not immediately recognize the increasingly important role played by financial institutions such as investment banks and hedge funds, also known as the shadow banking system. Some experts believe these institutions had become as important as commercial (depository) banks in providing credit to the US economy, but they were not subject to the same regulations. These institutions, as well as certain regulated banks, had also assumed significant debt burdens while providing the loans described above and did not have a financial cushion sufficient to absorb large loan defaults or MBS losses. These losses affected the ability of financial institutions to lend, slowing economic activity. Concerns regarding the stability of key financial institutions drove central banks to provide funds to encourage lending and restore faith in the commercial paper markets, which are integral to funding business operations. Governments also bailed out key financial institutions and implemented economic stimulus programs, assuming significant additional financial commitments. The US Financial Crisis Inquiry Commission reported its findings in January 2011. It concluded that: ... the crisis was avoidable and was caused by: The 2000s were the decade of subprime borrowers; no longer was this a segment left to fringe lenders. The relaxing of credit lending standards by investment banks and commercial banks drove this about - face. Subprime did not become magically less risky; Wall Street just accepted this higher risk. During a period of tough competition between mortgage lenders for revenue and market share, and when the supply of creditworthy borrowers was limited, mortgage lenders relaxed underwriting standards and originated riskier mortgages to less creditworthy borrowers. In the view of some analysts, the relatively conservative government - sponsored enterprises (GSEs) policed mortgage originators and maintained relatively high underwriting standards prior to 2003. However, as market power shifted from securitizers to originators and as intense competition from private securitizers undermined GSE power, mortgage standards declined and risky loans proliferated. The worst loans were originated in 2004 -- 2007, the years of the most intense competition between securitizers and the lowest market share for the GSEs. As well as easy credit conditions, there is evidence that competitive pressures contributed to an increase in the amount of subprime lending during the years preceding the crisis. Major US investment banks and GSEs such as Fannie Mae played an important role in the expansion of lending, with GSEs eventually relaxing their standards to try to catch up with the private banks. 
A contrarian view is that Fannie Mae and Freddie Mac led the way to relaxed underwriting standards, starting in 1995, by advocating the use of easy - to - qualify automated underwriting and appraisal systems, by designing the no - down - payment products issued by lenders, by the promotion of thousands of small mortgage brokers, and by their close relationship to subprime loan aggregators such as Countrywide. Depending on how "subprime '' mortgages are defined, they remained below 10 % of all mortgage originations until 2004, when they rose to nearly 20 % and remained there through the 2005 -- 2006 peak of the United States housing bubble. The majority report of the Financial Crisis Inquiry Commission, written by the six Democratic appointees, the minority report, written by three of the four Republican appointees, studies by Federal Reserve economists, and the work of several independent scholars generally contend that government affordable housing policy was not the primary cause of the financial crisis. Although they concede that governmental policies had some role in causing the crisis, they contend that GSE loans performed better than loans securitized by private investment banks, and performed better than some loans originated by institutions that held loans in their own portfolios. In his dissent to the majority report of the Financial Crisis Inquiry Commission, American Enterprise Institute fellow Peter J. Wallison stated his belief that the roots of the financial crisis can be traced directly and primarily to affordable housing policies initiated by the US Department of Housing and Urban Development (HUD) in the 1990s and to massive risky loan purchases by government - sponsored entities Fannie Mae and Freddie Mac. Later, based upon information in the SEC 's December 2011 securities fraud case against six former executives of Fannie and Freddie, Peter Wallison and Edward Pinto estimated that, in 2008, Fannie and Freddie held 13 million substandard loans totaling over $2 trillion. In the early and mid-2000s, the Bush administration called numerous times for investigations into the safety and soundness of the GSEs and their swelling portfolio of subprime mortgages. On September 10, 2003, the House Financial Services Committee held a hearing at the urging of the administration to assess safety and soundness issues and to review a recent report by the Office of Federal Housing Enterprise Oversight (OFHEO) that had uncovered accounting discrepancies within the two entities. The hearings never resulted in new legislation or formal investigation of Fannie Mae and Freddie Mac, as many of the committee members refused to accept the report and instead rebuked OFHEO for its attempt at regulation. Some believe this was an early warning, which went unheeded, of the systemic risk that the growing market in subprime mortgages posed to the US financial system. A 2000 United States Department of the Treasury study of lending trends for 305 cities from 1993 to 1998 showed that $467 billion of mortgage lending was made by Community Reinvestment Act (CRA) - covered lenders to low - and mid - level income (LMI) borrowers and neighborhoods, representing 10 % of all US mortgage lending during the period. The majority of these were prime loans.
Sub-prime loans made by CRA - covered institutions constituted a 3 % market share of LMI loans in 1998, but in the run - up to the crisis, fully 25 % of all sub-prime lending occurred at CRA - covered institutions and another 25 % of sub-prime loans had some connection with CRA. Furthermore, most sub-prime loans were not made to the LMI borrowers targeted by the CRA, especially in the years 2005 -- 2006 leading up to the crisis, nor did these analyses find any evidence that lending under the CRA rules increased delinquency rates or that the CRA indirectly influenced independent mortgage lenders to ramp up sub-prime lending. To other analysts the delay between CRA rule changes (in 1995) and the explosion of subprime lending is not surprising, and does not exonerate the CRA. They contend that there were two connected causes of the crisis: the relaxation of underwriting standards in 1995 and the ultra-low interest rates initiated by the Federal Reserve after the terrorist attacks of September 11, 2001. Both causes had to be in place before the crisis could take place. Critics also point out that publicly announced CRA loan commitments were massive, totaling $4.5 trillion in the years between 1994 and 2007. They also argue that the Federal Reserve 's classification of CRA loans as "prime '' is based on the faulty and self - serving assumption that high - interest - rate loans (3 percentage points over average) equal "subprime '' loans. Others have pointed out that there were not enough of these loans made to cause a crisis of this magnitude. In an article in Portfolio Magazine, Michael Lewis spoke with one trader who noted that "There were n't enough Americans with (bad) credit taking out (bad loans) to satisfy investors ' appetite for the end product. '' Essentially, investment banks and hedge funds used financial innovation to enable large wagers to be made, far beyond the actual value of the underlying mortgage loans, using derivatives called credit default swaps, collateralized debt obligations and synthetic CDOs. As of March 2011, the FDIC had paid out $9 billion to cover losses on bad loans at 165 failed financial institutions. The Congressional Budget Office estimated, in June 2011, that the bailout to Fannie Mae and Freddie Mac exceeded $300 billion (calculated by adding the fair value deficits of the entities to the direct bailout funds at the time). Economist Paul Krugman argued in January 2010 that the simultaneous growth of the residential and commercial real estate pricing bubbles and the global nature of the crisis undermines the case made by those who argue that Fannie Mae, Freddie Mac, CRA, or predatory lending were primary causes of the crisis. In other words, bubbles in both markets developed even though only the residential market was affected by these potential causes. Countering Krugman, Peter J. Wallison wrote: "It is not true that every bubble -- even a large bubble -- has the potential to cause a financial crisis when it deflates. '' Wallison notes that other developed countries had "large bubbles during the 1997 -- 2007 period '' but "the losses associated with mortgage delinquencies and defaults when these bubbles deflated were far lower than the losses suffered in the United States when the 1997 -- 2007 (bubble) deflated. '' According to Wallison, the reason the US residential housing bubble (as opposed to other types of bubbles) led to financial crisis was that it was supported by a huge number of substandard loans -- generally with low or no downpayments.
Krugman 's contention (that the growth of a commercial real estate bubble indicates that US housing policy was not the cause of the crisis) is challenged by additional analysis. After researching the default of commercial loans during the financial crisis, Xudong An and Anthony B. Sanders reported (in December 2010): "We find limited evidence that substantial deterioration in CMBS (commercial mortgage - backed securities) loan underwriting occurred prior to the crisis. '' Other analysts support the contention that the crisis in commercial real estate and related lending took place after the crisis in residential real estate. Business journalist Kimberly Amadeo reported: "The first signs of decline in residential real estate occurred in 2006. Three years later, commercial real estate started feeling the effects. '' Denice A. Gierach, a real estate attorney and CPA, wrote: "... most of the commercial real estate loans were good loans destroyed by a really bad economy. In other words, the borrowers did not cause the loans to go bad, it was the economy. '' Between 1998 and 2006, the price of the typical American house increased by 124 %. During the two decades ending in 2001, the national median home price ranged from 2.9 to 3.1 times median household income. This ratio rose to 4.0 in 2004, and 4.6 in 2006. This housing bubble resulted in many homeowners refinancing their homes at lower interest rates, or financing consumer spending by taking out second mortgages secured by the price appreciation. In a Peabody Award - winning program, NPR correspondents argued that a "Giant Pool of Money '' (represented by $70 trillion in worldwide fixed income investments) sought higher yields than those offered by US Treasury bonds early in the decade. This pool of money had roughly doubled in size from 2000 to 2007, yet the supply of relatively safe, income - generating investments had not grown as fast. Investment banks on Wall Street answered this demand with products such as the mortgage - backed security and the collateralized debt obligation that were assigned safe ratings by the credit rating agencies. In effect, Wall Street connected this pool of money to the mortgage market in the US, with enormous fees accruing to those throughout the mortgage supply chain, from the mortgage broker selling the loans to small banks that funded the brokers and the large investment banks behind them. By approximately 2003, the supply of mortgages originated at traditional lending standards had been exhausted, and continued strong demand began to drive down lending standards. The collateralized debt obligation in particular enabled financial institutions to obtain investor funds to finance subprime and other lending, extending or increasing the housing bubble and generating large fees. A CDO essentially places cash payments from multiple mortgages or other debt obligations into a single pool from which specific securities draw in a specific sequence of priority. Those securities first in line received investment - grade ratings from rating agencies. Securities with lower priority had lower credit ratings but theoretically a higher rate of return on the amount invested. By September 2008, average US housing prices had declined by over 20 % from their mid-2006 peak. As prices declined, borrowers with adjustable - rate mortgages could not refinance to avoid the higher payments associated with rising interest rates and began to default.
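The payment squeeze on adjustable - rate borrowers described above is the standard amortization formula evaluated at two different rates. A sketch with invented loan terms, intended only to show the scale of a reset:

```python
def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    """Standard fixed-payment amortization formula: P * r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12          # monthly rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical $300,000 ARM: a 4 % introductory rate resetting to 7 %.
before = monthly_payment(300_000, 0.04)   # about $1,432 per month
after = monthly_payment(300_000, 0.07)    # about $1,996 per month
print(f"reset raises the payment by about {after / before - 1:.0%}")  # about 39 %
```

On these assumed terms the reset adds roughly 39 % to the monthly payment -- the kind of jump that made refinancing the only escape, and falling prices closed that exit.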
During 2007, lenders began foreclosure proceedings on nearly 1.3 million properties, a 79 % increase over 2006. This increased to 2.3 million in 2008, an 81 % increase vs. 2007. By August 2008, 9.2 % of all US mortgages outstanding were either delinquent or in foreclosure. By September 2009, this had risen to 14.4 %. Lower interest rates encouraged borrowing. From 2000 to 2003, the Federal Reserve lowered the federal funds rate target from 6.5 % to 1.0 %. This was done to soften the effects of the collapse of the dot - com bubble and the September 2001 terrorist attacks, as well as to combat a perceived risk of deflation. As early as 2002 it was apparent that credit was fueling housing instead of business investment as some economists went so far as to advocate that the Fed "needs to create a housing bubble to replace the Nasdaq bubble ''. Moreover, empirical studies using data from advanced countries show that excessive credit growth contributed greatly to the severity of the crisis. Additional downward pressure on interest rates was created by the high and rising US current account deficit, which peaked along with the housing bubble in 2006. Federal Reserve chairman Ben Bernanke explained how trade deficits required the US to borrow money from abroad, in the process bidding up bond prices and lowering interest rates. Bernanke explained that between 1996 and 2004, the US current account deficit increased by $650 billion, from 1.5 % to 5.8 % of GDP. Financing these deficits required the country to borrow large sums from abroad, much of it from countries running trade surpluses. These were mainly the emerging economies in Asia and oil - exporting nations. The balance of payments identity requires that a country (such as the US) running a current account deficit also have a capital account (investment) surplus of the same amount. Hence large and growing amounts of foreign funds (capital) flowed into the US to finance its imports. All of this created demand for various types of financial assets, raising the prices of those assets while lowering interest rates. Foreign investors had these funds to lend either because they had very high personal savings rates (as high as 40 % in China) or because of high oil prices. Ben Bernanke has referred to this as a "saving glut ''. A flood of funds (capital or liquidity) reached the US financial markets. Foreign governments supplied funds by purchasing Treasury bonds and thus avoided much of the direct effect of the crisis. US households, on the other hand, used funds borrowed from foreigners to finance consumption or to bid up the prices of housing and financial assets. Financial institutions invested foreign funds in mortgage - backed securities. The Fed then raised the Fed funds rate significantly between July 2004 and July 2006. This contributed to an increase in 1 - year and 5 - year adjustable - rate mortgage (ARM) rates, making ARM interest rate resets more expensive for homeowners. This may have also contributed to the deflating of the housing bubble, as asset prices generally move inversely to interest rates, and it became riskier to speculate in housing. US housing and financial assets dramatically declined in value after the housing bubble burst. Subprime lending standards declined in the USA: in early 2000, a subprime borrower had a FICO score of 660 or less. By 2005, many lenders dropped the required FICO score to 620, making it much easier to qualify for prime loans and making subprime lending a riskier business. 
Proof of income and assets was de-emphasized. Loans moved from full documentation to low documentation to no documentation. One subprime mortgage product that gained wide acceptance was the no income, no job, no asset verification required (NINJA) mortgage. Informally, these loans were aptly referred to as "liar loans '' because they encouraged borrowers to be less than honest in the loan application process. Testimony given to the Financial Crisis Inquiry Commission by Richard M. Bowen III on events during his tenure as the Business Chief Underwriter for Correspondent Lending in the Consumer Lending Group for Citigroup (where he was responsible for over 220 professional underwriters) suggests that by the final years of the US housing bubble (2006 -- 2007), the collapse of mortgage underwriting standards was endemic. His testimony stated that by 2006, 60 % of mortgages purchased by Citi from some 1,600 mortgage companies were "defective '' (were not underwritten to policy, or did not contain all policy - required documents) -- this, despite the fact that each of these 1,600 originators was contractually responsible (certified via representations and warranties) that its mortgage originations met Citi 's standards. Moreover, during 2007, "defective mortgages (from mortgage originators contractually bound to perform underwriting to Citi 's standards) increased... to over 80 % of production ''. In separate testimony to the Financial Crisis Inquiry Commission, officers of Clayton Holdings -- the largest residential loan due diligence and securitization surveillance company in the United States and Europe -- testified that Clayton 's review of over 900,000 mortgages issued from January 2006 to June 2007 revealed that only 54 % of the loans met their originators ' underwriting standards. The analysis (conducted on behalf of 23 investment and commercial banks, including 7 "too big to fail '' banks) additionally showed that 28 % of the sampled loans did not meet the minimal standards of any issuer. Clayton 's analysis further showed that 39 % of these loans (i.e. those not meeting any issuer 's minimal underwriting standards) were subsequently securitized and sold to investors. There is strong evidence that the GSEs -- due to their large size and market power -- were far more effective at policing underwriting by originators and forcing underwriters to repurchase defective loans. By contrast, private securitizers have been far less aggressive and less effective in recovering losses from originators on behalf of investors. Predatory lending refers to the practice of unscrupulous lenders enticing borrowers to enter into "unsafe '' or "unsound '' secured loans for inappropriate purposes. A classic bait - and - switch method was used by Countrywide Financial, advertising low interest rates for home refinancing. Such loans were covered by very detailed contracts, and swapped for more expensive loan products on the day of closing. Whereas the advertisement might state that 1 % or 1.5 % interest would be charged, the consumer would be put into an adjustable rate mortgage (ARM) in which the interest charged would be greater than the mortgage payments, creating negative amortization which the credit consumer might not notice until long after the loan transaction had been consummated.
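Negative amortization, as just described, arises whenever the scheduled payment is smaller than the interest accruing in the period, so the shortfall is added back to the balance. A toy illustration with invented figures -- a payment set as if the advertised 1.5 % teaser rate applied, against a 7.5 % rate actually charged:

```python
balance = 250_000        # hypothetical loan balance at closing
teaser_payment = 312.50  # payment computed as if the advertised 1.5 % rate applied
actual_rate = 0.075      # the rate actually charged

for month in range(12):
    interest = balance * actual_rate / 12     # interest accrued this month
    balance += interest - teaser_payment      # unpaid interest is capitalized
print(f"balance after one year: {balance:,.0f}")   # about 265,500 -- higher than at closing
```

The borrower pays every month on schedule and still owes more after a year than on the day of closing, which is why the text says the effect could go unnoticed until long after the transaction.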
Countrywide, sued by California Attorney General Jerry Brown for "unfair business practices '' and "false advertising '', was making high cost mortgages "to homeowners with weak credit, adjustable rate mortgages (ARMs) that allowed homeowners to make interest - only payments ''. When housing prices decreased, homeowners in ARMs then had little incentive to pay their monthly payments, since their home equity had disappeared. This caused Countrywide 's financial condition to deteriorate, ultimately resulting in a decision by the Office of Thrift Supervision to seize the lender. One Countrywide employee -- who would later plead guilty to two counts of wire fraud and spend 18 months in prison -- stated that, "If you had a pulse, we gave you a loan. '' Former employees from Ameriquest, which was the United States ' leading wholesale lender, described a system in which they were pushed to falsify mortgage documents and then sell the mortgages to Wall Street banks eager to make fast profits. There is growing evidence that such mortgage frauds may have been a cause of the crisis. A 2012 OECD study suggests that bank regulation based on the Basel accords encouraged unconventional business practices and contributed to or even reinforced the financial crisis. In other cases, laws were changed or enforcement weakened in parts of the financial system. Key examples include: Prior to the crisis, financial institutions became highly leveraged, increasing their appetite for risky investments and reducing their resilience in case of losses. Much of this leverage was achieved using complex financial instruments such as off - balance sheet securitization and derivatives, which made it difficult for creditors and regulators to monitor and try to reduce financial institution risk levels. These instruments also made it virtually impossible to reorganize financial institutions in bankruptcy, and contributed to the need for government bailouts. US households and financial institutions became increasingly indebted or overleveraged during the years preceding the crisis. This increased their vulnerability to the collapse of the housing bubble and worsened the ensuing economic downturn. Key statistics include: Free cash used by consumers from home equity extraction doubled from $627 billion in 2001 to $1,428 billion in 2005 as the housing bubble built, a total of nearly $5 trillion over the period, contributing to economic growth worldwide. US home mortgage debt relative to GDP increased from an average of 46 % during the 1990s to 73 % during 2008, reaching $10.5 trillion. US household debt as a percentage of annual disposable personal income was 127 % at the end of 2007, versus 77 % in 1990. In 1981, US private debt was 123 % of GDP; by the third quarter of 2008, it was 290 %. From 2004 to 2007, the top five US investment banks each significantly increased their financial leverage, which increased their vulnerability to a financial shock. Changes in capital requirements, intended to keep US banks competitive with their European counterparts, allowed lower risk weightings for AAA securities. The shift from first - loss tranches to AAA tranches was seen by regulators as a risk reduction that compensated for the higher leverage. These five institutions reported over $4.1 trillion in debt for fiscal year 2007, about 30 % of US nominal GDP for 2007.
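The fragility created by the leverage figures above can be shown with one line of balance - sheet arithmetic: at roughly 30 - to - 1 leverage, a fall in asset values of little more than 3 % consumes all of a firm 's equity. A hypothetical sketch, not drawn from any bank 's actual accounts:

```python
assets = 30.0                 # hypothetical balance sheet, in arbitrary units
equity = 1.0                  # 30-to-1 leverage: assets / equity == 30
debt = assets - equity        # 29.0 owed to creditors

asset_decline = 0.04          # a 4 % mark-down of the asset portfolio
assets *= 1 - asset_decline
equity = assets - debt        # equity absorbs the entire loss first
print(equity)                 # -0.2: the firm is insolvent on a 4 % decline
```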
Lehman Brothers went bankrupt and was liquidated, Bear Stearns and Merrill Lynch were sold at fire - sale prices, and Goldman Sachs and Morgan Stanley became commercial banks, subjecting themselves to more stringent regulation. With the exception of Lehman, these companies required or received government support. Lehman reported that it had been in talks with Bank of America and Barclays for the company 's possible sale. However, both Barclays and Bank of America ultimately declined to purchase the entire company. Fannie Mae and Freddie Mac, two US government - sponsored enterprises, owned or guaranteed nearly $5 trillion in mortgage obligations at the time they were placed into conservatorship by the US government in September 2008. These seven entities were highly leveraged and had $9 trillion in debt or guarantee obligations; yet they were not subject to the same regulation as depository banks. Behavior that may be optimal for an individual (e.g., saving more during adverse economic conditions) can be detrimental if too many individuals pursue the same behavior, as ultimately one person 's consumption is another person 's income. Too many consumers attempting to save (or pay down debt) simultaneously is called the paradox of thrift and can cause or deepen a recession. Economist Hyman Minsky also described a "paradox of deleveraging '', as financial institutions that have too much leverage (debt relative to equity) cannot all de-leverage simultaneously without significant declines in the value of their assets. In April 2009, Janet Yellen, then President of the Federal Reserve Bank of San Francisco and later Federal Reserve vice-chair, discussed these paradoxes: Once this massive credit crunch hit, it did n't take long before we were in a recession. The recession, in turn, deepened the credit crunch as demand and employment fell, and credit losses of financial institutions surged. Indeed, we have been in the grips of precisely this adverse feedback loop for more than a year. A process of balance sheet deleveraging has spread to nearly every corner of the economy. Consumers are pulling back on purchases, especially on durable goods, to build their savings. Businesses are cancelling planned investments and laying off workers to preserve cash. And, financial institutions are shrinking assets to bolster capital and improve their chances of weathering the current storm. Once again, Minsky understood this dynamic. He spoke of the paradox of deleveraging, in which precautions that may be smart for individuals and firms -- and indeed essential to return the economy to a normal state -- nevertheless magnify the distress of the economy as a whole. The term financial innovation refers to the ongoing development of financial products designed to achieve particular client objectives, such as offsetting a particular risk exposure (such as the default of a borrower) or assisting with obtaining financing. Examples pertinent to this crisis included: the adjustable - rate mortgage; the bundling of subprime mortgages into mortgage - backed securities (MBS) or collateralized debt obligations (CDO) for sale to investors, a type of securitization; and a form of credit insurance called credit default swaps (CDS). The usage of these products expanded dramatically in the years leading up to the crisis. These products vary in complexity and the ease with which they can be valued on the books of financial institutions. CDO issuance grew from an estimated $20 billion in Q1 2004 to its peak of over $180 billion by Q1 2007, then declined back under $20 billion by Q1 2008.
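The pooling - and - priority structure behind the CDO, described earlier, is often pictured as a "waterfall '': collections are paid to the senior tranche first, so losses are felt from the bottom up. A deliberately simplified sketch -- the tranche names and sizes are invented for illustration:

```python
def distribute(collections: float, tranches: list[tuple[str, float]]) -> dict[str, float]:
    """Pay each tranche in order of seniority until the cash pool is exhausted."""
    paid = {}
    for name, owed in tranches:           # tranches listed senior -> junior
        payment = min(owed, collections)
        paid[name] = payment
        collections -= payment
    return paid

tranches = [("senior (AAA)", 80.0), ("mezzanine", 15.0), ("equity", 5.0)]
print(distribute(100.0, tranches))  # full collections: every tranche is paid
print(distribute(85.0, tranches))   # 15 % shortfall: equity wiped out, mezzanine impaired
```

With full collections every tranche is paid; a 15 % shortfall wipes out the equity tranche and impairs the mezzanine while the senior tranche is still paid in full, which is why the first - in - line securities could carry investment - grade ratings.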
Further, the credit quality of CDOs declined from 2000 to 2007, as the level of subprime and other non-prime mortgage debt increased from 5 % to 36 % of CDO assets. As described in the section on subprime lending, the CDS and portfolios of CDS called synthetic CDOs enabled a theoretically infinite amount to be wagered on the finite value of housing loans outstanding, provided that buyers and sellers of the derivatives could be found. For example, buying a CDS to insure a CDO ended up giving the seller the same risk as if they owned the CDO, when those CDOs became worthless. This boom in innovative financial products went hand in hand with more complexity. It multiplied the number of actors connected to a single mortgage (including mortgage brokers, specialized originators, the securitizers and their due diligence firms, managing agents and trading desks, and finally investors, insurers and providers of repo funding). With increasing distance from the underlying asset, these actors relied more and more on indirect information (including FICO scores on creditworthiness, appraisals and due diligence checks by third party organizations, and most importantly the computer models of rating agencies and risk management desks). Instead of spreading risk, this provided the ground for fraudulent acts, misjudgments and finally market collapse. Martin Wolf further wrote in June 2009 that certain financial innovations enabled firms to circumvent regulations, such as off - balance sheet financing that affects the leverage or capital cushion reported by major banks, stating: "... an enormous part of what banks did in the early part of this decade -- the off - balance - sheet vehicles, the derivatives and the ' shadow banking system ' itself -- was to find a way round regulation. '' The pricing of risk refers to the incremental compensation required by investors for taking on additional risk, which may be measured by interest rates or fees. Several scholars have argued that a lack of transparency about banks ' risk exposures prevented markets from correctly pricing risk before the crisis, enabled the mortgage market to grow larger than it otherwise would have, and made the financial crisis far more disruptive than it would have been if risk levels had been disclosed in a straightforward, readily understandable format. For a variety of reasons, market participants did not accurately measure the risk inherent in financial innovation such as MBS and CDOs or understand its effect on the overall stability of the financial system. For example, the pricing model for CDOs clearly did not reflect the level of risk they introduced into the system. Banks estimated that $450 billion of CDOs were sold between "late 2005 to the middle of 2007 ''; among the $102 billion of those that had been liquidated, JPMorgan estimated that the average recovery rate for "high quality '' CDOs was approximately 32 cents on the dollar, while the recovery rate for mezzanine CDOs was approximately five cents for every dollar. Another example relates to AIG, which insured obligations of various financial institutions through the usage of credit default swaps. The basic CDS transaction involved AIG receiving a premium in exchange for a promise to pay money to party A in the event party B defaulted. However, AIG did not have the financial strength to support its many CDS commitments as the crisis progressed and was taken over by the government in September 2008.
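The basic CDS transaction just described reduces to a small exchange of cash flows: the protection seller collects a periodic premium and owes the buyer the loss on the reference obligation if it defaults. A stylized sketch -- the notional, premium and recovery figures below are invented, not AIG 's actual terms:

```python
notional = 10_000_000      # hypothetical protection on a $10M reference obligation
annual_premium = 0.02      # 200 basis points per year, paid by the protection buyer
recovery_rate = 0.40       # assumed fraction of face value recovered after default

def seller_cash_flow(years_of_premiums: int, default: bool) -> float:
    """Net cash to the protection seller (AIG's side of the trade in the text)."""
    premiums = notional * annual_premium * years_of_premiums
    payout = notional * (1 - recovery_rate) if default else 0.0
    return premiums - payout

print(seller_cash_flow(5, default=False))  #  1,000,000: steady income in quiet years
print(seller_cash_flow(5, default=True))   # -5,000,000: one default swamps years of premiums
```

The asymmetry is the point: small, steady premium income against a rare but very large contingent payout, which is why uncollateralized sales of protection on this scale proved unsupportable.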
US taxpayers provided over $180 billion in government support to AIG during 2008 and early 2009, through which the money flowed to various counterparties to CDS transactions, including many large global financial institutions. The Financial Crisis Inquiry Commission (FCIC) conducted the major government study of the crisis. It concluded in January 2011: The Commission concludes AIG failed and was rescued by the government primarily because its enormous sales of credit default swaps were made without putting up the initial collateral, setting aside capital reserves, or hedging its exposure -- a profound failure in corporate governance, particularly its risk management practices. AIG 's failure was possible because of the sweeping deregulation of over-the - counter (OTC) derivatives, including credit default swaps, which effectively eliminated federal and state regulation of these products, including capital and margin requirements that would have lessened the likelihood of AIG 's failure. The limitations of a widely used financial model -- David X. Li 's Gaussian copula formula, named in the quotation below -- were also not properly understood. This formula assumed that the price of CDS was correlated with and could predict the correct price of mortgage - backed securities. Because it was highly tractable, it rapidly came to be used by a huge percentage of CDO and CDS investors, issuers, and rating agencies. According to one wired.com article: Then the model fell apart. Cracks started appearing early on, when financial markets began behaving in ways that users of Li 's formula had n't expected. The cracks became full - fledged canyons in 2008 -- when ruptures in the financial system 's foundation swallowed up trillions of dollars and put the survival of the global banking system in serious peril... Li 's Gaussian copula formula will go down in history as instrumental in causing the unfathomable losses that brought the world financial system to its knees. As financial assets became more complex and harder to value, investors were reassured by the fact that the international bond rating agencies and bank regulators accepted as valid some complex mathematical models that showed the risks were much smaller than they actually were. George Soros commented that "The super-boom got out of hand when the new products became so complicated that the authorities could no longer calculate the risks and started relying on the risk management methods of the banks themselves. Similarly, the rating agencies relied on the information provided by the originators of synthetic products. It was a shocking abdication of responsibility. '' Moreover, a conflict of interest between professional investment managers and their institutional clients, combined with a global glut in investment capital, led to bad investments by asset managers in over-priced credit assets. Professional investment managers generally are compensated based on the volume of client assets under management. There is, therefore, an incentive for asset managers to expand their assets under management in order to maximize their compensation. As the glut in global investment capital caused the yields on credit assets to decline, asset managers were faced with the choice of either investing in assets where returns did not reflect true credit risk or returning funds to clients. Many asset managers continued to invest client funds in over-priced (under - yielding) investments, to the detriment of their clients, so they could maintain their assets under management.
They supported this choice with a "plausible deniability '' of the risks associated with subprime - based credit assets because the loss experience with early "vintages '' of subprime loans was so low. Despite the dominance of the above formula, there were documented attempts by the financial industry, before the crisis, to address the formula 's limitations, specifically the lack of dependence dynamics and the poor representation of extreme events. The volume "Credit Correlation: Life After Copulas '', published in 2007 by World Scientific, summarizes a 2006 conference held by Merrill Lynch in London where several practitioners attempted to propose models rectifying some of the copula limitations. See also the article by Donnelly and Embrechts and the book by Brigo, Pallavicini and Torresetti, which report relevant warnings and research on CDOs that appeared in 2006. Mortgage risks were underestimated by almost all institutions in the chain from originator to investor, which underweighted the possibility of falling housing prices based on the historical trends of the past 50 years. Limitations of default and prepayment models, the heart of pricing models, led to overvaluation of mortgage and asset - backed products and their derivatives by originators, securitizers, broker - dealers, rating agencies, insurance underwriters and the vast majority of investors with the exception of hedge funds. There is strong evidence that the riskiest, worst performing mortgages were funded through the "shadow banking system '' and that competition from the shadow banking system may have pressured more traditional institutions to lower their own underwriting standards and originate riskier loans. In a June 2008 speech, President and CEO of the New York Federal Reserve Bank Timothy Geithner -- who in 2009 became Secretary of the United States Treasury -- placed significant blame for the freezing of credit markets on a "run '' on the entities in the "parallel '' banking system, also called the shadow banking system. These entities became critical to the credit markets underpinning the financial system, but were not subject to the same regulatory controls. Further, these entities were vulnerable because of maturity mismatch, meaning that they borrowed short - term in liquid markets to purchase long - term, illiquid and risky assets. This meant that disruptions in credit markets would make them subject to rapid deleveraging, selling their long - term assets at depressed prices. He described the significance of these entities: In early 2007, asset - backed commercial paper conduits, in structured investment vehicles, in auction - rate preferred securities, tender option bonds and variable rate demand notes, had a combined asset size of roughly $2.2 trillion. Assets financed overnight in triparty repo grew to $2.5 trillion. Assets held in hedge funds grew to roughly $1.8 trillion. The combined balance sheets of the five largest investment banks totaled $4 trillion. In comparison, the total assets of the top five bank holding companies in the United States at that point were just over $6 trillion, and total assets of the entire banking system were about $10 trillion. The combined effect of these factors was a financial system vulnerable to self - reinforcing asset price and credit cycles. Paul Krugman, laureate of the Nobel Prize in Economics, described the run on the shadow banking system as the "core of what happened '' to cause the crisis.
He referred to this lack of controls as "malign neglect '' and argued that regulation should have been imposed on all banking - like activity. The securitization markets supported by the shadow banking system started to close down in the spring of 2007 and nearly shut down in the fall of 2008. More than a third of the private credit markets thus became unavailable as a source of funds. According to the Brookings Institution, as of June 2009 the traditional banking system did not have the capital to close this gap: "It would take a number of years of strong profits to generate sufficient capital to support that additional lending volume. '' The authors also indicate that some forms of securitization are "likely to vanish forever, having been an artifact of excessively loose credit conditions. '' Rapid increases in a number of commodity prices followed the collapse of the housing bubble. The price of oil nearly tripled from $50 to $147 from early 2007 to 2008, before plunging as the financial crisis began to take hold in late 2008. Experts debate the causes, with some attributing it to speculative flow of money from housing and other investments into commodities, some to monetary policy, and some to the increasing perception of raw materials scarcity in a fast - growing world, which led to long positions being taken in those markets, such as China 's increasing presence in Africa. An increase in oil prices tends to divert a larger share of consumer spending into gasoline, which creates downward pressure on economic growth in oil importing countries, as wealth flows to oil - producing states. A pattern of spiking instability in the price of oil over the decade leading up to the price high of 2008 has recently been identified. The destabilizing effects of this price variance have been proposed as a contributory factor in the financial crisis. Copper prices increased at the same time as oil prices. Copper traded at about $2,500 per ton from 1990 until 1999, when it fell to about $1,600. The price slump lasted until 2004, when a price surge began, pushing copper to $7,040 per ton by 2008. Nickel prices boomed in the late 1990s, then declined from around $51,000 / £ 36,700 per metric ton in May 2007 to about $11,550 / £ 8,300 per metric ton in January 2009. Prices were only just starting to recover as of January 2010, but most of Australia 's nickel mines had gone bankrupt by then. As the price for high grade nickel sulphate ore recovered in 2010, so did the Australian nickel mining industry. Coinciding with these price fluctuations, long - only commodity index funds became popular -- by one estimate investment increased from $90 billion in 2006 to $200 billion at the end of 2007, while commodity prices increased 71 % -- which raised concern as to whether these index funds caused the commodity bubble. The empirical research has been mixed. Another analysis is that the financial crisis was merely a symptom of another, deeper crisis: a systemic crisis of capitalism itself. Ravi Batra 's theory is that growing inequality of financial capitalism produces speculative bubbles that burst and result in depression and major political changes. He has also suggested that a "demand gap '' related to differing wage and productivity growth explains deficit and debt dynamics important to stock market developments. John Bellamy Foster, a political economy analyst and editor of the Monthly Review, believes that the decrease in GDP growth rates since the early 1970s is due to increasing market saturation.
The conventional Marxist explanation of capitalist crises was pointed to by economists Andrew Kliman, Michael Roberts, and Guglielmo Carchedi, in contradistinction to the Monthly Review school represented by Foster. These Marxist economists do not point to low wages or underconsumption as the cause of the crisis, but instead point to capitalism 's long - term tendency of the rate of profit to fall as the underlying cause of crises generally. From this point of view, the problem was the inability of capital to grow or accumulate at sufficient rates through productive investment alone. Low rates of profit in productive sectors led to speculative investment in riskier assets, where there was potential for greater return on investment. The speculative frenzy of the late 90s and 2000s was, in this view, a consequence of a rising organic composition of capital, expressed through the fall in the rate of profit. According to Michael Roberts, the fall in the rate of profit "eventually triggered the credit crunch of 2007 when credit could no longer support profits ''. In 2005, John C. Bogle wrote that a series of challenges face capitalism that have contributed to past financial crises and have not been sufficiently addressed: Corporate America went astray largely because the power of managers went virtually unchecked by our gatekeepers for far too long... They failed to ' keep an eye on these geniuses ' to whom they had entrusted the responsibility of the management of America 's great corporations. Echoing the central thesis of James Burnham 's 1941 seminal book, The Managerial Revolution, Bogle cites particular issues, including: An analysis conducted by Mark Roeder, a former executive at the Swiss - based UBS Bank, suggested that large - scale momentum, or The Big Mo "played a pivotal role '' in the 2008 -- 09 global financial crisis. Roeder suggested that "recent technological advances, such as computer - driven trading programs, together with the increasingly interconnected nature of markets, has magnified the momentum effect. This has made the financial sector inherently unstable. '' Robert Reich attributes the current economic downturn to the stagnation of wages in the United States, particularly those of the hourly workers who comprise 80 % of the workforce. He says this stagnation forced the population to borrow to meet the cost of living. Economists Ailsa McKay and Margunn Bjørnholt argue that the financial crisis and the response to it revealed a crisis of ideas in mainstream economics and within the economics profession, and call for a reshaping of both the economy, economic theory and the economics profession. The former Governor of the Reserve Bank of India, Raghuram Rajan, had predicted the crisis in 2005 when he became chief economist at the International Monetary Fund. In 2005, at a celebration honouring Alan Greenspan, who was about to retire as chairman of the US Federal Reserve, Rajan delivered a controversial paper that was critical of the financial sector. In that paper, Rajan "argued that disaster might loom. '' Rajan argued that financial sector managers were encouraged to "take risks that generate severe adverse consequences with small probability but, in return, offer generous compensation the rest of the time. These risks are known as tail risks. 
But perhaps the most important concern is whether banks will be able to provide liquidity to financial markets so that if the tail risk does materialise, financial positions can be unwound and losses allocated so that the consequences to the real economy are minimised. '' The financial crisis was not widely predicted by mainstream economists. Karim Abadir, based on his work with Gabriel Talmain, predicted the timing of the recession, the trigger of which had already begun manifesting itself in the real economy by early 2007. A number of heterodox economists predicted the crisis, with varying arguments. Dirk Bezemer in his research credits (with supporting argument and estimates of timing) 12 economists with predicting the crisis: Dean Baker (US), Wynne Godley (UK), Fred Harrison (UK), Michael Hudson (US), Eric Janszen (US), Steve Keen (Australia), Jakob Brøchner Madsen & Jens Kjaer Sørensen (Denmark), Med Jones (US), Kurt Richebächer (US), Nouriel Roubini (US), Peter Schiff (US), and Robert Shiller (US). Other experts who gave early indications of a financial crisis have also been cited. The Austrian economic school regarded the crisis as a vindication and classic example of a predictable credit - fueled bubble whose collapse -- the disregarded but inevitable effect of an artificial, manufactured laxity in monetary supply -- could not be forestalled, a perspective that even former Fed Chair Alan Greenspan in Congressional testimony confessed himself forced to return to. A cover story in BusinessWeek magazine claims that economists mostly failed to predict the worst international economic crisis since the Great Depression of the 1930s. The Wharton School of the University of Pennsylvania 's online business journal examines why economists failed to predict a major global financial crisis. Popular articles published in the mass media have led the general public to believe that the majority of economists have failed in their obligation to predict the financial crisis. For example, an article in the New York Times notes that economist Nouriel Roubini warned of such a crisis as early as September 2006, and the article goes on to state that the profession of economics is bad at predicting recessions. According to The Guardian, Roubini was ridiculed for predicting a collapse of the housing market and worldwide recession, while The New York Times labelled him "Dr. Doom ''. Shiller, an expert in housing markets, wrote an article a year before the collapse of Lehman Brothers in which he predicted that a slowing US housing market would cause the housing bubble to burst, leading to financial collapse. Schiff regularly appeared on television in the years before the crisis and warned of the impending real estate collapse. Within mainstream financial economics, most believe that financial crises are simply unpredictable, following Eugene Fama 's efficient - market hypothesis and the related random - walk hypothesis, which state respectively that markets contain all information about possible future movements, and that the movements of financial prices are random and unpredictable. Recent research casts doubt on the accuracy of "early warning '' systems of potential crises, which must also predict their timing.
Stock trader and financial risk engineer Nassim Nicholas Taleb, author of the 2007 book The Black Swan, spent years warning against the breakdown of the banking system in particular and the economy in general, owing to their use of bad risk models and reliance on forecasting, and framed the problem as part of "robustness and fragility". He also took action against the establishment view by making a big financial bet against banking stocks and making a fortune from the crisis ("They didn't listen, so I took their money"). According to David Brooks of The New York Times, "Taleb not only has an explanation for what's happening, he saw it coming." The US stock market peaked in October 2007, when the Dow Jones Industrial Average index exceeded 14,000 points. It then entered a pronounced decline, which accelerated markedly in October 2008. By March 2009, the Dow Jones average had reached a trough of around 6,600. Four years later, it hit an all-time high. It is probable, but debated, that the Federal Reserve's aggressive policy of quantitative easing spurred the partial recovery in the stock market. Market strategist Phil Dow believes distinctions exist "between the current market malaise" and the Great Depression. He says the Dow Jones average's fall of more than 50% over a period of 17 months is similar to a 54.7% fall in the Great Depression, which was followed by a total drop of 89% over the following 16 months. "It's very troubling if you have a mirror image," said Dow. Floyd Norris, the chief financial correspondent of The New York Times, wrote in a blog entry in March 2009 that the decline had not been a mirror image of the Great Depression, explaining that although the magnitudes of decline were nearly the same at that point, the decline had begun much faster in 2007, and that the past year ranked only eighth among the worst recorded years of percentage drops in the Dow. The past two years, however, ranked third. The first notable event signaling a possible financial crisis occurred in France on August 9, 2007, when BNP Paribas, citing "a complete evaporation of liquidity", blocked withdrawals from three of its hedge funds. The significance of this event was not immediately recognized, but it soon led to a panic as investors and savers attempted to liquidate assets deposited in highly leveraged financial institutions. The International Monetary Fund estimated that large US and European banks lost more than $1 trillion on toxic assets and from bad loans from January 2007 to September 2009, and expected these losses to top $2.8 trillion from 2007 to 2010, with US bank losses forecast to hit $1 trillion and European bank losses $1.6 trillion. The International Monetary Fund (IMF) estimated in 2009 that US banks were about 60% of the way through their losses, but British and eurozone banks only 40%. One of the first victims was Northern Rock, a medium-sized British bank. The highly leveraged nature of its business led the bank to request liquidity support from the Bank of England. This in turn led to investor panic and a bank run in mid-September 2007. Calls by Liberal Democrat Treasury Spokesman Vince Cable to nationalise the institution were initially ignored; in February 2008, however, the British government (having failed to find a private-sector buyer) relented, and the bank was taken into public hands. Northern Rock's problems proved to be an early indication of the troubles that would soon befall other banks and financial institutions.
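The role leverage played in runs like Northern Rock's can be sketched with simple arithmetic. The balance-sheet figures below are assumptions chosen for illustration, not the bank's actual numbers: the point is only that high leverage leaves a thin equity cushion, so a small fall in asset values threatens insolvency and makes funding runs self-fulfilling.

    # Illustrative arithmetic; the figures are assumed, not Northern Rock's.
    assets = 100.0                   # assets funded by...
    equity = 4.0                     # ...a thin slice of equity
    debt = assets - equity           # the rest is borrowed
    leverage = assets / equity       # 25x leverage
    wipeout_fall = equity / assets   # 0.04: a 4% fall in asset values
    print(leverage, wipeout_fall)    # => 25.0 0.04
    # At 25x leverage, a fall of just 4% in asset values exhausts the equity,
    # which is why creditors of highly leveraged institutions run early.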
The first visible institution to run into trouble in the United States was the Southern California-based IndyMac, a spin-off of Countrywide Financial. Before its failure, IndyMac Bank was the largest savings and loan association in the Los Angeles market and the seventh-largest mortgage originator in the United States. The failure of IndyMac Bank on July 11, 2008, was the fourth-largest bank failure in United States history up until the crisis precipitated even larger failures, and the second-largest failure of a regulated thrift. IndyMac Bank's parent corporation was IndyMac Bancorp, until the FDIC seized IndyMac Bank; IndyMac Bancorp filed for Chapter 7 bankruptcy in July 2008. IndyMac Bank was founded as Countrywide Mortgage Investment in 1985 by David S. Loeb and Angelo Mozilo as a means of collateralizing Countrywide Financial loans too big to be sold to Freddie Mac and Fannie Mae. In 1997, Countrywide spun off IndyMac as an independent company run by Mike Perry, who remained its CEO until the downfall of the bank in July 2008. The primary causes of its failure were largely associated with its business strategy of originating and securitizing Alt-A loans on a large scale. This strategy resulted in rapid growth and a high concentration of risky assets. From its inception as a savings association in 2000, IndyMac grew to become the seventh-largest savings and loan and the ninth-largest originator of mortgage loans in the United States. During 2006, IndyMac originated over $90 billion of mortgages. IndyMac's aggressive growth strategy, use of Alt-A and other nontraditional loan products, insufficient underwriting, credit concentrations in residential real estate in the California and Florida markets (states, alongside Nevada and Arizona, where the housing bubble was most pronounced), and heavy reliance on costly funds borrowed from a Federal Home Loan Bank (FHLB) and from brokered deposits led to its demise when the mortgage market declined in 2007. IndyMac often made loans without verification of the borrower's income or assets, and to borrowers with poor credit histories. Appraisals obtained by IndyMac on underlying collateral were often questionable as well. As an Alt-A lender, IndyMac's business model was to offer loan products to fit the borrower's needs, using an extensive array of risky option adjustable-rate mortgages (option ARMs), subprime loans, 80/20 loans, and other nontraditional products. Ultimately, loans were made to many borrowers who simply could not afford to make their payments. The thrift remained profitable only as long as it was able to sell those loans in the secondary mortgage market. IndyMac resisted efforts to regulate its involvement in those loans or to tighten their issuing criteria: see the comment by Ruthann Melbourne, Chief Risk Officer, to the regulating agencies. On May 12, 2008, in a small note in the "Capital" section of what would become its last 10-Q released before receivership, IndyMac revealed, but did not admit, that it was no longer a well-capitalized institution and that it was headed for insolvency. IndyMac reported that during April 2008, Moody's and Standard & Poor's downgraded the ratings on a significant number of mortgage-backed security (MBS) bonds, including $160 million issued by IndyMac that the bank had retained in its MBS portfolio. IndyMac concluded that these downgrades would have harmed the company's risk-based capital ratio as of June 30, 2008.
Had these lowered ratings been in effect at March 31, 2008, IndyMac concluded, the bank's total risk-based capital ratio would have been 9.27%. IndyMac warned that if its regulators found its capital position to have fallen from "well capitalized" (a minimum 10% risk-based capital ratio) to "adequately capitalized" (an 8-10% risk-based capital ratio), the bank might no longer be able to use brokered deposits as a source of funds. Senator Charles Schumer (D-NY) later pointed out that brokered deposits made up more than 37 percent of IndyMac's total deposits, and asked the Federal Deposit Insurance Corporation (FDIC) whether it had considered ordering IndyMac to reduce its reliance on these deposits. With $18.9 billion in total deposits reported on March 31, Senator Schumer would have been referring to a little over $7 billion in brokered deposits. While the breakout of maturities of these deposits is not known exactly, a simple averaging would have put the threat of brokered-deposit losses to IndyMac at $500 million a month, had the regulator disallowed IndyMac from acquiring new brokered deposits on June 30. IndyMac was taking new measures to preserve capital, such as deferring interest payments on some preferred securities. Dividends on common shares had already been suspended for the first quarter of 2008, after being cut in half the previous quarter. The company still had not secured a significant capital infusion nor found a ready buyer. IndyMac reported that the bank's risk-based capital was only $47 million above the minimum required for this 10% mark. But it did not reveal that some of the $47 million in capital it claimed to have as of March 31, 2008, was fabricated. When home prices declined in the latter half of 2007 and the secondary mortgage market collapsed, IndyMac was forced to hold $10.7 billion of loans it could not sell in the secondary market. Its reduced liquidity was further exacerbated in late June 2008, when account holders withdrew $1.55 billion, or about 7.5% of IndyMac's deposits. This run on the thrift followed the public release of a letter from Senator Charles Schumer to the FDIC and OTS, which outlined the Senator's concerns with IndyMac. While the run was a contributing factor in the timing of IndyMac's demise, the underlying cause of the failure was the unsafe and unsound manner in which the thrift was operated. On June 26, 2008, Senator Charles Schumer (D-NY), a member of the Senate Banking Committee, chairman of Congress's Joint Economic Committee, and the third-ranking Democrat in the Senate, released several letters he had sent to regulators, which warned that "the possible collapse of big mortgage lender IndyMac Bancorp Inc. poses significant financial risks to its borrowers and depositors, and regulators may not be ready to intervene to protect them." Some worried depositors began to withdraw money. On July 7, 2008, IndyMac announced on the company blog the closure of both its retail lending and wholesale divisions, the halting of new loan submissions, and the cutting of 3,800 jobs. On July 11, 2008, citing liquidity concerns, the FDIC put IndyMac Bank into conservatorship. A bridge bank, IndyMac Federal Bank, FSB, was established to assume control of IndyMac Bank's assets, its secured liabilities, and its insured deposit accounts. The FDIC announced plans to open IndyMac Federal Bank, FSB on July 14, 2008.
Until then, depositors would have access to their insured deposits through ATMs, their existing checks, and their existing debit cards. Telephone and Internet account access was restored when the bank reopened. The FDIC guaranteed the funds of all insured accounts up to US$100,000, and declared a special advance dividend to the roughly 10,000 depositors with funds in excess of the insured amount, guaranteeing 50% of any amounts in excess of $100,000. Yet, even with the pending sale of IndyMac to IMB Management Holdings, an estimated 10,000 uninsured depositors of IndyMac remained at a loss of over $270 million. With $32 billion in assets, IndyMac Bank was one of the largest bank failures in American history. IndyMac Bancorp filed for Chapter 7 bankruptcy on July 31, 2008. Initially the companies affected were those directly involved in home construction and mortgage lending, such as Northern Rock and Countrywide Financial, as they could no longer obtain financing through the credit markets. Over 100 mortgage lenders went bankrupt during 2007 and 2008. Concerns that investment bank Bear Stearns would collapse in March 2008 resulted in its fire sale to JP Morgan Chase. The financial institution crisis hit its peak in September and October 2008. Several major institutions either failed, were acquired under duress, or were subject to government takeover. These included Lehman Brothers, Merrill Lynch, Fannie Mae, Freddie Mac, Washington Mutual, Wachovia, Citigroup, and AIG. On October 6, 2008, three weeks after Lehman Brothers filed the largest bankruptcy in US history, Lehman's former CEO Richard S. Fuld Jr. found himself before Representative Henry A. Waxman, the California Democrat who chaired the House Committee on Oversight and Government Reform. Fuld said he was a victim of the collapse, blaming a "crisis of confidence" in the markets for dooming his firm. In September 2008, the crisis hit its most critical stage. There was the equivalent of a bank run on the money market funds, which frequently invest in commercial paper issued by corporations to fund their operations and payrolls. Withdrawals from money markets were $144.5 billion during one week, versus $7.1 billion the week prior. This interrupted the ability of corporations to roll over (replace) their short-term debt. The US government responded by extending insurance for money market accounts, analogous to bank deposit insurance, via a temporary guarantee, and with Federal Reserve programs to purchase commercial paper. The TED spread, an indicator of perceived credit risk in the general economy, spiked up in July 2007, remained volatile for a year, then spiked even higher in September 2008, reaching a record 4.65% on October 10, 2008. In a dramatic meeting on September 18, 2008, Treasury Secretary Henry Paulson and Fed chairman Ben Bernanke met with key legislators to propose a $700 billion emergency bailout. Bernanke reportedly told them: "If we don't do this, we may not have an economy on Monday." The Emergency Economic Stabilization Act, which implemented the Troubled Asset Relief Program (TARP), was signed into law on October 3, 2008. Economist Paul Krugman and US Treasury Secretary Timothy Geithner explain the credit crisis via the implosion of the shadow banking system, which had grown to nearly equal the importance of the traditional commercial banking sector as described above.
Without the ability to obtain investor funds in exchange for most types of mortgage-backed securities or asset-backed commercial paper, investment banks and other entities in the shadow banking system could not provide funds to mortgage firms and other corporations. This meant that nearly one-third of the US lending mechanism was frozen, and it remained frozen into June 2009. According to the Brookings Institution, at that time the traditional banking system did not have the capital to close this gap: "It would take a number of years of strong profits to generate sufficient capital to support that additional lending volume." The authors also indicate that some forms of securitization were "likely to vanish forever, having been an artifact of excessively loose credit conditions." While traditional banks raised their lending standards, it was the collapse of the shadow banking system that was the primary cause of the reduction in funds available for borrowing. There is a direct relationship between declines in wealth and declines in consumption and business investment, which, along with government spending, represent the economic engine. Between June 2007 and November 2008, Americans lost more than a quarter of their collective net worth, according to estimates. By early November 2008, a broad US stock index, the S&P 500, was down 45% from its 2007 high. Housing prices had dropped 20% from their 2006 peak, with futures markets signaling a 30-35% potential drop. Total home equity in the United States, which was valued at $13 trillion at its peak in 2006, had dropped to $8.8 trillion by mid-2008 and was still falling in late 2008. Total retirement assets, Americans' second-largest household asset, dropped by 22%, from $10.3 trillion in 2006 to $8 trillion in mid-2008. During the same period, savings and investment assets (apart from retirement savings) lost $1.2 trillion and pension assets lost $1.3 trillion. Taken together, these losses total a staggering $8.3 trillion. Since peaking in the second quarter of 2007, household wealth was down $14 trillion. Further, US homeowners had extracted significant equity from their homes in the years leading up to the crisis, which they could no longer do once housing prices collapsed. Readily obtainable cash used by consumers from home equity extraction more than doubled, from $627 billion in 2001 to $1,428 billion in 2005, as the housing bubble built, a total of nearly $5 trillion over the period. US home mortgage debt relative to GDP increased from an average of 46% during the 1990s to 73% during 2008, reaching $10.5 trillion. To offset this decline in consumption and lending capacity, the US government and US Federal Reserve committed $13.9 trillion, of which $6.8 trillion had been invested or spent as of June 2009. In effect, the Fed went from being the "lender of last resort" to the "lender of only resort" for a significant portion of the economy. In some cases the Fed was considered the "buyer of last resort". In November 2008, economist Dean Baker observed: There is a really good reason for tighter credit. Tens of millions of homeowners who had substantial equity in their homes two years ago have little or nothing today. Businesses are facing the worst downturn since the Great Depression. This matters for credit decisions. A homeowner with equity in her home is very unlikely to default on a car loan or credit card debt. They will draw on this equity rather than lose their car and / or have a default placed on their credit record.
On the other hand, a homeowner who has no equity is a serious default risk. In the case of businesses, their creditworthiness depends on their future profits. Profit prospects look much worse in November 2008 than they did in November 2007... While many banks are obviously at the brink, consumers and businesses would be facing a much harder time getting credit right now even if the financial system were rock solid. The problem with the economy is the loss of close to $6 trillion in housing wealth and an even larger amount of stock wealth. At the heart of the portfolios of many of these institutions were investments whose assets had been derived from bundled home mortgages. Exposure to these mortgage-backed securities, or to the credit derivatives used to insure them against failure, caused the collapse or takeover of several key firms such as Lehman Brothers, AIG, Merrill Lynch, and HBOS. The crisis rapidly developed and spread into a global economic shock, resulting in a number of European bank failures, declines in various stock indexes, and large reductions in the market value of equities and commodities. Both mortgage-backed securities (MBS) and collateralized debt obligations (CDOs) were purchased by corporate and institutional investors globally. Derivatives such as credit default swaps also increased the linkage between large financial institutions. Moreover, the de-leveraging of financial institutions, as assets were sold to pay back obligations that could not be refinanced in frozen credit markets, further accelerated the solvency crisis and caused a decrease in international trade. World political leaders, national ministers of finance, and central bank directors coordinated their efforts to reduce fears, but the crisis continued. At the end of October 2008 a currency crisis developed, with investors transferring vast capital resources into stronger currencies such as the yen, the dollar, and the Swiss franc, leading many emerging economies to seek aid from the International Monetary Fund. Several commentators suggested that if the liquidity crisis continued, an extended recession or worse could occur. The continuing development of the crisis prompted fears of a global economic collapse, although by late 2008 there were many cautiously optimistic forecasters in addition to some prominent sources who remained negative. The financial crisis was expected to yield the biggest banking shakeout since the savings-and-loan meltdown. Investment bank UBS stated on October 6 that 2008 would see a clear global recession, with recovery unlikely for at least two years. Three days later, UBS economists announced that the "beginning of the end" of the crisis had begun, with the world starting to take the actions necessary to fix it: capital injection by governments, injection made systemically, and interest rate cuts to help borrowers. The United Kingdom had started systemic injection, and the world's central banks were now cutting interest rates. UBS emphasized that the United States needed to implement systemic injection, and further emphasized that this fixes only the financial crisis, but that in economic terms "the worst is still to come". UBS quantified its expected recession durations on October 16: the Eurozone's would last two quarters, the United States' would last three quarters, and the United Kingdom's would last four quarters. The economic crisis in Iceland involved all three of the country's major banks. Relative to the size of its economy, Iceland's banking collapse is the largest suffered by any country in economic history.
At the end of October, UBS revised its outlook downwards: the forthcoming recession would be the worst since the early-1980s recession, with negative 2009 growth for the US, the Eurozone, and the UK, and very limited recovery in 2010, though not as bad as the Great Depression. The Brookings Institution reported in June 2009 that US consumption accounted for more than a third of the growth in global consumption between 2000 and 2007. "The US economy has been spending too much and borrowing too much for years and the rest of the world depended on the US consumer as a source of global demand." With a recession in the US and the increased savings rate of US consumers, declines in growth elsewhere were dramatic. For the first quarter of 2009, the annualized rate of decline in GDP was 14.4% in Germany, 15.2% in Japan, 7.4% in the UK, 18% in Latvia, 9.8% in the Euro area, and 21.5% in Mexico. Some developing countries that had seen strong economic growth saw significant slowdowns. For example, growth forecasts for Cambodia showed a fall from more than 10% in 2007 to close to zero in 2009, and Kenya was expected to achieve only 3-4% growth in 2009, down from 7% in 2007. According to research by the Overseas Development Institute, reductions in growth can be attributed to falls in trade, commodity prices, investment, and remittances sent from migrant workers (which reached a record $251 billion in 2007, but have fallen in many countries since). This has stark implications and has led to a dramatic rise in the number of households living below the poverty line, whether 300,000 in Bangladesh or 230,000 in Ghana. States with fragile political systems especially feared that Western investors would withdraw their money because of the crisis. Bruno Wenn of the German DEG recommended sound economic policymaking and good governance to attract new investors. The World Bank reported in February 2009 that the Arab world was far less severely affected by the credit crunch. With generally good balance-of-payments positions coming into the crisis, or with alternative sources of financing for their large current account deficits, such as remittances, foreign direct investment (FDI), or foreign aid, Arab countries were able to avoid going to the market in the latter part of 2008. This group was in the best position to absorb the economic shocks, having entered the crisis in exceptionally strong positions, which gave them a significant cushion against the global downturn. The greatest effect of the global economic crisis on these countries would come in the form of lower oil prices, which remain the single most important determinant of their economic performance. Steadily declining oil prices would force them to draw down reserves and cut down on investments, and significantly lower oil prices could cause a reversal of economic performance, as has been the case in past oil shocks, with the initial impact felt in public finances and employment for foreign workers. The output of goods and services produced by labor and property located in the United States decreased at an annual rate of approximately 6% in the fourth quarter of 2008 and the first quarter of 2009, versus activity in the year-ago periods. The US unemployment rate increased to 10.1% by October 2009, the highest rate since 1983 and roughly twice the pre-crisis rate. The average hours per work week declined to 33, the lowest level since the government began collecting the data in 1964. With the decline of gross domestic product came a decline in innovation.
With fewer resources to risk in creative destruction, the number of patent applications flatlined. Compared with the previous five years of exponential increases in patent applications, this stagnation correlated with the similar drop in GDP during the same period. Typical American families did not fare well, nor did those "wealthy-but-not-wealthiest" families just beneath the pyramid's top. On the other hand, half of the poorest families did not have wealth declines at all during the crisis. The Federal Reserve surveyed 4,000 households between 2007 and 2009 and found that the total wealth of 63 percent of all Americans declined in that period. Of the richest families, 77 percent saw a decrease in total wealth, while only 50 percent of those at the bottom of the pyramid suffered a decrease. A 2015 study commissioned by the ACLU found that white home-owning households recovered from the financial crisis faster than black home-owning households, and projected that the financial crisis would likely widen the racial wealth gap in the US. On November 3, 2008, the European Commission in Brussels predicted extremely weak GDP growth of 0.1% for 2009 for the countries of the Eurozone (France, Germany, Italy, Belgium, etc.), and even negative figures for the UK (−1.0%), Ireland, and Spain. On November 6, the IMF in Washington, D.C., released forecasts predicting a worldwide recession, with growth of −0.3% for 2009, averaged over the developed economies. On the same day, the Bank of England and the European Central Bank reduced their interest rates from 4.5% to 3% and from 3.75% to 3.25%, respectively. As a consequence, starting from November 2008, several countries launched large "help packages" for their economies. The US Federal Reserve Open Market Committee release in June 2009 stated: ... the pace of economic contraction is slowing. Conditions in financial markets have generally improved in recent months. Household spending has shown further signs of stabilizing but remains constrained by ongoing job losses, lower housing wealth, and tight credit. Businesses are cutting back on fixed investment and staffing but appear to be making progress in bringing inventory stocks into better alignment with sales. Although economic activity is likely to remain weak for a time, the Committee continues to anticipate that policy actions to stabilize financial markets and institutions, fiscal and monetary stimulus, and market forces will contribute to a gradual resumption of sustainable economic growth in a context of price stability. Economic projections from the Federal Reserve and Reserve Bank presidents included a return to typical growth levels (GDP) of 2.5-3% in 2010; an unemployment plateau in 2009 and 2010 around 10%, with moderation in 2011; and inflation remaining at typical levels of around 1-2%. The US Federal Reserve and central banks around the world took steps to expand money supplies to avoid the risk of a deflationary spiral, in which lower wages and higher unemployment lead to a self-reinforcing decline in global consumption. In addition, governments enacted large fiscal stimulus packages, borrowing and spending to offset the reduction in private-sector demand caused by the crisis.
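The rationale for such stimulus is often explained with the textbook Keynesian multiplier, a standard result rather than a claim made by the sources above: each dollar of government spending becomes someone's income, a fraction of which is spent again, and so on. The marginal propensity to consume (MPC) used below is an assumed value chosen purely for illustration.

    def multiplier(mpc: float) -> float:
        # Successive spending rounds form a geometric series:
        # 1 + mpc + mpc**2 + ... = 1 / (1 - mpc)
        return 1.0 / (1.0 - mpc)

    stimulus = 1.0e12         # roughly the scale of the US packages
    assumed_mpc = 0.6         # illustrative assumption, not an estimate
    print(multiplier(assumed_mpc) * stimulus)   # => 2.5e12 of total demand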
The US Federal Reserve's new and expanded liquidity facilities were intended to enable the central bank to fulfill its traditional lender-of-last-resort role during the crisis while mitigating stigma, broadening the set of institutions with access to liquidity, and increasing the flexibility with which institutions could tap such liquidity. The credit freeze had brought the global financial system to the brink of collapse, and the response of the Federal Reserve, the European Central Bank, the Bank of England, and other central banks was immediate and dramatic. During the last quarter of 2008, these central banks purchased US$2.5 trillion of government debt and troubled private assets from banks. This was the largest liquidity injection into the credit market, and the largest monetary policy action, in world history. Following a model initiated by the United Kingdom's bank rescue package, the governments of European nations and the US guaranteed the debt issued by their banks and raised the capital of their national banking systems, ultimately purchasing $1.5 trillion of newly issued preferred stock in their major banks. In October 2010, Nobel laureate Joseph Stiglitz explained how the US Federal Reserve was implementing another monetary policy (creating currency) as a method to combat the liquidity trap. By creating $600 billion and inserting it directly into banks, the Federal Reserve intended to spur banks to finance more domestic loans and refinance mortgages. However, banks instead spent the money in more profitable areas, investing internationally in emerging markets. Banks were also investing in foreign currencies, which Stiglitz and others pointed out could lead to currency wars, while China redirected its currency holdings away from the United States. Governments also bailed out a variety of firms, as discussed above, incurring large financial obligations. Various US government agencies committed or spent trillions of dollars in loans, asset purchases, guarantees, and direct spending. Significant controversy accompanied the bailout, leading to the development of a variety of "decision-making frameworks" to help balance competing policy interests during times of financial crisis. The US executed two stimulus packages, totaling nearly $1 trillion, during 2008 and 2009. Other countries also implemented fiscal stimulus plans beginning in 2008. United States President Barack Obama and key advisers introduced a series of regulatory proposals in June 2009. The proposals addressed consumer protection, executive pay, bank financial cushions or capital requirements, expanded regulation of the shadow banking system and derivatives, and enhanced authority for the Federal Reserve to safely wind down systemically important institutions, among others. In January 2010, Obama proposed additional regulations limiting the ability of banks to engage in proprietary trading. The proposals were dubbed "the Volcker Rule", in recognition of Paul Volcker, who had publicly argued for the proposed changes. The US Senate passed a reform bill in May 2010, following the House, which had passed a bill in December 2009; the two bills then had to be reconciled. The New York Times provided a comparative summary of the features of the two bills, which addressed to varying extents the principles enumerated by the Obama administration.
For instance, the Volcker Rule against proprietary trading was not part of the legislation, though in the Senate bill regulators had the discretion, but not the obligation, to prohibit these trades. European regulators introduced Basel III regulations for banks: these raised capital ratios, set limits on leverage, narrowed the definition of capital to exclude subordinated debt, limited counterparty risk, and added new liquidity requirements. Critics argue that Basel III does not address the problem of faulty risk-weightings. Major banks suffered losses from AAA-rated securities created by financial engineering (which creates apparently risk-free assets out of high-risk collateral) that required less capital under Basel II. Lending to AA-rated sovereigns carries a risk weight of zero, thus increasing lending to governments and leading to the next crisis (a simplified worked example of this risk-weighting arithmetic appears at the end of this article). Johan Norberg argues that regulations (Basel III among others) have indeed led to excessive lending to risky governments (see European sovereign-debt crisis) and that the ECB pursues even more lending as the solution. At least two major reports on the crisis were produced by Congress: the Financial Crisis Inquiry Commission report, released in January 2011, and a report by the United States Senate Homeland Security Permanent Subcommittee on Investigations entitled Wall Street and the Financial Crisis: Anatomy of a Financial Collapse, released in April 2011. In Iceland in April 2012, the special Landsdómur court convicted former Prime Minister Geir Haarde of mishandling the 2008-2012 Icelandic financial crisis. As of September 2011, no individuals in the UK had been prosecuted for misdeeds during the financial meltdown of 2008. As of 2012, in the United States, a large volume of troubled mortgages remained in place. It had proved impossible for most homeowners facing foreclosure to refinance or modify their mortgages, and foreclosure rates remained high. The US recession that began in December 2007 ended in June 2009, according to the US National Bureau of Economic Research (NBER), and the financial crisis appears to have ended at about the same time. In April 2009, TIME magazine declared "More Quickly Than It Began, The Banking Crisis Is Over." The United States Financial Crisis Inquiry Commission dates the crisis to 2008. President Barack Obama declared on January 27, 2010, "the markets are now stabilized, and we've recovered most of the money we spent on the banks." The New York Times identified March 2009 as the "nadir of the crisis" and noted in 2011 that "Most stock markets around the world are at least 75 percent higher than they were then. Financial stocks, which led the markets down, have also led them up." Nevertheless, the lack of fundamental changes in banking and financial markets worried many market participants, including the International Monetary Fund. The distribution of household incomes in the United States became more unequal during the post-2008 economic recovery, a first for the US but in line with the trend over the last ten economic recoveries since 1949. Income inequality in the United States grew from 2005 to 2012 in more than two out of three metropolitan areas. Median household wealth fell 35% in the US, from $106,591 to $68,839, between 2005 and 2011. The financial crisis generated many articles and books outside of the scholarly and financial press, including articles and books by author William Greider, economist Michael Hudson, author and former bond salesman Michael Lewis, Kevin Phillips, and investment broker Peter Schiff.
In May 2010, a documentary, Overdose: A Film about the Next Financial Crisis, premiered, examining how the financial crisis came about and how the solutions applied by many governments were setting the stage for the next crisis. The film is based on the book Financial Fiasco by Johan Norberg and features Alan Greenspan, with funding from the libertarian think tank the Cato Institute. As chairman of the Federal Reserve, Greenspan had presided over the deregulation of the derivatives market. In October 2010, a documentary film about the crisis, Inside Job, directed by Charles Ferguson, was released by Sony Pictures Classics. In 2011, it was awarded the Academy Award for Best Documentary Feature at the 83rd Academy Awards. Time magazine named "25 People to Blame for the Financial Crisis". Michael Lewis published a best-selling non-fiction book about the crisis, entitled The Big Short. In 2015, it was adapted into a film of the same name, which won the Academy Award for Best Adapted Screenplay. One point raised is the extent to which those outside the markets themselves (i.e., not working for a mainstream investment bank) could forecast the events and be generally less myopic. After the crisis, some observers also noted a change in social relations as a sense of group culpability emerged. Advanced economies had led global economic growth prior to the financial crisis, with "emerging" and "developing" economies lagging behind. The crisis overturned this relationship: the International Monetary Fund found that "advanced" economies accounted for only 31% of global GDP growth from 2007 to 2014, while emerging and developing economies accounted for 69%. [Table: The twenty largest economies contributing to global GDP growth (2007-2014); emerging economies shown in boldface, developed economies in regular type.]
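The Basel-style risk-weighting arithmetic referenced earlier can be made concrete with a stylized sketch. This is an illustration of the mechanism only, not the actual Basel rule set, which is far more detailed: required capital is computed as a percentage (here the 8% minimum) of risk-weighted assets, so a zero risk weight implies zero required capital.

    def required_capital(exposure: float, risk_weight: float) -> float:
        """Stylized Basel-style capital charge: 8% of risk-weighted assets."""
        return exposure * risk_weight * 0.08

    print(required_capital(100.0, 0.0))   # AA-rated sovereign debt: 0.0
    print(required_capital(100.0, 1.0))   # fully weighted corporate loan: 8.0
    # The zero weight on highly rated sovereign debt is the incentive that
    # critics such as Norberg argue channels bank lending toward governments.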
which of the following is the correct order for piagets four stages of cognitive development
Piaget's theory of cognitive development - Wikipedia Piaget's theory of cognitive development is a comprehensive theory about the nature and development of human intelligence. It was created by the Swiss developmental psychologist Jean Piaget (1896-1980). The theory deals with the nature of knowledge itself and how humans gradually come to acquire, construct, and use it. Piaget's theory is mainly known as a developmental stage theory. To Piaget, cognitive development was a progressive reorganization of mental processes resulting from biological maturation and environmental experience. He believed that children construct an understanding of the world around them, experience discrepancies between what they already know and what they discover in their environment, and then adjust their ideas accordingly. Moreover, Piaget claimed that cognitive development is at the center of the human organism, and language is contingent on knowledge and understanding acquired through cognitive development. Piaget's earlier work received the greatest attention. Child-centered classrooms and "open education" are direct applications of Piaget's views. Despite its huge success, Piaget's theory has some limitations that Piaget recognized himself: for example, the theory supports sharp stages rather than continuous development (décalage). Piaget noted that reality is a dynamic system of continuous change. Reality is defined in reference to the two conditions that define dynamic systems. Specifically, he argued that reality involves transformations and states. Transformations refer to all manners of changes that a thing or person can undergo. States refer to the conditions or the appearances in which things or persons can be found between transformations. For example, there might be changes in shape or form (for instance, liquids are reshaped as they are transferred from one vessel to another, and similarly humans change in their characteristics as they grow older), in size (a toddler does not walk and run without falling, but after 7 years of age, the child's sensorimotor anatomy is well developed and skills are now acquired faster), or in placement or location in space and time (e.g., various objects or persons might be found at one place at one time and at a different place at another time). Thus, Piaget argued, if human intelligence is to be adaptive, it must have functions to represent both the transformational and the static aspects of reality. He proposed that operative intelligence is responsible for the representation and manipulation of the dynamic or transformational aspects of reality, and that figurative intelligence is responsible for the representation of the static aspects of reality. Operative intelligence is the active aspect of intelligence. It involves all actions, overt or covert, undertaken in order to follow, recover, or anticipate the transformations of the objects or persons of interest. Figurative intelligence is the more or less static aspect of intelligence, involving all means of representation used to retain in mind the states (i.e., successive forms, shapes, or locations) that intervene between transformations. That is, it involves perception, imitation, mental imagery, drawing, and language. Therefore, the figurative aspects of intelligence derive their meaning from the operative aspects of intelligence, because states cannot exist independently of the transformations that interconnect them.
Piaget stated that the figurative or representational aspects of intelligence are subservient to its operative and dynamic aspects, and therefore that understanding essentially derives from the operative aspect of intelligence. At any time, operative intelligence frames how the world is understood, and it changes if understanding is not successful. Piaget stated that this process of understanding and change involves two basic functions: assimilation and accommodation. Through his study of the field of education, Piaget focused on two processes, which he named assimilation and accommodation. To Piaget, assimilation meant integrating external elements into structures of lives or environments, or those we could have through experience. Assimilation is how humans perceive and adapt to new information. It is the process of fitting new information into pre-existing cognitive schemas: new experiences are reinterpreted to fit into, or assimilate with, old ideas. It occurs when humans are faced with new or unfamiliar information and refer to previously learned information in order to make sense of it. In contrast, accommodation is the process of taking in new information from one's environment and altering pre-existing schemas in order to fit the new information. This happens when the existing schema (knowledge) does not work and needs to be changed to deal with a new object or situation. Accommodation is imperative because it is how people continue to interpret new concepts, schemas, frameworks, and more. Piaget believed that the human brain has been programmed through evolution to seek equilibrium, which is what he believed ultimately influences structures by the internal and external processes through assimilation and accommodation. Piaget's understanding was that assimilation and accommodation cannot exist without each other; they are two sides of a coin. To assimilate an object into an existing mental schema, one first needs to take into account, or accommodate to, the particularities of this object to a certain extent. For instance, to recognize (assimilate) an apple as an apple, one must first focus (accommodate) on the contour of this object; to do this, one needs to roughly recognize the size of the object. Development increases the balance, or equilibration, between these two functions. When in balance with each other, assimilation and accommodation generate mental schemas of the operative intelligence. When one function dominates over the other, they generate representations which belong to figurative intelligence. Piaget proposed four stages of cognitive development: the sensorimotor, preoperational, concrete operational, and formal operational periods. The sensorimotor stage is the first of the four stages of cognitive development, which "extends from birth to the acquisition of language". In this stage, infants progressively construct knowledge and understanding of the world by coordinating experiences (such as vision and hearing) with physical interactions with objects (such as grasping, sucking, and stepping). Infants gain knowledge of the world from the physical actions they perform within it. They progress from reflexive, instinctual action at birth to the beginning of symbolic thought toward the end of the stage. Children learn that they are separate from the environment.
They can think about aspects of the environment, even though these may be outside the reach of the child's senses. In this stage, according to Piaget, the development of object permanence is one of the most important accomplishments. Object permanence is a child's understanding that objects continue to exist even though he or she cannot see or hear them. Peek-a-boo is a good test of this. By the end of the sensorimotor period, children develop a permanent sense of self and object. Piaget divided the sensorimotor stage into six sub-stages. By observing sequences of play, Piaget was able to demonstrate that, towards the end of the second year, a qualitatively new kind of psychological functioning occurs, known as the pre-operational stage. It starts when the child begins to learn to speak, at about age two, and lasts up until the age of seven. During the pre-operational stage of cognitive development, Piaget noted that children do not yet understand concrete logic and cannot mentally manipulate information. Children's playing and pretending increase during this stage. However, the child still has trouble seeing things from different points of view. Children's play is mainly categorized by symbolic play and the manipulation of symbols. Such play is demonstrated by the idea of checkers being snacks, pieces of paper being plates, and a box being a table. Their use of symbols exemplifies play in the absence of the actual objects involved. The pre-operational stage is sparse and logically inadequate in regard to mental operations. The child is able to form stable concepts as well as magical beliefs. The child, however, is still not able to perform operations, which are tasks that the child can do mentally rather than physically. Thinking in this stage is still egocentric, meaning the child has difficulty seeing the viewpoint of others. The pre-operational stage is split into two substages: the symbolic function substage and the intuitive thought substage. The symbolic function substage is when children are able to understand, represent, remember, and picture objects in their mind without having the object in front of them. The intuitive thought substage is when children tend to ask the questions "why?" and "how come?" This is the stage when children want to understand everything. At about two to four years of age, children cannot yet manipulate and transform information in a logical way. However, they can now think in images and symbols. Other examples of mental abilities are language and pretend play. Symbolic play is when children develop imaginary friends or role-play with friends. Children's play becomes more social and they assign roles to each other. Some examples of symbolic play include playing house or having a tea party. The type of symbolic play in which children engage is connected with their level of creativity and ability to connect with others. Additionally, the quality of their symbolic play can have consequences for their later development. For example, young children whose symbolic play is of a violent nature tend to exhibit less prosocial behavior and are more likely to display antisocial tendencies in later years. In this stage, there are still limitations, such as egocentrism and precausal thinking. Egocentrism occurs when a child is unable to distinguish between their own perspective and that of another person. Children tend to stick to their own viewpoint, rather than consider the view of others.
Indeed, they are not even aware that such a concept as "different viewpoints" exists. Egocentrism can be seen in an experiment performed by Piaget and Swiss developmental psychologist Bärbel Inhelder, known as the three mountain problem. In this experiment, three views of a mountain are shown to the child, who is asked what a traveling doll would see at the various angles. The child will consistently describe what they can see from the position in which they are seated, regardless of the angle from which they are asked to take the doll's perspective. Egocentrism would also cause a child to believe, "I like Sesame Street, so Daddy must like Sesame Street, too." Similar to preoperational children's egocentric thinking is their structuring of cause-and-effect relationships. Piaget coined the term "precausal thinking" to describe the way in which preoperational children use their own existing ideas or views, as in egocentrism, to explain cause-and-effect relationships. Three main concepts of causality displayed by children in the preoperational stage are animism, artificialism, and transductive reasoning. Animism is the belief that inanimate objects are capable of actions and have lifelike qualities. An example could be a child believing that the sidewalk was mad and made them fall down, or that the stars twinkle in the sky because they are happy. Artificialism refers to the belief that environmental characteristics can be attributed to human actions or interventions. For example, a child might say that it is windy outside because someone is blowing very hard, or that the clouds are white because someone painted them that color. Finally, precausal thinking is characterized by transductive reasoning. Transductive reasoning is when a child fails to understand the true relationships between cause and effect. Unlike deductive or inductive reasoning (general to specific, or specific to general), transductive reasoning refers to when a child reasons from specific to specific, drawing a relationship between two separate events that are otherwise unrelated. For example, if a child hears a dog bark and then a balloon pop, the child would conclude that because the dog barked, the balloon popped. Between about the ages of 4 and 7, children tend to become very curious and ask many questions, beginning the use of primitive reasoning. There is an emergence of interest in reasoning and wanting to know why things are the way they are. Piaget called it the "intuitive substage" because children realize they have a vast amount of knowledge but are unaware of how they acquired it. Centration, conservation, irreversibility, class inclusion, and transitive inference are all characteristics of preoperative thought. Centration is the act of focusing all attention on one characteristic or dimension of a situation whilst disregarding all others. Conservation is the awareness that altering a substance's appearance does not change its basic properties. Children at this stage are unaware of conservation and exhibit centration. Both centration and conservation can be more easily understood once one is familiar with Piaget's most famous experimental task. In this task, a child is presented with two identical beakers containing the same amount of liquid. The child usually notes that the beakers do contain the same amount of liquid.
When one of the beakers is poured into a taller and thinner container, children who are younger than seven or eight years old typically say that the two beakers no longer contain the same amount of liquid, and that the taller container holds the larger quantity (centration), without taking into consideration the fact that both beakers were previously noted to contain the same amount of liquid. Due to superficial changes, the child is unable to comprehend that the properties of the substances continue to remain the same (conservation). Irreversibility is a concept developed in this stage which is closely related to the ideas of centration and conservation. Irreversibility refers to when children are unable to mentally reverse a sequence of events. In the same beaker situation, the child does not realize that, if the sequence of events were reversed and the water from the tall beaker were poured back into its original beaker, then the same amount of water would exist. Another example of children's reliance on visual representations is their misunderstanding of "less than" or "more than". When two rows containing equal numbers of blocks are placed in front of a child, one row spread farther apart than the other, the child will think that the row spread farther contains more blocks. Class inclusion refers to a kind of conceptual thinking that children in the preoperational stage cannot yet grasp. Children's inability to focus on two aspects of a situation at once inhibits them from understanding the principle that one category or class can contain several different subcategories or classes. For example, a four-year-old girl may be shown a picture of eight dogs and three cats. The girl knows what cats and dogs are, and she is aware that they are both animals. However, when asked, "Are there more dogs or animals?" she is likely to answer "more dogs". This is due to her difficulty focusing on the two subclasses and the larger class all at the same time. She may have been able to view the dogs as dogs or as animals, but struggled when trying to classify them as both simultaneously. Similar to this is a concept relating to intuitive thought, known as "transitive inference". Transitive inference is using previous knowledge to determine a missing piece, using basic logic. Children in the preoperational stage lack this logic. An example of transitive inference would be when a child is presented with the information "A is greater than B" and "B is greater than C"; the child may have difficulty understanding that "A" is also greater than "C". The concrete operational stage is the third stage of Piaget's theory of cognitive development. This stage, which follows the preoperational stage, occurs between the ages of 7 and 11 (preadolescence) and is characterized by the appropriate use of logic. During this stage, a child's thought processes become more mature and "adult-like". They start solving problems in a more logical fashion. Abstract, hypothetical thinking has not yet developed, and children can only solve problems that apply to concrete events or objects. At this stage, children undergo a transition in which they learn rules such as conservation. Piaget determined that children are able to incorporate inductive reasoning, which involves drawing inferences from observations in order to make a generalization.
In contrast, children struggle with deductive reasoning, which involves using a generalized principle in order to try to predict the outcome of an event. Children in this stage commonly experience difficulties with figuring out logic in their heads. For example, a child will understand that "A is more than B" and "B is more than C"; however, when asked "Is A more than C?", the child might not be able to logically figure the question out in his or her head. Two other important processes in the concrete operational stage are logic and the elimination of egocentrism. Egocentrism is the inability to consider or understand a perspective other than one's own; it is the phase in which the thought and morality of the child are completely self-focused. During this stage, the child acquires the ability to view things from another individual's perspective, even if they think that perspective is incorrect. For instance, show a child a comic in which Jane puts a doll under a box, leaves the room, and then Melissa moves the doll to a drawer, and Jane comes back. A child in the concrete operations stage will say that Jane will still think it is under the box, even though the child knows it is in the drawer. (See also False-belief task.) Children in this stage can, however, only solve problems that apply to actual (concrete) objects or events, and not abstract concepts or hypothetical tasks; a full, adult-like command of common sense has not yet developed. Piaget determined that children in the concrete operational stage were able to incorporate inductive logic. On the other hand, children at this age have difficulty using deductive logic, which involves using a general principle to predict the outcome of a specific event. This includes mental reversibility, for example the ability to reverse the order of relationships between mental categories. For example, a child might be able to recognize that his or her dog is a Labrador, that a Labrador is a dog, and that a dog is an animal, and draw conclusions from the information available, as well as apply all these processes to hypothetical situations. The abstract quality of the adolescent's thought at the formal operational level is evident in the adolescent's verbal problem-solving ability. The logical quality of the adolescent's thought is evident when children no longer solve problems purely in a trial-and-error fashion: adolescents begin to think as a scientist thinks, devising plans to solve problems and systematically testing them. They use hypothetical-deductive reasoning, which means that they develop hypotheses, or best guesses, and systematically deduce, or conclude, the best path to follow in solving the problem. During this stage the adolescent is able to understand love, logical proofs, and values. During this stage the young person begins to entertain possibilities for the future and is fascinated with what they can be. Adolescents are also changing cognitively in the way that they think about social matters. Adolescent egocentrism governs the way that adolescents think about social matters: a heightened self-consciousness, reflected in their sense of personal uniqueness and invincibility. Adolescent egocentrism can be dissected into two types of social thinking: the imaginary audience, which involves attention-getting behavior, and the personal fable, which involves an adolescent's sense of personal uniqueness and invincibility.
These two types of social thinking begin to affect a child 's egocentrism in the concrete stage, but they carry over to the formal operational stage, when adolescents are then faced with abstract thought and fully logical thinking. Piagetian tests are well known and practiced to test for concrete operations. The most prevalent tests are those for conservation. There are some important aspects that the experimenter must take into account when performing experiments with these children. One example of an experiment for testing conservation is the water level task. The experimenter presents two glasses that are the same size, fills them to the same level with liquid, and the child acknowledges that the amounts are the same. Then, the experimenter pours the liquid from one of the small glasses into a tall, thin glass. The experimenter then asks the child whether the taller glass has more liquid, less liquid, or the same amount of liquid. After the child gives an answer, the experimenter asks why the child gave that answer. The final stage is known as the formal operational stage (adolescence and into adulthood, roughly ages 11 to approximately 15 -- 20): intelligence is demonstrated through the logical use of symbols related to abstract concepts. This form of thought includes "assumptions that have no necessary relation to reality. '' At this point, the person is capable of hypothetical and deductive reasoning. During this time, people develop the ability to think about abstract concepts. Piaget stated that "hypothetico - deductive reasoning '' becomes important during the formal operational stage. This type of thinking involves hypothetical "what - if '' situations that are not always rooted in reality, i.e. counterfactual thinking. It is often required in science and mathematics. While children in primary school years mostly used inductive reasoning, drawing general conclusions from personal experiences and specific facts, adolescents become capable of deductive reasoning, in which they draw specific conclusions from abstract concepts using logic. This capability results from their capacity to think hypothetically. "However, research has shown that not all persons in all cultures reach formal operations, and most people do not use formal operations in all aspects of their lives ''. Piaget and his colleagues conducted several experiments to assess formal operational thought. In one of the experiments, Piaget evaluated the cognitive capabilities of children of different ages through the use of a scale and varying weights. The task was to balance the scale by hooking weights on the ends of the scale. To successfully complete the task, the children had to use formal operational thought to realize that both the distance of the weights from the center and the heaviness of the weights affected the balance. A heavier weight has to be placed closer to the center of the scale, and a lighter weight has to be placed farther from the center, so that the two weights balance each other. While 3 - to 5 - year olds could not comprehend the concept of balancing at all, children by the age of 7 could balance the scale by placing the same weights on both ends, but they failed to realize the importance of the location. By age 10, children could think about location but failed to use logic and instead used trial - and - error. 
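The compensating relation at work here is the law of the lever; the passage never states it as a formula, but in standard notation a weight $w_1$ hung at distance $d_1$ from the fulcrum balances a weight $w_2$ hung at distance $d_2$ exactly when

$$w_1 \, d_1 = w_2 \, d_2$$

so, for instance, a 6 - unit weight placed 2 units from the center balances a 4 - unit weight placed 3 units out, since $6 \times 2 = 4 \times 3 = 12$. Grasping that the two factors compensate for one another, rather than attending to weight alone, is precisely what the task is designed to detect. 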
Finally, by age 13 and 14, in early adolescence, some children more clearly understood the relationship between weight and distance and could successfully implement their hypothesis. Piaget sees children 's conception of causation as a march from "primitive '' conceptions of cause to those of a more scientific, rigorous, and mechanical nature. These primitive concepts are characterized as supernatural, with a decidedly non-natural or non-mechanical tone. Piaget has as his most basic assumption that babies are phenomenists. That is, their knowledge "consists of assimilating things to schemas '' from their own action such that they appear, from the child 's point of view, "to have qualities which, in fact, stem from the organism ''. Consequently, these "subjective conceptions, '' so prevalent during Piaget 's first stage of development, are dashed upon discovering deeper empirical truths. Piaget gives the example of a child believing that the moon and stars follow him on a night walk. Upon learning that the same is true for his friends, he must separate himself from the object, resulting in a theory that the moon is immobile, or moves independently of other agents. The second stage, from around three to eight years of age, is characterized by a mix of this type of magical, animistic, or "non-natural '' conceptions of causation and mechanical or "naturalistic '' causation. This conjunction of natural and non-natural causal explanations supposedly stems from experience itself, though Piaget does not make much of an attempt to describe the nature of the differences in conception. In his interviews with children, he asked questions specifically about natural phenomena, such as: "What makes clouds move? '', "What makes the stars move? '', "Why do rivers flow? '' The nature of all the answers given, Piaget says, is such that these objects must perform their actions to "fulfill their obligations towards men ''. He calls this "moral explanation ''. Parents can use Piaget 's theory when deciding what to buy in order to support their child 's growth. Teachers can also use Piaget 's theory, for instance, when discussing whether the syllabus subjects are suitable for the level of students or not. For example, recent studies have shown that children in the same grade and of the same age perform differentially on tasks measuring basic addition and subtraction fluency. While children in the preoperational and concrete operational levels of cognitive development perform combined arithmetic operations (such as addition and subtraction) with similar accuracy, children in the concrete operational level have been able to perform both addition problems and subtraction problems with overall greater fluency. The stage of cognitive growth differs from one person to another, and it affects and influences how someone thinks about everything, including flowers. To a 7 - month - old infant in the sensorimotor stage, flowers are recognized by smelling, pulling and biting. A slightly older child may not have realized that a flower is not fragrant, but, like many children at her age, her egocentric, two - handed curiosity will teach her. In the formal operational stage of an adult, flowers are part of a larger, logical scheme. They are used either to earn money or to create beauty. Cognitive development, or thinking, is an active process from the beginning to the end of life. Intellectual advancement happens because people at every age and developmental period look for cognitive equilibrium. 
To achieve this balance, the easiest way is to understand new experiences through the lens of preexisting ideas. Infants learn that new objects can be grabbed in the same way as familiar objects, and adults explain the day 's headlines as evidence for their existing worldview. However, the application of standardized Piagetian theory and procedures in different societies established widely varying results that led some to speculate not only that some cultures produce more cognitive development than others, but also that without specific kinds of cultural experience, including formal schooling, development might cease at a certain level, such as the concrete operational level. A procedure was done following methods developed in Geneva (i.e. the water level task). Participants were presented with two beakers of equal circumference and height, filled with equal amounts of water. The water from one beaker was transferred into another that was taller and of smaller circumference. The children and young adults from non-literate societies of a given age were more likely to think that the taller, thinner beaker had more water in it. On the other hand, an experiment on the effects of modifying testing procedures to match the local culture produced a different pattern of results. In the revised procedures, the participants explained in their own language and indicated that while the water was now "more '', the quantity was the same. Piaget 's water level task has also been applied to the elderly by Formann, and the results showed an age - associated non-linear decline of performance. In 1967, Piaget considered the possibility of RNA molecules as likely embodiments of his still - abstract schemas (which he promoted as units of action) -- though he did not come to any firm conclusion. At that time, due to work such as that of Swedish biochemist Holger Hydén, RNA concentrations had, indeed, been shown to correlate with learning, so the idea was quite plausible. However, by the time of Piaget 's death in 1980, this notion had lost favor. One main problem concerned the protein which, it was assumed, such RNA would necessarily produce, and which did not fit in with observation. It was then determined that only about 3 % of RNA codes for protein. Hence, most of the remaining 97 % (the "ncRNA '') could theoretically be available to serve as Piagetian schemas (or other regulatory roles under investigation in the 2000s). The issue has not yet been resolved experimentally, but its theoretical aspects were reviewed in 2008 -- then developed further from the viewpoints of biophysics and epistemology. Meanwhile, this RNA - based approach also unexpectedly offered explanations for several other unresolved biological issues, thus providing some measure of corroboration. Piaget designed a number of tasks to verify hypotheses arising from his theory. The tasks were not intended to measure individual differences, and they have no equivalent in psychometric intelligence tests. Notwithstanding the different research traditions in which psychometric tests and Piagetian tasks were developed, the correlations between the two types of measures have been found to be consistently positive and generally moderate in magnitude. A common general factor underlies them. It has been shown that it is possible to construct a battery consisting of Piagetian tasks that is as good a measure of general intelligence as standard IQ tests. Piagetian accounts of development have been challenged on several grounds. 
First, as Piaget himself noted, development does not always progress in the smooth manner his theory seems to predict. Décalage, the uneven progression of cognitive development across specific domains, suggests that the stage model is, at best, a useful approximation. Furthermore, studies have found that children may be able to learn, with relative ease, concepts and forms of complex reasoning supposedly characteristic of more advanced stages (Lourenço & Machado, 1996, p. 145). More broadly, Piaget 's theory is "domain general, '' predicting that cognitive maturation occurs concurrently across different domains of knowledge (such as mathematics, logic, and understanding of physics or language). Piaget did not take into account variability in a child 's performance, notably how a child can differ in sophistication across several domains. During the 1980s and 1990s, cognitive developmentalists were influenced by "neo-nativist '' and evolutionary psychology ideas. These ideas de-emphasized domain general theories and emphasized domain specificity or modularity of mind. Modularity implies that different cognitive faculties may be largely independent of one another, and thus develop according to quite different timetables, which are "influenced by real world experiences ''. In this vein, some cognitive developmentalists argued that, rather than being domain general learners, children come equipped with domain specific theories, sometimes referred to as "core knowledge, '' which allows them to break into learning within that domain. For example, even young infants appear to be sensitive to some predictable regularities in the movement and interactions of objects (for example, an object can not pass through another object), or in human behavior (for example, a hand repeatedly reaching for an object has that object, not just a particular path of motion, as its goal), and this sensitivity becomes the building block from which more elaborate knowledge is constructed. Piaget 's theory has been said to undervalue the influence that culture has on cognitive development. Piaget demonstrates that a child goes through several stages of cognitive development and comes to conclusions on their own, but in reality, a child 's sociocultural environment plays an important part in their cognitive development. Social interaction teaches the child about the world and helps them develop through the cognitive stages, which Piaget neglected to consider. More recent work has strongly challenged some of the basic presumptions of the "core knowledge '' school, and revised ideas of domain generality -- but from a newer dynamic systems approach, not from a revised Piagetian perspective. Dynamic systems approaches harken to modern neuroscientific research that was not available to Piaget when he was constructing his theory. One important finding is that domain - specific knowledge is constructed as children develop and integrate knowledge. This enables the domain to improve the accuracy of the knowledge as well as the organization of memories. However, this suggests more of a "smooth integration '' of learning and development than either Piaget, or his neo-nativist critics, had envisioned. Additionally, some psychologists, such as Lev Vygotsky and Jerome Bruner, thought differently from Piaget, suggesting that language was more important for cognitive development than Piaget implied. 
In recent years, several theorists have attempted to address concerns with Piaget 's theory by developing new theories and models that can accommodate evidence that violates Piagetian predictions and postulates.
write the name of two types of nerve processes
Nervous tissue - wikipedia Nervous tissue or nerve tissue is the main tissue component of the two parts of the nervous system; the brain and spinal cord of the central nervous system (CNS), and the branching peripheral nerves of the peripheral nervous system (PNS), which regulates and controls bodily functions and activity. It is composed of neurons, or nerve cells, which receive and transmit impulses, and neuroglia, also known as glial cells or more commonly as just glia (from the Greek, meaning glue), which assist the propagation of the nerve impulse as well as providing nutrients to the neuron. Nervous tissue is made up of different types of nerve cells, all of which have an axon, the long stem - like part of the cell that sends action potential signals to the next cell. Bundles of axons make up the nerves. Functions of the nervous system are sensory input, integration, control of muscles and glands, homeostasis, and mental activity. Nervous tissue is composed of neurons, also called nerve cells, and neuroglial cells. Typically, nervous tissue is categorized into four types of tissue. In the central nervous system (CNS), the tissue types found are grey matter and white matter. In the peripheral nervous system (PNS), the tissue types are nerves and ganglia. The tissue is categorized by its neuronal and neuroglial components. Neurons are cells with specialized features that allow them to receive and facilitate nerve impulses, or action potentials, across their membrane to the next neuron. They possess a large cell body (soma), with cell projections called dendrites and an axon. Dendrites are thin, branching projections that receive electrochemical signaling (neurotransmitters) to create a change in voltage in the cell. Axons are long projections that carry the action potential away from the cell body toward the next neuron. The bulb - like end of the axon, called the axon terminal, is separated from the dendrite of the following neuron by a small gap called a synaptic cleft. When the action potential travels to the axon terminal, neurotransmitters are released across the synapse and bind to the post-synaptic receptors, continuing the nerve impulse. Neurons are classified both functionally and structurally: functionally, into sensory (afferent) neurons, motor (efferent) neurons, and interneurons; structurally, by the number of processes extending from the cell body, into multipolar, bipolar, and unipolar (pseudounipolar) neurons. Neuroglia encompasses the non-neural cells in nervous tissue that provide various crucial supportive functions for neurons. They are smaller than neurons, and vary in structure according to their function. Neuroglial cells are classified by location: in the central nervous system they include astrocytes, oligodendrocytes, microglia, and ependymal cells, while in the peripheral nervous system they include Schwann cells and satellite cells. The three layers of connective tissue surrounding each nerve are the endoneurium, the perineurium, and the epineurium. The function of nervous tissue is to form the communication network of the nervous system by conducting electric signals across tissue. In the CNS, grey matter, which contains the synapses, is important for information processing. White matter, containing myelinated axons, connects grey matter areas in the CNS and facilitates nerve impulses between them. In the PNS, the ganglion tissue, containing the cell bodies and dendrites, contains relay points for nerve tissue impulses. The nerve tissue, containing bundles of myelinated axons, carries action potentials (nerve impulses). Neoplasms (tumours) can also arise in nervous tissue.
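The signal path described above -- dendrites receive input, the soma integrates it, and, once a threshold is crossed, the axon fires across the synapse to the next cell -- can be caricatured in a few lines of code. The sketch below is purely illustrative: a toy threshold model with made - up class names and numbers, not a biophysical simulation and not anything taken from the article.

class ToyNeuron:
    """A toy threshold unit mimicking the dendrite -> soma -> axon path."""

    def __init__(self, threshold, next_neuron=None):
        self.threshold = threshold      # depolarization needed to fire (arbitrary units)
        self.next_neuron = next_neuron  # the cell across the synaptic cleft, if any

    def receive(self, inputs):
        # Dendrites collect incoming signals; the soma sums them.
        if sum(inputs) >= self.threshold:
            self.fire()

    def fire(self):
        # Axon terminal: pass a (weakened) signal across the synapse.
        print("action potential")
        if self.next_neuron is not None:
            self.next_neuron.receive([0.6])

downstream = ToyNeuron(threshold=0.5)
upstream = ToyNeuron(threshold=1.0, next_neuron=downstream)
upstream.receive([0.4, 0.7])  # summed input of 1.1 crosses the threshold, so both cells fire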
when did the white sox win their last world series
2005 World Series - wikipedia The 2005 World Series was the 101st edition of Major League Baseball 's championship series, a best - of - seven playoff between the American League (AL) champions Chicago White Sox and the National League (NL) champions Houston Astros. The White Sox swept the Astros four games to none in the series, played between October 22 and 26, winning their third World Series championship and their first in 88 seasons. Although the series was a sweep, all four games were quite close, being decided by two runs or fewer. Home - field advantage was awarded to Chicago by virtue of the American League 's 7 -- 5 victory over the National League in the Major League Baseball All - Star Game. The Astros were attempting to become the fourth consecutive wild card team to win the Series, following the Anaheim Angels (2002), Florida Marlins (2003) and Boston Red Sox (2004). Both teams were attempting to overcome decades of disappointment, with a combined 132 years between the two teams without a title. The Astros were making their first Series appearance in 44 years of play, while the White Sox had waited exactly twice as long for a title, having last won the Series in 1917, and had not been in the Series since 1959, three years before the Astros ' inaugural season. Like the 1982 World Series between the St. Louis Cardinals and the Milwaukee Brewers, the 2005 World Series is one of only two World Series in the modern era (1903 -- present) with no possibility for a rematch between the two opponents, because the Astros moved to the American League in 2013. However, the Brewers did meet the Cardinals in the 2011 National League Championship Series. The Chicago White Sox finished the regular season with the best record in the American League at 99 -- 63. After starting the season on a tear, the White Sox began to fade in August, when a 15 - game lead fell all the way to 1 1⁄2 games. However, the Sox were able to hold off the Cleveland Indians to win the American League Central Division by six games, sweeping Cleveland in three games on the season 's final weekend. In the Division Series, the White Sox swept the defending champion Boston Red Sox. The League Championship Series began with the Los Angeles Angels of Anaheim winning Game 1, but a controversial uncaught third strike in Game 2 helped the Sox start a run and win Games 2 -- 5, all on complete games pitched by starters Mark Buehrle, Jon Garland, Freddy García, and José Contreras, clinching their first American League pennant in 46 years. Manager Ozzie Guillén then led the White Sox to a World Series victory, their first in 88 years. Slugger Frank Thomas was not on the post-season roster because he was injured, but the team honored his perennial contributions to the franchise during Game 1 of the Division Series against the Boston Red Sox when he was chosen to throw out the ceremonial first pitch. "What a feeling, '' Thomas said. "Standing O all around the place. People really cheering me. I had tears in my eyes. To really know the fans cared that much about me -- it was a great feeling. One of my proudest moments in the game. '' The Houston Astros won the Wild Card for the second straight year, once again clinching it on the final day of the season. The Astros embarked on a memorable Division Series rematch against the Atlanta Braves. With the Astros in the lead two games to one, the teams played an eighteen - inning marathon in Game 4, which was the longest (in both time and innings played) postseason game in history. 
In this game, Roger Clemens made only the second relief appearance of his career, and the first in postseason play. Chris Burke 's walk - off home run ended the game in the bottom of the eighteenth. For the second straight year, the Astros played the St. Louis Cardinals in the League Championship Series. Like the White Sox, the Astros dropped Game 1, but were able to regroup and win Games 2 -- 4. With the Astros on the verge of clinching their first ever National League pennant in Game 5, Albert Pujols hit a mammoth three - run home run off Brad Lidge in the top of the ninth inning to take the lead, and subsequently stave off elimination. However, behind NLCS MVP Roy Oswalt, the Astros were able to defeat the Cards 5 -- 1 in Game 6 and earned a trip to the World Series. This was the Astros ' first World Series appearance in franchise history. Playing in their first World Series home game since 1959, the White Sox took an early lead with a home run from Jermaine Dye in the first inning. After Mike Lamb 's home run tied the game in the second, the Sox scored two more in the second when Juan Uribe doubled in A.J. Pierzynski after Carl Everett had already scored on a ground - out earlier in the inning. The Astros responded in the next inning when Lance Berkman hit a double, driving in Adam Everett and Craig Biggio. In the White Sox half of the fourth, Joe Crede hit what turned out to be the game - winning home run. In the bottom of the eighth, Scott Podsednik hit a triple with Pierzynski on second off of Russ Springer for an insurance run. Roger Clemens recorded his shortest World Series start, leaving after the second inning with 53 pitches, including 35 for strikes, due to a sore hamstring that he had previously injured (which had also caused him to miss his last regular season start); the loss went to Wandy Rodríguez. José Contreras pitched seven innings, allowing three runs on six hits, for the win. Before exiting, Contreras allowed a leadoff double to Willy Taveras with no outs. Neal Cotts entered the game in the top of the eighth inning. It marked the first time in five games that the White Sox had gone to their bullpen. Cotts pitched 2⁄3 of an inning before Bobby Jenks was called upon by manager Ozzie Guillén to relieve him. Guillén signaled for the large pitcher by holding his arms out wide and then up high. In the postgame conference, the Sox manager joked that he wanted to be clear he was asking for "The Big Boy. '' Jenks returned in the ninth to earn the save, giving the White Sox a 1 -- 0 lead in the series. 
In the bottom of the ninth, Astros closer Brad Lidge gave up a one - out, walk - off home run -- the 14th in Series history -- to Scott Podsednik, giving Lidge his second loss in as many post-season appearances (his previous appearance was in Game 5 of the 2005 National League Championship Series). Podsednik had not hit a single homer in the regular season, but this was his second of the post-season. This was the second time in World Series history in which a grand slam and a walk - off home run were hit in the same game. The Oakland A 's Jose Canseco (grand slam) and the Los Angeles Dodgers ' Kirk Gibson (walk - off) in Game 1 of the 1988 World Series were the first to do it. Never before had a World Series grand slam and a World Series walk - off home run been hit by the same team in the same game. Game 3 was the first World Series game played in the state of Texas. Before the game, it was ruled by Commissioner Bud Selig that the retractable roof would be open at Minute Maid Park, weather permitting. The Astros objected, citing that their record in games with the roof closed was better than with the retractable roof open. Selig 's office claimed that the ruling was based on the rules established by Houston, was consistent with how the Astros organization had treated the situation all year long, and took into account the weather forecasts for that period of time. The game would become the longest World Series game in length of time (5 hours and 41 minutes) and tied for the longest in number of innings (14, tied with Game 2 of the 1916 World Series and Game 1 of the 2015 World Series). Houston struck early on a Lance Berkman single after a Craig Biggio lead - off double in the bottom of the first off Chicago starter Jon Garland. A White Sox rally was snuffed in the second inning; after Paul Konerko hit a leadoff double and A.J. Pierzynski walked, Aaron Rowand lined out into a double play. Houston scored in the third; Adam Everett walked, was caught in a rundown and got hit by the ball on a Juan Uribe throwing error, then scored on a Roy Oswalt sacrifice bunt and a Biggio single. Two batters later, Morgan Ensberg singled Biggio home. Jason Lane led off the Astros ' fourth with a home run to left - center field. Replays later showed that the ball should not have been ruled a home run, as it hit to the left of the yellow line on the unusual wall in left - center field. The White Sox rallied in the top of the fifth, true to their "Win Or Die Trying '' mantra of 2005, starting with a Joe Crede lead - off homer. Uribe, on first after hitting a single, scored on a Tadahito Iguchi base hit with one out, followed by Scott Podsednik coming home on a single by Jermaine Dye. Pierzynski hit a two - out double to Tal 's Hill, driving in two runs, scoring Iguchi and Dye, and giving the Sox the lead. The Astros rallied in the last of the eighth with two outs when Lane 's double scored Ensberg with the tying run after back - to - back walks by Ensberg and Mike Lamb, giving Dustin Hermanson a blown save. Houston tried to rally to win in the ninth, but stranded Chris Burke at third, after he had walked, reached second on an error and stolen third. The Astros tried again in the 10th as well as in the 11th, but failed each time. In the top of the 14th, after the Sox hit into a spectacular double play started by Ensberg, Geoff Blum (a former Astro and the Astros ' television color analyst as of 2015), who had entered the game in the 13th, homered to right with two outs off Ezequiel Astacio. 
Infield singles by Rowand and Crede were followed by walks to Uribe and Chris Widger. Trailing 7 -- 5, Houston tried to rally with the tying runs on first and third and two outs after a Uribe error. Game 2 starter Mark Buehrle earned the save for winning pitcher Dámaso Marte when Everett popped out, bringing Chicago one game closer to its first championship in 88 years. Buehrle became the first pitcher to start a game in the Series and save the next one since Bob Turley of the Yankees in the 1958 World Series. Many records were set or tied besides time and innings: The teams combined to use 17 pitchers (nine for the White Sox, eight for the Astros), throwing a total of 482 pitches, and walking 21 batters (a dozen by Chicago, nine by Houston); 43 players were used (the White Sox used 22 and the Astros used 21), and 30 men were left on base (15 for each team), all new high - water marks in Fall Classic history. Scott Podsednik set a new all - time record with eight official at - bats in this game. One tied record was total double plays, with six (four by the Astros, two by the White Sox). Before the game, Major League Baseball unveiled its Latino Legends Team. The fourth game was the pitchers ' duel that had been promised throughout the series. Both Houston starter Brandon Backe and Chicago starter Freddy García put zeros on the scoreboard through seven innings, the longest Series scoreless stretch since Game 7 of the 1991 World Series. Scott Podsednik had a two - out triple in the top of the third, but a Tadahito Iguchi groundout ended that threat. The Astros wasted a chance in the sixth, Jason Lane striking out with the bases loaded. The White Sox in the top of the seventh put runners at second and third, but shortstop Juan Uribe struck out to end the inning. Chicago broke through in the next inning against embattled Houston closer Brad Lidge. Willie Harris hit a pinch - hit single. Podsednik advanced him with a sacrifice bunt. Carl Everett pinch - hit for Iguchi and grounded out to the right side to allow Harris to move to third. Jermaine Dye, the Most Valuable Player of the series, had the game - winning single, driving in Harris. Things got a little sticky for the Sox in the Astros half of the eighth when reliever Cliff Politte hit Willy Taveras, threw a wild pitch, sending Taveras to second, and walked Lance Berkman. After Morgan Ensberg flew out to center, the White Sox manager Ozzie Guillén brought in Neal Cotts to finish the inning. Cotts induced pinch - hitter José Vizcaíno into a ground out to Uribe. Bobby Jenks, the 24 - year - old fireballer, started the ninth inning. He allowed a single to Jason Lane and a sacrifice bunt to Brad Ausmus. Chris Burke came in to pinch - hit; he fouled one off to the left side, but Uribe made an amazing catch in the stands to retire Burke. The game ended when Orlando Palmeiro grounded to Uribe. It was a bang - bang play as Paul Konerko caught the ball from Uribe at 11: 01 pm CDT to begin the biggest celebration in Chicago since the sixth NBA championship by the Bulls, co-owned with the White Sox, in 1998. As a result, Jerry Reinsdorf, owner of both teams, had won seven championships overall. This game would be the last playoff game for the Astros as a member of the NL, as they would move to the AL in 2013, and not appear in a playoff game until the 2015 American League Wild Card Game. The last two Series games technically ended on the same day, Game 3 having concluded after midnight, Houston time. 
The 1 -- 0 shutout was the first game with a total of one run scored to end a World Series since the 1995 World Series, in which Game 6 was won by the Atlanta Braves over the Cleveland Indians, and the first 1 -- 0 game in any Series since Game 5 of the 1996 World Series, when the New York Yankees shut out the Braves in the last game ever played at Atlanta -- Fulton County Stadium. The 2005 White Sox joined the 1995 Atlanta Braves and 1999 New York Yankees as the only teams to win a World Series after losing no more than one game combined in the Division Series and Championship Series. This was the second consecutive World Series to be won by a team that has the word "Sox '' in its nickname, after the Boston Red Sox won the 2004 World Series against the St. Louis Cardinals. This also happened in 1917 and 1918. Furthermore, it was the second year in a row in which the Series champions broke a long - lived "curse. '' In one of those ways that patterns appear to emerge in sporting events, the White Sox World Series win in 2005, along with the Boston Red Sox win in 2004, symmetrically bookended the two teams ' previous World Series wins and the long gaps between them, the Red Sox and White Sox having last won the Series in 1918 and 1917, respectively. Series result -- 2005 World Series (4 -- 0): Chicago White Sox (A.L.) over Houston Astros (N.L.). As per their contract, Fox Sports carried the World Series on United States television. Joe Buck provided play - by - play for his eighth World Series while analyst Tim McCarver worked his sixteenth. ESPN Radio was the nationwide radio broadcaster, as it had been since 1998. Jon Miller and Joe Morgan provided the play - by - play and analysis. Locally, KTRH - AM and WMVP were the primary carriers for the World Series in the Houston and Chicago markets. For KTRH, long - time Astros voice Milo Hamilton provided play - by - play, while John Rooney called the games for the White Sox. Game 4 was Rooney 's last call after seventeen years as the radio voice of the White Sox, as he left to take the same position with the St. Louis Cardinals. The Cardinals proceeded to win the 2006 World Series, making Rooney the first home announcer to call back - to - back World Series wins for two different teams. That the teams were in two different leagues makes the feat even more unusual. The ratings for the 2005 World Series were considered weak. With an overall average of 11.1, 2005 set a record for the lowest - rated World Series of all time. The prior lowest was 11.9, set by the 2002 World Series between the San Francisco Giants and the Anaheim Angels (importantly, that series went 7 games, while the 2005 Series went 4). Following the 2005 World Series, however, every subsequent Series through 2013 except for 2009 produced lower ratings. The record - low 2012 World Series, another four - game sweep, averaged 7.6 (3.5 points lower than 2005 's rating) and 12.7 million viewers (4.4 million fewer viewers than 2005). Neither team advanced to the post-season in 2006, but the 2006 World Series again featured teams from the American League Central and National League Central divisions, this time represented by the Detroit Tigers and the St. Louis Cardinals, respectively. The Cardinals won the World Series in five games, in which manager Tony La Russa became the second manager to win the World Series in both the American and National leagues, having previously managed the Oakland Athletics to the 1989 World Series championship. 
Both the White Sox and the Astros were in the Wild Card race until the final weeks of the season, with the White Sox finishing with 90 wins and the Astros with 82 wins. The White Sox made their first post-2005 playoff appearance in 2008, while the Astros would not return to the postseason until 2015, their third season as an American League team, and would not return to the World Series until 2017, their fifth season as an American League team. This was the city of Chicago 's first professional sports championship since the Chicago Fire won MLS Cup ' 98 (which came four months after the Chicago Bulls ' sixth NBA championship that year). The next major Chicago sports championship came in 2010, when the NHL 's Chicago Blackhawks ended a 49 - year Stanley Cup title drought. With the Chicago Bears ' win in Super Bowl XX and the Chicago Cubs ' own World Series championship in 2016, all Chicago sports teams have won at least one major championship since 1985. Meanwhile, the Astros themselves made it back to the World Series in 2017, but this time as an AL team, and defeated the Los Angeles Dodgers in seven games, resulting in Houston 's first professional sports championship since the 2006 -- 07 Houston Dynamo won their back - to - back MLS championships.
the impact of poverty on business operations wikipedia
Poverty - wikipedia Poverty is the scarcity or the lack of a certain (variant) amount of material possessions or money. Poverty is a multifaceted concept, which may include social, economic, and political elements. Absolute poverty, extreme poverty, or destitution refers to the complete lack of the means necessary to meet basic personal needs such as food, clothing and shelter. The threshold at which absolute poverty is defined is considered to be about the same, independent of the person 's permanent location or era. On the other hand, relative poverty occurs when a person who lives in a given country does not enjoy a certain minimum level of "living standards '' as compared to the rest of the population of that country. Therefore, the threshold at which relative poverty is defined varies from one country to another, or from one society to another. Providing basic needs can be restricted by constraints on a government 's ability to deliver services, such as corruption, tax avoidance, debt and loan conditionalities, and by the brain drain of health care and educational professionals. Strategies of increasing income to make basic needs more affordable typically include welfare, economic freedoms and providing financial services. Poverty reduction is still a major issue (or a target) for many international organizations such as the United Nations and the World Bank. In 2012 it was estimated that, using a poverty line of $1.25 a day, 1.2 billion people lived in poverty. Given the current economic model, built on GDP, it would take 100 years to bring the world 's poorest up to the poverty line of $1.25 a day. UNICEF estimates half the world 's children (or 1.1 billion) live in poverty. The World Bank forecasted in 2015 that 702.1 million people were living in extreme poverty, down from 1.75 billion in 1990. Extreme poverty is observed in all parts of the world, including developed economies. Of those in extreme poverty in 2015, about 347.1 million lived in Sub-Saharan Africa (35.2 % of that region 's population) and 231.3 million lived in South Asia (13.5 %). According to the World Bank, between 1990 and 2015, the percentage of the world 's population living in extreme poverty fell from 37.1 % to 9.6 %, falling below 10 % for the first time. In public opinion surveys around the world, people tend to think, incorrectly, that extreme poverty has not decreased. There is disagreement amongst experts as to what would be considered a realistic poverty rate, with one considering it "an inaccurately measured and arbitrary cut off ''. One estimate places the true scale of poverty much higher than the World Bank 's, with an estimated 4.3 billion people (59 % of the world 's population) living on less than $5 a day and unable to meet basic needs adequately. It has been argued by some academics that the neoliberal policies promoted by global financial institutions such as the IMF and the World Bank are actually exacerbating both inequality and poverty. The word poverty comes from the old French word poverté (Modern French: pauvreté), from Latin paupertās, from pauper (poor); the English word "poverty '' came via Anglo - Norman povert. There are several definitions of poverty depending on the context of the situation it is placed in, and the views of the person giving the definition. Income poverty: a family 's income fails to meet a federally established threshold; this threshold differs across countries. 
United Nations: Fundamentally, poverty is the inability of having choices and opportunities, a violation of human dignity. It means lack of basic capacity to participate effectively in society. It means not having enough to feed and clothe a family, not having a school or clinic to go to, not having the land on which to grow one 's food or a job to earn one 's living, not having access to credit. It means insecurity, powerlessness and exclusion of individuals, households and communities. It means susceptibility to violence, and it often implies living in marginal or fragile environments, without access to clean water or sanitation. World Bank: Poverty is pronounced deprivation in well - being, and comprises many dimensions. It includes low incomes and the inability to acquire the basic goods and services necessary for survival with dignity. Poverty also encompasses low levels of health and education, poor access to clean water and sanitation, inadequate physical security, lack of voice, and insufficient capacity and opportunity to better one 's life. Poverty is usually measured as either absolute or relative (the latter being actually an index of income inequality). In the United Kingdom, the second Cameron ministry came under attack for its redefinition of poverty; poverty is no longer classified by a family 's income, but by whether a family is in work or not. Considering that two - thirds of people who found work were accepting wages that are below the living wage (according to the Joseph Rowntree Foundation), this has been criticised by anti-poverty campaigners as an unrealistic view of poverty in the United Kingdom. Absolute poverty refers to a set standard which is consistent over time and between countries. First introduced in 1990, the dollar - a - day poverty line measured absolute poverty by the standards of the world 's poorest countries. The World Bank defined the new international poverty line as $1.25 a day in 2008 for 2005 (equivalent to $1.00 a day in 1996 US prices). In October 2015, they reset it to $1.90 a day. Absolute poverty, extreme poverty, or abject poverty is "a condition characterized by severe deprivation of basic human needs, including food, safe drinking water, sanitation facilities, health, shelter, education and information. It depends not only on income but also on access to services. '' The term ' absolute poverty ', when used in this fashion, is usually synonymous with ' extreme poverty ': Robert McNamara, the former president of the World Bank, described absolute or extreme poverty as "a condition so limited by malnutrition, illiteracy, disease, squalid surroundings, high infant mortality, and low life expectancy as to be beneath any reasonable definition of human decency. '' Australia is one of the world 's wealthier nations. In his article published in Australian Policy Online, Robert Tanton notes that, "While this amount is appropriate for third world countries, in Australia, the amount required to meet these basic needs will naturally be much higher because prices of these basic necessities are higher. '' However, as the amount of wealth required for survival is not the same in all places and time periods, particularly in highly developed countries where few people would fall below the World Bank Group 's poverty lines, countries often develop their own national poverty lines. An absolute poverty line was calculated in Australia for the Henderson poverty inquiry in 1973. 
It was $62.70 a week, which was the disposable income required to support the basic needs of a family of two adults and two dependent children at the time. This poverty line has been updated regularly by the Melbourne Institute according to increases in average incomes; for a single employed person it was $391.85 per week (including housing costs) in March 2009. In Australia, the OECD poverty line would equate to a "disposable income of less than $358 per week for a single adult (higher for larger households to take account of their greater costs) ''. In 2015, Australia implemented the Individual Deprivation Measure, which addresses gender disparities in poverty. For a few years starting in 1990, the World Bank anchored the absolute poverty line at $1 per day. This was revised in 1993, and through 2005 absolute poverty was $1.08 a day for all countries on a purchasing power parity basis, after adjusting for inflation to the 1993 U.S. dollar. In 2005, after extensive studies of the cost of living across the world, the World Bank raised the measure for the global poverty line to reflect the observed higher cost of living. In 2015, the World Bank redefined extreme poverty as living on less than US $1.90 (PPP) per day, and moderate poverty as less than $2 or $5 a day (but note that a person or family with access to subsistence resources, e.g., subsistence farmers, may have a low cash income without a correspondingly low standard of living -- they are not living "on '' their cash income but using it as a top up). It estimated that "in 2001, 1.1 billion people had consumption levels below $1 a day and 2.7 billion lived on less than $2 a day. '' A ' dollar a day ', in nations that do not use the U.S. dollar as currency, does not translate to living a day on the equivalent amount of local currency as determined by the exchange rate. Rather, it is determined by the purchasing power parity rate, which would look at how much local currency is needed to buy the same things that a dollar could buy in the United States. Usually, this would translate to less local currency than the exchange rate in poorer countries, as the United States is a relatively more expensive country. The poverty line threshold of $1.90 per day, as set by the World Bank, is controversial. Each nation has its own threshold for its absolute poverty line; in the United States, for example, the absolute poverty line was US $15.15 per day in 2010 (US $22,000 per year for a family of four), while in India it was US $1.0 per day and in China the absolute poverty line was US $0.55 per day, each on a PPP basis in 2010. These different poverty lines make data comparison between each nation 's official reports qualitatively difficult. Some scholars argue that the World Bank method sets the bar too high; others argue it sets it too low. Still others suggest that the poverty line misleads, as it measures everyone below the poverty line the same, when in reality someone living on $1.20 per day is in a different state of poverty than someone living on $0.20 per day. In other words, the depth and intensity of poverty varies across the world and within any regional population, and the $1.25 - per - day poverty line and head counts are inadequate measures. The share of the world 's population living in absolute poverty fell from 43 % in 1981 to 14 % in 2011. The absolute number of people in poverty fell from 1.95 billion in 1981 to 1.01 billion in 2011. The economist Max Roser estimates that the number of people in poverty is therefore roughly the same as 200 years ago. 
This is the case since the world population was just a little more than 1 billion in 1820 and the majority (84 % to 94 %) of the world population was living in poverty. The proportion of the developing world 's population living in extreme economic poverty fell from 28 percent in 1990 to 21 percent in 2001. Most of this improvement has occurred in East and South Asia. In East Asia the World Bank reported that "The poverty headcount rate at the $2 - a-day level is estimated to have fallen to about 27 percent (in 2007), down from 29.5 percent in 2006 and 69 percent in 1990. '' In Sub-Saharan Africa extreme poverty went up from 41 percent in 1981 to 46 percent in 2001, which, combined with a growing population, increased the number of people living in extreme poverty from 231 million to 318 million. In the early 1990s some of the transition economies of Central and Eastern Europe and Central Asia experienced a sharp drop in income. The collapse of the Soviet Union resulted in large declines in GDP per capita, of about 30 to 35 % between 1990 and the trough year of 1998. As a result, poverty rates also increased, although in subsequent years, as per capita incomes recovered, the poverty rate dropped from 31.4 % of the population to 19.6 %. World Bank data shows that the percentage of the population living in households with consumption or income per person below the poverty line has decreased in each region of the world since 1990. According to Chen and Ravallion, about 1.76 billion people in the developing world lived above $1.25 per day and 1.9 billion people lived below $1.25 per day in 1981. The world 's population increased over the next 25 years. In 2005, about 4.09 billion people in the developing world lived above $1.25 per day and 1.4 billion people lived below $1.25 per day (both 1981 and 2005 data are on an inflation - adjusted basis). Some scholars caution that these trends are subject to various assumptions and are not certain. Additionally, they note that the poverty reduction is not uniform across the world; economically prospering countries such as China, India and Brazil have made more progress in absolute poverty reduction than countries in other regions of the world. The absolute poverty measure trends noted above are supported by human development indicators, which have also been improving. Life expectancy has greatly increased in the developing world since World War II and is starting to close the gap to the developed world. Child mortality has decreased in every developing region of the world. The proportion of the world 's population living in countries where per - capita food supplies are less than 2,200 calories (9,200 kilojoules) per day decreased from 56 % in the mid-1960s to below 10 % by the 1990s. Similar trends can be observed for literacy, access to clean water and electricity and basic consumer items. Secondary poverty refers to those who would be living above the poverty line but spend their income on unnecessary pleasures, such as alcoholic beverages, thus placing them below it. In 18th - and 19th - century Great Britain, the practice of temperance among Methodists, as well as their rejection of gambling, allowed them to eliminate secondary poverty and accumulate capital. Relative poverty views poverty as socially defined and dependent on social context; hence relative poverty is a measure of income inequality. Usually, relative poverty is measured as the percentage of the population with income less than some fixed proportion of median income. 
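Because that definition is purely arithmetical, it is easy to make concrete. The short Python sketch below is illustrative only: the 60 % fraction mirrors the EU convention discussed later in this article, and the incomes are made - up sample data rather than real statistics.

def relative_poverty_rate(incomes, fraction=0.6):
    # Relative poverty line: a fixed fraction of the median income.
    ordered = sorted(incomes)
    n = len(ordered)
    mid = n // 2
    median = ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2
    line = fraction * median
    # Rate: the share of people whose income falls below that line.
    return sum(1 for x in incomes if x < line) / n

sample = [12000, 18000, 25000, 30000, 34000, 41000, 55000, 90000]
print(relative_poverty_rate(sample))  # 0.25 -- two of the eight fall below 60 % of the median (19200)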
There are several other income inequality metrics, for example the Gini coefficient or the Theil Index. Relative poverty is the "most useful measure for ascertaining poverty rates in wealthy developed nations ''. The relative poverty measure is used by the United Nations Development Program (UNDP), the United Nations Children 's Fund (UNICEF), the Organisation for Economic Co-operation and Development (OECD) and Canadian poverty researchers. In the European Union, the "relative poverty measure is the most prominent and most -- quoted of the EU social inclusion indicators ''. "Relative poverty reflects better the cost of social inclusion and equality of opportunity in a specific time and space. '' "Once economic development has progressed beyond a certain minimum level, the rub of the poverty problem -- from the point of view of both the poor individual and of the societies in which they live -- is not so much the effects of poverty in any absolute form but the effects of the contrast, daily perceived, between the lives of the poor and the lives of those around them. For practical purposes, the problem of poverty in the industrialized nations today is a problem of relative poverty (page 9). '' In 1776 Adam Smith in the Wealth of Nations argued that poverty is the inability to afford "not only the commodities which are indispensably necessary for the support of life but whatever the custom of the country renders it indecent for creditable people, even of the lowest order, to be without ''. In 1958 J.K. Galbraith argued that "People are poverty stricken when their income, even if adequate for survival, falls markedly behind that of their community. '' In 1964, in a Joint Economic Committee report on the President 's Economic Report in the United States, Republicans endorsed the concept of relative poverty: "No objective definition of poverty exists... The definition varies from place to place and time to time. In America as our standard of living rises, so does our idea of what is substandard. '' In 1965 Rose Friedman argued for the use of relative poverty, claiming that the definition of poverty changes with general living standards. Those labeled as poor in 1995 would have had "a higher standard of living than many labeled not poor '' in 1965. In 1979, the British sociologist Peter Townsend published his famous definition: "individuals... can be said to be in poverty when they lack the resources to obtain the types of diet, participate in the activities and have the living conditions and amenities which are customary, or are at least widely encouraged or approved, in the societies to which they belong (page 31) ''. This definition and measurement of poverty was profoundly linked to the idea that poverty and societal participation are deeply associated. Peter Townsend transformed the conception of poverty, viewing it not simply as lack of income but as the configuration of the economic conditions that prevent people from being full members of the society (Townsend, 1979; Ferragina et al. 2016). Poverty reduces the ability of people to participate in society, effectively denying them full citizenship (as suggested by T.H. Marshall). Given that there are no universal principles by which to determine the minimum threshold of participation equating to full membership of society, Townsend argued that the appropriate measure would necessarily be relative to any particular cultural context. 
He suggested that in each society there should be an empirically determinable ' breakpoint ' within the income distribution below which participation of individuals collapses, providing a scientific basis for fixing a poverty line and determining the extent of poverty (Ferragina et al. 2016). Brian Nolan and Christopher T. Whelan of the Economic and Social Research Institute (ESRI) in Ireland explained that "Poverty has to be seen in terms of the standard of living of the society in question. '' Relative poverty measures are used as official poverty rates by the European Union, UNICEF, and the OECD. The main poverty line used in the OECD and the European Union is based on "economic distance '', a level of income set at 60 % of the median household income. Economic aspects of poverty focus on material needs, typically including the necessities of daily living, such as food, clothing, shelter, or safe drinking water. Poverty in this sense may be understood as a condition in which a person or community is lacking in the basic needs for a minimum standard of well - being and life, particularly as a result of a persistent lack of income. Increases in poverty run parallel with unemployment, hunger, and higher crime rates. Analysis of social aspects of poverty links conditions of scarcity to aspects of the distribution of resources and power in a society and recognizes that poverty may be a function of the diminished "capability '' of people to live the kinds of lives they value. The social aspects of poverty may include lack of access to information, education, health care, social capital or political power. Poverty levels are snapshot pictures in time that omit the transitional dynamics between levels. Mobility statistics supply additional information about the fraction who leave the poverty level. For example, one study finds that in a sixteen - year period (1975 to 1991 in the U.S.) only 5 % of those in the lower fifth of the income level were still at that level, while 95 % transitioned to a higher income category. Poverty levels can remain the same while those who rise out of poverty are replaced by others. The transient poor and chronic poor differ in each society. In a nine - year period ending in 2005 for the U.S., 50 % of the poorest quintile transitioned to a higher quintile. Poverty may also be understood as an aspect of unequal social status and inequitable social relationships, experienced as social exclusion, dependency, and diminished capacity to participate, or to develop meaningful connections with other people in society. Such social exclusion can be minimized through strengthened connections with the mainstream, such as through the provision of relational care to those who are experiencing poverty. The World Bank 's "Voices of the Poor, '' based on research with over 20,000 poor people in 23 countries, identifies a range of factors which poor people identify as part of poverty. David Moore, in his book The World Bank, argues that some analyses of poverty reflect pejorative, sometimes racial, stereotypes of impoverished people as powerless victims and passive recipients of aid programs. Ultra-poverty, a term apparently coined by Michael Lipton, connotes being amongst the poorest of the poor in low - income countries. Lipton defined ultra-poverty as receiving less than 80 percent of minimum caloric intake whilst spending more than 80 % of income on food. 
Alternatively, a 2007 report issued by the International Food Policy Research Institute defined ultra-poverty as living on less than 54 cents per day. Asset poverty is an economic and social condition that is more persistent and prevalent than income poverty. It can be defined as a household 's inability to access wealth resources that are sufficient to provide for basic needs for a period of three months. Basic needs refer to the minimum standards for consumption and acceptable needs. Wealth resources consist of home ownership, other real estate (second home, rented properties, etc.), net value of farm and business assets, stocks, checking and savings accounts, and other savings (money in savings bonds, life insurance policy cash values, etc.). Wealth is measured in three forms: net worth, net worth minus home equity, and liquid assets. Net worth consists of all the aspects mentioned above. Net worth minus home equity is the same except it does not include home ownership in asset calculations. Liquid assets are resources that are readily available such as cash, checking and savings accounts, stocks, and other sources of savings. There are two types of assets: tangible and intangible. Tangible assets most closely resemble liquid assets in that they include stocks, bonds, property, natural resources, and hard assets not in the form of real estate. Intangible assets are access to credit, social capital, cultural capital, political capital, and human capital. The effects of poverty may also be causes, as listed above, thus creating a "poverty cycle '' operating across multiple levels: individual, local, national and global. One third of deaths -- some 18 million people a year or 50,000 per day -- are due to poverty - related causes. People of color, women and children are overrepresented among the global poor and among those suffering these effects of severe poverty. Those living in poverty suffer disproportionately from hunger or even starvation and disease, as well as lower life expectancy. According to the World Health Organization, hunger and malnutrition are the single gravest threats to the world 's public health and malnutrition is by far the biggest contributor to child mortality, present in half of all cases. Almost 90 % of maternal deaths during childbirth occur in Asia and sub-Saharan Africa, compared to less than 1 % in the developed world. Those who live in poverty have also been shown to have a far greater likelihood of having or incurring a disability within their lifetime. Infectious diseases such as malaria and tuberculosis can perpetuate poverty by diverting health and economic resources from investment and productivity; malaria decreases GDP growth by up to 1.3 % in some developing nations, and AIDS decreases African growth by 0.3 -- 1.5 % annually. Poverty has been shown to impede cognitive function. One way in which this may happen is that financial worries put a severe burden on one 's mental resources so that they are no longer fully available for solving complicated problems. The reduced capability for problem solving can lead to suboptimal decisions and further perpetuate poverty. Many other pathways from poverty to compromised cognitive capacities have been noted, from poor nutrition and environmental toxins to the effects of stress on parenting behavior, all of which lead to suboptimal psychological development. Neuroscientists have documented the impact of poverty on brain structure and function throughout the lifespan.
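To make the measures discussed above concrete -- the "economic distance '' line at 60 % of median household income and the Gini coefficient -- here is a minimal computational sketch in Python. The household incomes are invented toy figures; real statistics use equivalised incomes, survey weights, and careful treatment of edge cases, all of which are omitted here.

```python
def relative_poverty_line(incomes, threshold=0.6):
    """Poverty line at `threshold` times the median income, plus the
    headcount ratio (share of households falling below that line)."""
    ordered = sorted(incomes)
    n = len(ordered)
    median = (ordered[n // 2] if n % 2 == 1
              else (ordered[n // 2 - 1] + ordered[n // 2]) / 2)
    line = threshold * median
    return line, sum(1 for x in incomes if x < line) / n

def gini(incomes):
    """Gini coefficient via the sorted-rank formula; assumes a nonempty
    list with positive total income."""
    ordered = sorted(incomes)
    n = len(ordered)
    total = sum(ordered)
    weighted = sum((i + 1) * x for i, x in enumerate(ordered))  # 1-based ranks
    return (2 * weighted) / (n * total) - (n + 1) / n

# Toy data: seven hypothetical household incomes.
households = [12_000, 18_000, 25_000, 31_000, 40_000, 55_000, 90_000]
line, rate = relative_poverty_line(households)
print(f"poverty line: {line:.0f}  headcount ratio: {rate:.1%}")
print(f"Gini coefficient: {gini(households):.3f}")
```

For the toy data the line lands at 18,600, two of the seven households fall below it, and the Gini coefficient comes out at roughly 0.34; a Gini of 0 would indicate perfect equality and 1 maximal inequality.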
Infectious diseases continue to blight the lives of the poor across the world. An estimated 40 million people are living with HIV / AIDS, with 3 million deaths in 2004. Every year there are 350 -- 500 million cases of malaria, with 1 million fatalities: Africa accounts for 90 percent of malarial deaths and African children account for over 80 percent of malaria victims worldwide. Rises in the cost of living make poor people less able to afford items. Poor people spend a greater portion of their budgets on food than wealthy people. As a result, poor households and those near the poverty threshold can be particularly vulnerable to increases in food prices. For example, in late 2007 increases in the price of grains led to food riots in some countries. The World Bank warned that 100 million people were at risk of sinking deeper into poverty. Threats to the supply of food may also be caused by drought and the water crisis. Intensive farming often leads to a vicious cycle of exhaustion of soil fertility and decline of agricultural yields. Approximately 40 % of the world 's agricultural land is seriously degraded. In Africa, if current trends of soil degradation continue, the continent might be able to feed just 25 % of its population by 2025, according to United Nations University 's Ghana - based Institute for Natural Resources in Africa. Every year nearly 11 million children living in poverty die before their fifth birthday. 1.02 billion people go to bed hungry every night. According to the Global Hunger Index, Sub-Saharan Africa had the highest child malnutrition rate of the world 's regions over the 2001 -- 2006 period. As part of the Sustainable Development Goals the global community has made the elimination of hunger and undernutrition a priority for the coming years. While Goal 2 of the SDGs aims to reach this target by 2030, a number of initiatives aim to achieve it five years earlier, by 2025. Research has found that there is a high risk of educational underachievement for children who are from low - income housing circumstances. This is often a process that begins in primary school for some less fortunate children. Instruction in the US educational system, as well as in most other countries, tends to be geared towards those students who come from more advantaged backgrounds. As a result, children in poverty are at a higher risk than advantaged children of being retained in their grade, of deleterious placements during school hours, and even of not completing their high school education. Advantage breeds advantage. There are indeed many explanations for why students tend to drop out of school. One is the conditions in which they attend school. Schools in poverty - stricken areas have conditions that hinder children from learning in a safe environment. Researchers have developed a name for areas like this: an urban war zone is a poor, crime - laden district in which deteriorated, violent, even war - like conditions and underfunded, largely ineffective schools promote inferior academic performance, including irregular attendance and disruptive or non-compliant classroom behavior. Because of poverty, "Students from low - income families are 2.4 times more likely to drop out than middle - income kids, and over 10 times more likely than high - income peers to drop out ''. For children with low resources, the risk factors include juvenile delinquency, higher levels of teenage pregnancy, and economic dependency upon their low - income parent or parents.
Families and societies that invest little in the education and development of less fortunate children end up with less favorable results for those children, who face a life of reduced employment prospects and low wages. Higher rates of early childbearing, with all the connected risks to family, health and well - being, are major issues to address, since education from preschool through high school is identifiably meaningful in a life. Poverty often drastically affects children 's success in school. A child 's "home activities, preferences, mannerisms '' must align with the world of school, and in the cases where they do not, students are at a disadvantage in the school and, most importantly, the classroom. Therefore, it is safe to state that children who live at or below the poverty level will have far less success educationally than children who live above the poverty line. Poor children have a great deal less access to healthcare, and this ultimately results in many absences from the academic year. Additionally, poor children are much more likely to suffer from hunger, fatigue, irritability, headaches, ear infections, flu, and colds. These illnesses could potentially restrict a child or student 's focus and concentration. For a child to grow up emotionally healthy, the children under three need "A strong, reliable primary caregiver who provides consistent and unconditional love, guidance, and support. Safe, predictable, stable environments. Ten to 20 hours each week of harmonious, reciprocal interactions. This process, known as attunement, is most crucial during the first 6 -- 24 months of infants ' lives and helps them develop a wider range of healthy emotions, including gratitude, forgiveness, and empathy. Enrichment through personalized, increasingly complex activities ''. Harmful spending habits mean that the poor typically spend about 2 percent of their income educating their children but larger percentages on alcohol and tobacco (for example, 6 percent in Indonesia and 8 percent in Mexico). Poverty has also been considered a real social phenomenon reflecting more the consequences of a lack of income than the lack of income per se (Ferragina et al. 2016). According to Townsend: humans are social animals entangled in a web of relationships, which exert complex and changing pressures, as much in their consumption of goods and services as in any other aspect of their behaviour (Townsend 1979). This idea has received theoretical support from scholars and extensive testimony from people experiencing poverty across the globe (Walker 2014). Participation and consumption have become ever more crucial mechanisms through which people establish and communicate their identity and position in society, increasing the premium attached to resources needed to participate (Giddens 1991). In addition, the concept of social exclusion has been added to the lexicon of poverty - related terms, describing the process by which people, especially those on low incomes, can become socially and politically detached from mainstream society and its associated resources and opportunities (Cantillon 1997). Equally, Western societies have become more complex, with ethnic diversity, multi-culturalism and life - style choices raising the possibility that a single concept of poverty as conceived in the past might no longer apply (Ferragina et al. 2016). Poverty increases the risk of homelessness.
Slum - dwellers, who make up a third of the world 's urban population, live in poverty no better, if not worse, than rural people, who are the traditional focus of poverty in the developing world, according to a report by the United Nations. There are over 100 million street children worldwide. Most of the children living in institutions around the world have a surviving parent or close relative, and they most commonly entered orphanages because of poverty. It is speculated that, flush with money, orphanages are increasing in number and push for children to join them, even though demographic data show that even the poorest extended families usually take in children whose parents have died. Experts and child advocates maintain that orphanages are expensive and often harm children 's development by separating them from their families, and that it would be more effective and cheaper to aid close relatives who want to take in the orphans. As of 2012, 2.5 billion people lack access to sanitation services and 15 % practice open defecation. The most noteworthy example is Bangladesh, which has half the GDP per capita of India but a lower mortality from diarrhea than India or the world average, with diarrhea deaths declining by 90 % since the 1990s. Providing latrines is a challenge, and people often do not use them even when they are available. By strategically providing pit latrines to the poorest, charities in Bangladesh sparked a cultural change, as those better off perceived it as an issue of status to not use one. The vast majority of the latrines built were then constructed not by charities but by villagers themselves. Water utility subsidies tend to subsidize water consumption by those connected to the supply grid, which is typically skewed towards the richer and urban segment of the population and those outside informal housing. As a result of heavy consumption subsidies, the price of water decreases to the extent that only 30 %, on average, of the costs of supplying water in developing countries is covered. This results in a lack of incentive to maintain delivery systems, leading to annual losses from leaks of enough water to serve 200 million people. This also leads to a lack of incentive to invest in expanding the network, resulting in much of the poor population being unconnected to the network. Instead, the poor buy water from water vendors for, on average, about five to 16 times the metered price. However, subsidies for laying new connections to the network, rather than for consumption, have shown more promise for the poor. Similarly, the poorest fifth receive 0.1 % of the world 's lighting but pay a fifth of total spending on light, accounting for 25 to 30 percent of their income. Indoor air pollution from burning fuels kills 2 million people a year, with almost half the deaths from pneumonia in children under 5. Fuel from bamboo burns more cleanly and also matures much faster than wood, thus also reducing deforestation. Additionally, using solar panels is promoted as being cheaper over the products ' lifetime even if upfront costs are higher. Thus, payment schemes such as lend - to - own programs are promoted, and up to 14 % of Kenyan households use solar as their primary energy source. According to experts, many women become victims of trafficking, the most common form of which is prostitution, as a means of survival and out of economic desperation. Deterioration of living conditions can often compel children to abandon school to contribute to the family income, putting them at risk of being exploited.
For example, in Zimbabwe, a number of girls are turning to sex in return for food to survive because of increasing poverty. According to studies, as poverty decreases, there will be fewer and fewer instances of violence. In one survey, 67 % of children from disadvantaged inner cities said they had witnessed a serious assault, and 33 % reported witnessing a homicide. 51 % of fifth graders from New Orleans (median income for a household: $27,133) have been found to be victims of violence, compared to 32 % in Washington, DC (mean income for a household: $40,127). Max Weber and some schools of modernization theory suggest that cultural values could affect economic success. However, researchers have gathered evidence suggesting that values are not as deeply ingrained and that changing economic opportunities explain most of the movement into and out of poverty, as opposed to shifts in values. Studies have shown that poverty changes the personalities of children who live in it. The Great Smoky Mountains Study was a ten - year study that was able to demonstrate this. During the study, about one - quarter of the families saw a dramatic and unexpected increase in income. The study showed that among these children, instances of behavioral and emotional disorders decreased, and conscientiousness and agreeableness increased. Cultural factors such as discrimination of various kinds -- age discrimination, stereotyping, discrimination against people with physical disability, gender discrimination, racial discrimination, and caste discrimination -- can negatively affect productivity. Women are the group suffering from the highest rate of poverty after children; 14.5 % of women and 22 % of children are poor in the United States. In addition, the fact that women are more likely to be caregivers, regardless of income level, to either the generations before or after them, exacerbates the burdens of their poverty. Marking the International Day for the Eradication of Poverty, the United Nations Special Rapporteur on extreme poverty Philip Alston warned in a statement that "The world 's poor are at disproportionate risk of torture, arrest, early death and domestic violence, but their civil and political rights are being airbrushed out of the picture '', and that "people in lower socio - economic classes are much more likely to get killed, tortured or experience an invasion of their privacy, and are far less likely to realize their right to vote, or otherwise participate in the political process ''. Various poverty reduction strategies are broadly categorized here based on whether they make more of the basic human needs available or whether they increase the disposable income needed to purchase those needs. Some strategies, such as building roads, can both bring access to various basic needs, such as fertilizer or healthcare from urban areas, and increase incomes by bringing better access to urban markets. Statistics from 2018 show that the population living in extreme poverty has declined by more than 1 billion in the last 25 years. According to a report published by the World Bank on September 19, 2018, world poverty has fallen below 750 million people. Agricultural technologies such as nitrogen fertilizers, pesticides, new seed varieties and new irrigation methods have dramatically reduced food shortages in modern times by boosting yields past previous constraints. Before the Industrial Revolution, poverty had been mostly accepted as inevitable as economies produced little, making wealth scarce.
Geoffrey Parker wrote that "In Antwerp and Lyon, two of the largest cities in western Europe, by 1600 three - quarters of the total population were too poor to pay taxes, and therefore likely to need relief in times of crisis. '' The initial industrial revolution led to high economic growth and eliminated mass absolute poverty in what is now considered the developed world. Mass production of goods in places such as rapidly industrializing China has made what were once considered luxuries, such as vehicles and computers, inexpensive and thus accessible to many who were otherwise too poor to afford them. Even with new products, such as better seeds, or greater volumes of them through industrial production, the poor still require access to these products. Improving road and transportation infrastructure helps solve this major bottleneck. In Africa, it costs more to move fertilizer from an African seaport 60 miles inland than to ship it from the United States to Africa because of sparse, low - quality roads, leading to fertilizer costs two to six times the world average. Microfranchising models, such as door - to - door distributors who earn commission - based income or Coca - Cola 's successful distribution system, are used to disseminate basic needs to remote areas for below - market prices. Nations do not necessarily need wealth to gain health. For example, Sri Lanka had a maternal mortality rate of 2 % in the 1930s, higher than any nation today. It reduced it to 0.5 -- 0.6 % in the 1950s and to 0.06 % today while spending less each year on maternal health, because it learned what worked and what did not. Knowledge on the cost effectiveness of healthcare interventions can be elusive, and educational measures have been taken to disseminate what works, such as the Copenhagen Consensus. Cheap water filters and promoting hand washing are some of the most cost - effective health interventions and can cut deaths from diarrhea and pneumonia. Strategies to provide education cost effectively include deworming children, which costs about 50 cents per child per year and reduces non-attendance from anemia, illness and malnutrition, while being only a twenty - fifth as expensive as increasing school attendance by constructing schools. Schoolgirl absenteeism could be cut in half by simply providing free sanitary towels. Fortification with micronutrients was ranked the most cost - effective aid strategy by the Copenhagen Consensus. For example, iodised salt costs 2 to 3 cents per person a year, while even moderate iodine deficiency in pregnancy shaves off 10 to 15 IQ points. Paying for school meals is argued to be an efficient strategy in increasing school enrollment, reducing absenteeism and increasing student attention. Desirable actions such as enrolling children in school or receiving vaccinations can be encouraged by a form of aid known as conditional cash transfers. In Mexico, for example, dropout rates of 16 - to 19 - year - olds in rural areas dropped by 20 %, and children gained half an inch in height. Initial fears that the program would encourage families to stay at home rather than work to collect benefits have proven to be unfounded. Instead, there is less excuse for neglectful behavior: for example, children stopped begging on the streets rather than going to school, because doing so could result in suspension from the program. Government revenue can be diverted away from basic services by corruption.
Funds from aid and natural resources are often diverted by government officials into overseas banks that insist on bank secrecy, instead of being spent on the poor. A Global Witness report asked for more action from Western banks, as they have proved capable of stanching the flow of funds linked to terrorism. Illicit capital flight from the developing world is estimated at ten times the size of the aid it receives and twice the debt service it pays, with one estimate that most of Africa would be developed if the taxes owed were paid. About 60 per cent of illicit capital flight from Africa is from transfer mispricing, where a subsidiary in a developing nation sells to another subsidiary or shell company in a tax haven at an artificially low price to pay less tax. An African Union report estimates that about 30 % of sub-Saharan Africa 's GDP has been moved to tax havens. Solutions include corporate "country - by - country reporting '', where corporations disclose their activities in each country, thereby exposing the use of tax havens where no effective economic activity occurs. Developing countries ' debt service to banks and governments from richer countries can constrain government spending on the poor. For example, Zambia spent 40 % of its total budget to repay foreign debt, and only 7 % for basic state services, in 1997. One of the proposed ways to help poor countries has been debt relief. Zambia began offering services such as free health care, even while overwhelming the health care infrastructure, because of savings that resulted from a 2005 round of debt relief. The World Bank and the International Monetary Fund, as primary holders of developing countries ' debt, attach structural adjustment conditionalities to loans, which are generally geared toward loan repayment with austerity measures such as the elimination of state subsidies and the privatization of state services. For example, the World Bank presses poor nations to eliminate subsidies for fertilizer even while many farmers can not afford them at market prices. In Malawi, almost five million of its 13 million people used to need emergency food aid, but after the government changed policy and subsidies for fertilizer and seed were introduced, farmers produced record - breaking corn harvests in 2006 and 2007 as Malawi became a major food exporter. A major proportion of aid from donor nations is tied, mandating that a receiving nation spend on products and expertise originating only from the donor country. US law requires food aid be spent on buying food at home, instead of where the hungry live, and, as a result, half of what is spent is used on transport. Distressed securities funds, also known as vulture funds, buy up the debt of poor nations cheaply and then sue countries for the full value of the debt plus interest, which can be ten or 100 times what they paid. They may pursue any companies which do business with their target country to force them to pay the fund instead. Considerable resources are diverted to costly court cases. For example, a court in Jersey ordered Congo to pay an American speculator $100 million in 2010. The UK, Isle of Man and Jersey have now banned such payments. The loss of basic needs providers emigrating from impoverished countries has a damaging effect. As of 2004, there were more Ethiopia - trained doctors living in Chicago than in Ethiopia.
Proposals to mitigate the problem include compulsory government service for graduates of public medical and nursing schools and promoting medical tourism so that health care personnel have more incentive to practice in their home countries. Some argue that overpopulation and lack of access to birth control can lead population growth to exceed food production and other resources. Better education for both men and women, and more control of their lives, reduces population growth through family planning. According to the United Nations Population Fund (UNFPA), better education enables men and women to earn a living and helps them to strengthen their economic security. The following are strategies used or proposed to increase personal incomes among the poor. Raising farm incomes is described as the core of the antipoverty effort, as three - quarters of the poor today are farmers. Estimates show that growth in the agricultural productivity of small farmers is, on average, at least twice as effective in benefiting the poorest half of a country 's population as growth generated in nonagricultural sectors. A guaranteed minimum income ensures that every citizen will be able to purchase a desired level of basic needs. A basic income (or negative income tax) is a system of social security that periodically provides each citizen, rich or poor, with a sum of money that is sufficient to live on. Studies of large cash - transfer programs in Ethiopia, Kenya, and Malawi show that the programs can be effective in increasing consumption, schooling, and nutrition, whether they are tied to such conditions or not. Proponents argue that a basic income is more economically efficient than a minimum wage and unemployment benefits, as the minimum wage effectively imposes a high marginal tax on employers, causing losses in efficiency. In 1968, Paul Samuelson, John Kenneth Galbraith and another 1,200 economists signed a document calling for the US Congress to introduce a system of income guarantees. Winners of the Nobel Prize in Economics, with often diverse political convictions, who support a basic income include Herbert A. Simon, Friedrich Hayek, Robert Solow, Milton Friedman, Jan Tinbergen, James Tobin and James Meade. Income grants are argued to be vastly more efficient in extending basic needs to the poor than subsidizing supplies, whose effectiveness in poverty alleviation is diluted by the non-poor who enjoy the same subsidized prices. With cars and other appliances, the wealthiest 20 % of Egypt uses about 93 % of the country 's fuel subsidies. In some countries, fuel subsidies are a larger part of the budget than health and education. A 2008 study concluded that the money spent on in - kind transfers in India in a year could lift all of India 's poor out of poverty for that year if transferred directly. The primary obstacle argued against direct cash transfers is the impracticality for poor countries of such large and direct transfers. In practice, payments authenticated by iris scanning are used in war - torn Democratic Republic of Congo and Afghanistan, while India is phasing out its fuel subsidies in favor of direct transfers. Additionally, a famine relief model increasingly used by aid groups calls for giving cash or cash vouchers to the hungry to pay local farmers, instead of buying food from donor countries as is often required by law, since doing so wastes money on transport costs.
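As a worked illustration of the negative income tax mentioned above, the sketch below shows how a benefit can phase out as earnings rise while net income always increases. The guarantee and clawback rate are hypothetical parameters chosen for illustration, not figures from any actual program or proposal.

```python
GUARANTEE = 12_000  # hypothetical income floor paid when earnings are zero
CLAWBACK = 0.5      # hypothetical rate at which the benefit phases out

def net_income(earned):
    """Earnings plus the NIT benefit, which shrinks as earnings rise."""
    benefit = max(0.0, GUARANTEE - CLAWBACK * earned)
    return earned + benefit

# The benefit reaches zero at the break-even point GUARANTEE / CLAWBACK = 24,000.
for earned in (0, 10_000, 24_000, 40_000):
    print(f"earned {earned:>6,} -> net {net_income(earned):>9,.0f}")
```

Under these assumed parameters a person with no earnings receives the full guarantee, and every extra dollar earned still raises net income -- the smooth phase-out that proponents contrast with benefit systems that withdraw support abruptly.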
Corruption often leads to many civil services being treated by governments as employment agencies for loyal supporters, and so starting a business in Bolivia can mean going through 20 procedures, paying $2,696 in fees and waiting 82 business days, while in Canada it takes two days, two registration procedures, and $280 to do the same. Such costly barriers favor big firms at the expense of small enterprises, where most jobs are created. Often, businesses have to bribe government officials even for routine activities, which is, in effect, a tax on business. Notable reductions in poverty in recent decades have occurred in China and India, mostly as a result of the abandonment of collective farming in China and the ending of the central planning model known as the License Raj in India. The World Bank concludes that governments and feudal elites extending to the poor the right to the land that they live on and use is ' the key to reducing poverty ', citing that land rights greatly increase poor people 's wealth, in some cases doubling it. Although approaches varied, the World Bank said the key issues were security of tenure and ensuring land transaction costs were low. Greater access to markets brings more income to the poor. Road infrastructure has a direct impact on poverty. Additionally, migration from poorer countries resulted in $328 billion sent from richer to poorer countries in 2010, more than double the $120 billion in official aid flows from OECD members. In 2011, India got $52 billion from its diaspora, more than it took in foreign direct investment. Microloans, made famous by the Grameen Bank, are small amounts of money loaned to farmers or villagers, mostly women, who can then obtain physical capital to increase their economic rewards. However, microlending has been criticized for making hyperprofits off the poor, even by Muhammad Yunus, the founder of Grameen Bank, and in India, Arundhati Roy asserts that some 250,000 debt - ridden farmers have been driven to suicide. Those in poverty place overwhelming importance on having a safe place to save money, much more so than on receiving loans. Additionally, a large part of microfinance loans are spent not on investments but on products that would usually be paid for from a checking or savings account. Microsavings schemes are designed to make savings products available to the poor, who make small deposits. Mobile banking utilizes the wide availability of mobile phones to address the problem of the heavy regulation and costly maintenance of saving accounts. This usually involves a network of agents, mostly shopkeepers, who, instead of bank branches, take deposits in cash and translate these into a virtual account on customers ' phones. Cash transfers can be done between phones and issued back in cash with a small commission, making remittances safer. Poverty can also be reduced as improved economic policy is developed by the governing authorities to facilitate a more equitable distribution of the nation 's wealth. Oxfam has called for an international movement to end extreme wealth concentration as a significant step towards ameliorating global poverty. The group stated that the $240 billion added to the fortunes of the world 's richest billionaires in 2012 was enough to end extreme poverty four times over. Oxfam argues that the "concentration of resources in the hands of the top 1 % depresses economic activity and makes life harder for everyone else -- particularly those at the bottom of the economic ladder ''.
It has been reported that only 1 % of the world population controls 50 % of the wealth today, while the other 99 % has access to only the remaining 50 %, and the gap has sharply increased in the recent past. José Antonio Ocampo, professor at Columbia University and former finance minister of Colombia, and Magdalena Sepúlveda Carmona, former UN Special Rapporteur on Extreme Poverty and Human Rights, argue that global tax reform is integral to human development and fighting poverty, as corporate tax avoidance has disproportionately impacted those mired in poverty, noting that "the human impact is devastatingly real. When profits are shifted out, the tax revenues from those profits that could be available to fund healthcare, schools, water sanitation and other public goods vanish from the ledger, leaving women and men, boys and girls without pathways to a better future. '' Raghuram G. Rajan, former governor of the Reserve Bank of India, former chief economist at the International Monetary Fund and professor of finance at the University of Chicago Booth School of Business, has blamed the ever - widening gulf between the rich and the poor, especially in the USA, as one of the main Fault Lines which caused financial institutions to pump money into subprime mortgages -- at political behest, as a palliative rather than a remedy for poverty -- causing the financial crisis of 2007 -- 2009. In Rajan 's view, the main cause of the increasing gap between high - income and low - income earners was the latter group 's lack of equal access to high - class education. The existence of inequality is in part due to a set of self - reinforcing behaviors that all together constitute one aspect of the cycle of poverty. These behaviors, in addition to unfavorable, external circumstances, also explain the existence of the Matthew effect, which not only exacerbates existing inequality, but is more likely to make it multigenerational. Widespread, multigenerational poverty is an important contributor to civil unrest and political instability. The concept of business serving the world 's poorest four billion or so people has been popular since CK Prahalad introduced the idea through his book Fortune at the Bottom of the Pyramid: Eradicating Poverty Through Profits in 2004, among many business corporations and business schools. Kash Rangan, John Quelch, and other faculty members at the Global Poverty Project at Harvard Business School "believe that in pursuing its own self - interest in opening and expanding the BoP market, business can make a profit while serving the poorest of consumers and contributing to development. '' According to Rangan, "For business, the bulk of emerging markets worldwide is at the bottom of the pyramid so it makes good business sense -- not a sense of do - gooding -- to go after it. '' In their 2013 book, "The Business Solution to Poverty '', Paul Polak and Mal Warwick directly addressed the criticism leveled against Prahalad 's concept. They noted that big business often failed to create products that actually met the needs and desires of the customers who lived at the bottom of the pyramid. Their answer was that a business that wanted to succeed in that market had to spend time talking to and understanding those customers. Polak had promoted this approach in his earlier book, "Out of Poverty '', which described the work of International Development Enterprises (iDE), which he had formed in 1982.
Polak and Warwick provided practical advice: a product needed to affect at least a billion people (i.e., have universal appeal), it had to be able to be delivered to customers living where there wasn't a FedEx office or even a road, and it had to be "radically affordable '' to attract someone who earned less than $2 a day. Rather than encouraging multinational businesses to meet the needs of the poor, some organizations, such as iDE, the World Resources Institute, and the United Nations Development Programme, began to focus on working directly to help bottom - of - the - pyramid populations become local, small - scale entrepreneurs. Since so much of this population is engaged in agriculture, these NGOs have addressed market gaps that enable small - scale farmers (i.e., those with plots of less than 2 hectares) to increase their production and find markets for their harvests. This is done by increasing the availability of farming equipment (e.g., pumps, tillers, seeders) and better quality seed and fertilizer, as well as by expanding access to training in farming best practices (e.g., crop rotation). Creating entrepreneurs through microfinance can produce unintended outcomes: some entrepreneurial borrowers become informal intermediaries between microfinance initiatives and poorer micro-entrepreneurs. Those who more easily qualify for microfinance split loans into smaller credit to even poorer borrowers. Informal intermediation ranges from casual intermediaries at the good or benign end of the spectrum to ' loan sharks ' at the professional and sometimes criminal end of the spectrum. Milton Friedman argued that the social responsibility of business is only to increase its profits; it thus needs to be examined whether business in BoP markets is capable of achieving the dual objective of making a profit while serving the poorest of consumers and contributing to development. Erik Simanis has reported that the model has a fatal flaw. According to Simanis, "Despite achieving healthy penetration rates of 5 % to 10 % in four test markets, for instance, Procter & Gamble couldn't generate a competitive return on its Pur water - purification powder after launching the product on a large scale in 2001... DuPont ran into similar problems with a venture piloted from 2006 to 2008 in Andhra Pradesh, India, by its subsidiary Solae, a global manufacturer of soy protein... Because the high costs of doing business among the very poor demand a high contribution per transaction, companies must embrace the reality that high margins and price points aren't just a top - of - the - pyramid phenomenon; they 're also a necessity for ensuring sustainable businesses at the bottom of the pyramid. '' Marc Gunther states that "The bottom - of - the - pyramid (BOP) market leader, arguably, is Unilever... Its signature BOP product is Pureit, a countertop water - purification system sold in India, Africa and Latin America. It 's saving lives, but it 's not making money for shareholders. '' This leaves the ideal of eradicating poverty through profits, or through "good business sense -- not a sense of do - gooding '', rather questionable. Others have noted that relying on BoP consumers to choose to purchase items that increase their incomes is naive. Poor consumers may spend their income disproportionately on events or goods and services that offer short - term benefits rather than invest in things that could change their lives in the long term.
A report published in 2013 by the World Bank, with support from the Climate & Development Knowledge Network, found that climate change was likely to hinder future attempts to reduce poverty. The report presented the likely impacts of present - day, 2 ° C and 4 ° C warming on agricultural production, water resources, coastal ecosystems and cities across Sub-Saharan Africa, South Asia and South East Asia. The impacts of a temperature rise of 2 ° C included: regular food shortages in Sub-Saharan Africa; shifting rain patterns in South Asia leaving some parts under water and others without enough water for power generation, irrigation or drinking; degradation and loss of reefs in South East Asia, resulting in reduced fish stocks; and coastal communities and cities more vulnerable to increasingly violent storms. In 2016, a UN report claimed that by 2030, an additional 122 million people could be driven to extreme poverty because of climate change. Frank Fenner said in 2010 that he believed that humans will not be able to survive the population explosion and "unbridled consumption '', and would become extinct, perhaps within a century, along with many other species. He believed the situation was irreversible, and that it was too late because the effects we have had on Earth since industrialisation rival any effects of ice ages or comet impacts. Harvard biologist E.O. Wilson calculated that Earth would lose half its higher life forms by 2100 if the current rate of human disruption continued. Many think that poverty is the cause of environmental degradation, while others claim that, rather, the poor are the worst sufferers of environmental degradation caused by reckless exploitation of natural resources by the rich. A Delhi - based environment organisation, the Centre for Science and Environment, points out that if the poor world were to develop and consume in the same manner as the West to achieve the same living standards, "we would need two additional planet Earths to produce resources and absorb wastes '', reports Anup Shah (2003) in his article Poverty and the Environment on Global Issues. Among some individuals, poverty is considered a necessary or desirable condition, which must be embraced to reach certain spiritual, moral, or intellectual states. Poverty is often understood to be an essential element of renunciation in religions such as Buddhism, Hinduism (only for monks, not for lay persons) and Jainism, whilst in Roman Catholicism it is one of the evangelical counsels. The main aim of giving up things of the materialistic world is to withdraw oneself from sensual pleasures (as they are considered illusory and only temporary in some religions, such as the concept of dunya in Islam). This self - invited poverty (or giving up pleasures) is different from the one caused by economic imbalance. Some Christian communities, such as the Simple Way, the Bruderhof, and the Amish, value voluntary poverty; some even take a vow of poverty, similar to that of the traditional Catholic orders, in order to live a more complete life of discipleship. Benedict XVI distinguished "poverty chosen '' (the poverty of spirit proposed by Jesus) and "poverty to be fought '' (unjust and imposed poverty). He considered that the moderation implied in the former favors solidarity, and is a necessary condition to fight effectively to eradicate the abuse of the latter. As indicated above, the reduction of poverty can result from religion, but also from solidarity.
when does the us play the world cup
United States at the FIFA World Cup - wikipedia The United States men 's national soccer team has played in several World Cup finals, with their best result occurring during their first appearance at the 1930 World Cup, when the United States finished in third place. After the 1950 World Cup, in which the United States upset England in group play 1 -- 0, the U.S. was absent from the finals until 1990. The United States has participated in every World Cup since 1990 until they failed to qualify for the 2018 competition after a loss to Trinidad and Tobago in 2017. The World Cup is an international football competition contested by the men 's national teams of the members of Fédération Internationale de Football Association (FIFA), the sport 's global governing body. The championship has been awarded every four years since the first tournament in 1930, except in 1942 and 1946, due to World War II. The current format of the World Cup involves 32 teams competing for the title, at venues within the host nation (or nations) over a period of about a month. The World Cup finals is the most widely viewed sporting event in the world, with an estimated 715 million people watching the 2006 tournament final.
what happens to zoey on how i met your mother
How I Met Your Mother (season 6) - wikipedia The sixth season of the American television comedy series How I Met Your Mother premiered on September 20, 2010, and concluded on May 16, 2011 on CBS. Season six of How I Met Your Mother was met with mostly positive reviews.
where did aaron judge play baseball in college
Aaron Judge - wikipedia Aaron James Judge (born April 26, 1992) is an American professional baseball outfielder for the New York Yankees of Major League Baseball (MLB). Judge played college baseball at Fresno State. The Yankees selected Judge in the first round of the 2013 MLB draft. After making his MLB debut in 2016 and hitting a home run in his first career at bat, Judge went on to have a record - breaking rookie season in 2017. He was named an All - Star and won the Home Run Derby, the first rookie to do so. He broke the Yankees ' record for home runs by a rookie (besting Joe DiMaggio 's 29 with 30 before the All - Star break). He won the American League 's (AL) Rookie of the Month Awards for April, May, June and September, as well as the AL 's Player of the Month Award for June and September. Judge hit 52 home runs as a rookie, breaking Mark McGwire 's MLB rookie record of 49. He also hit 33 home runs at Yankee Stadium, breaking the record of 32 set by Babe Ruth in 1921. Judge was born and raised in Linden, California and was adopted the day after he was born by Patty and Wayne Judge, who both worked as teachers. When he was 10 years old, his parents told him that he was adopted; he recalls, "I knew I didn't look like them. '' He telephones his parents every day. He has an older brother, John, who was also adopted. Judge attended Linden High School, where he was a three - sport star. He played as a pitcher and first baseman for the baseball team, a wide receiver for the football team, and as a center for the basketball team. He set a school record for touchdowns (17) in football and led the team in points per game (18.2) in basketball. In baseball, he was part of the Linden High School team that made the California Interscholastic Federation Division III playoffs. Various colleges recruited Judge to play tight end in football, including Notre Dame, Stanford, and UCLA, but he preferred baseball. The Oakland Athletics selected him in the 31st round of the 2010 Major League Baseball draft, but he opted to enroll at California State University, Fresno (Fresno State), to play for the Fresno State Bulldogs baseball team in the Western Athletic Conference (WAC). Louisville Slugger named him a Freshman All - American. He won the 2012 TD Ameritrade College Home Run Derby. In his junior year, Judge led the Bulldogs in home runs, doubles, and runs batted in (RBIs). Judge was named to the all - conference team in all three of his seasons for the Bulldogs -- in the WAC in his first two seasons, and the Mountain West Conference (MW) as a junior (the Bulldogs joined the MW in July 2012, between his sophomore and junior seasons). The Yankees drafted Judge in the first round of the 2013 Major League Baseball draft with the 32nd overall selection, a pick the team received as compensation after losing Nick Swisher in free agency. Judge signed with the Yankees, receiving a $1.8 million signing bonus. He tore a quadriceps femoris muscle while participating in a base running drill, which kept him out of the 2013 season. He made his professional debut with the Charleston RiverDogs of the Class A South Atlantic League in 2014. He had a .333 batting average, .428 on - base percentage (OBP), .530 slugging percentage (SLG), and hit nine home runs with 45 RBIs in 65 games for Charleston. The Yankees promoted him to the Tampa Yankees of the Class A-Advanced Florida State League during the season, where he hit .283 with a .411 OBP, .442 SLG, eight home runs, and 33 RBIs in 66 games for Tampa.
The Yankees invited Judge to spring training as a non-roster player in 2015. Judge began the 2015 season with the Trenton Thunder of the Class AA Eastern League. After Judge batted .284 with a .350 OBP and 12 home runs in 63 games for Trenton, the Yankees promoted Judge to the Scranton / Wilkes - Barre RailRiders of the Class AAA International League in June. He was chosen to represent the Yankees at the 2015 All - Star Futures Game. The Yankees decided not to include Judge in their September call - ups. Judge batted .224 with eight home runs in 61 games for Scranton / Wilkes - Barre. The Yankees invited Judge to spring training in 2016, and he began the season with Scranton / Wilkes - Barre. Judge was named to the International League All - Star Team in 2016, but did not play in the 2016 Triple - A All - Star Game after he spent a month on the disabled list due to a knee sprain. In 93 games for the RailRiders, Judge had a .270 batting average, 19 home runs, and 65 RBIs. Judge made his MLB debut on August 13, 2016, starting in right field against the Tampa Bay Rays. In his first at - bat, Judge hit a home run off Matt Andriese; the previous batter, Tyler Austin, also making his MLB debut, had done the same. This marked the first time that two teammates had hit home runs in their first career at bats in the same game. Judge also hit a home run in his second MLB game, becoming the second Yankees player to do so, after Joe Lefebvre in 1980. Judge 's debut season, in which he batted .179 and struck out 42 times in 84 at - bats (95 plate appearances), ended prematurely when he was placed on the 15 - day disabled list with a grade 2 right oblique strain on September 13, 2016 against the Los Angeles Dodgers. The Yankees named Judge their right fielder for Opening Day against the Tampa Bay Rays. He had his first multi-home run game on April 28 against the Baltimore Orioles to help the Yankees win 14 -- 11, coming back from a 9 -- 1 deficit. One of the home runs had a measured exit velocity of 119.4 miles per hour (192.2 km / h), the fastest exit velocity for a home run measured by Statcast since it was adopted in 2015. Judge ended the month of April with 10 home runs, tying the rookie record set by José Abreu and Trevor Story. He was named the American League 's (AL) Rookie of the Month for April. In April, he had a .303 batting average, 10 home runs, 20 RBIs, and a .411 OBP in 22 games. On May 3, Judge hit his 13th home run of the season, becoming the youngest player to hit 13 home runs within the first 26 games of a season. On May 21, 2017, Judge earned his first career Golden Sombrero in a game played against the Tampa Bay Rays. The Yankees debuted a cheering section in the right - field seats of Yankee Stadium on May 22, called "The Judge 's Chambers '', three rows in section 104, containing 18 seats. Fans are chosen by the team to sit there and are outfitted with black robes, wigs, and foam gavels. In a game against the Oakland Athletics on May 28, Judge hit his first career grand slam. Judge was named AL Rookie of the Month once again for May. In May, he had a .347 batting average, seven home runs, 17 RBIs, and a .441 OBP in 26 games. On June 10, Judge hit a home run that had an exit velocity of 121.1 miles per hour (194.9 km / h), again setting a new record for the hardest ever measured by Statcast. The following day, Judge went 4 - for - 4 with two home runs, one of which traveled 495 feet (151 m), which was the longest in MLB in the 2017 season.
On June 12, Judge was named the AL Player of the Week. His week ended with him leading the AL in all three Triple Crown categories. Judge was named the AL Player of the Month for June, batting .324 with 10 home runs, 25 RBIs and a .481 OBP. His performance in the month of June also earned him his third consecutive AL Rookie of the Month award, the longest streak since Mike Trout won four in a row in 2012. Judge had a 32 - game on - base streak, including reaching base in every game in the month of June. On July 2, Judge was voted as a starting outfielder to the 2017 MLB All - Star Game, receiving 4,488,702 votes, the most out of any player in the AL. Judge broke Joe DiMaggio 's record for most home runs hit in a Yankees ' rookie season with his 30th on July 7. He became the second rookie to hit 30 home runs before the All - Star break after Mark McGwire in 1987, the first Yankee to do so since Alex Rodriguez in 2007 and the first player in baseball since Chris Davis and Miguel Cabrera in 2013. Before the All - Star break, Judge hit .329 with 30 home runs and 66 RBIs. Judge won the 2017 Home Run Derby, besting Minnesota Twins third baseman Miguel Sanó 11 -- 10 in the final round to become the first rookie to ever win the Derby outright. Judge hit four home runs over 500 feet, one of which travelled 513 feet, the farthest in the Derby. After his performance, MLB commissioner Rob Manfred stated that Judge is a player "who can become the face of the game. '' On July 21, Judge hit a home run that almost travelled out of Safeco Field. The ball was hit so hard that Statcast could not measure the details on the home run. On July 27, Judge lost a portion of his front left tooth during a celebration circle after Brett Gardner hit a walk - off home run. The next game, Judge hit his 33rd home run of the season, for 37 home runs total through his first 125 career games, third-most in MLB history. On August 17, Judge hit a 457 - foot home run at Citi Field that reached the third deck, but he also struck out in the game, which marked 33 consecutive games with a strikeout, breaking Adam Dunn 's record for a position player. On August 20, Judge tied pitcher Bill Stoneman 's streak of striking out in 37 consecutive games. On September 4, Judge became the first AL rookie to record 100 walks in a single season since Al Rosen (1950), and the first player in MLB to do it since Jim Gilliam (1953). During a game on September 10, Judge received his 107th walk, the most walks by a rookie in a season since Ted Williams in 1939. During the same game, he also became the second rookie in MLB history to hit 40 home runs in a season, after McGwire (1987). He joined Babe Ruth (1920), Lou Gehrig (1927), Joe DiMaggio (1937) and Mickey Mantle (1956) as the only Yankees to hit 40 home runs in a season at age 25 or younger. On September 20, Judge became the first player since José Bautista in 2010 and the first rookie to record 100 runs, 45 home runs, 100 RBIs, and 100 walks in a single season. On September 25, Judge hit his 49th and 50th home runs, tying and surpassing Mark McGwire 's single season rookie home run record. On September 30, Judge hit his 52nd home run of the season and his 33rd at Yankee Stadium, beating Babe Ruth 's record for the franchise set in 1921. After the conclusion of September, Judge won Player of the Month for the second time and Rookie of the Month for the fourth time, slashing .311 / .463 / .889 with 15 homers, 32 RBIs, 28 walks and 29 runs scored.
Entering September, Judge 's second - half batting average was .179, but he managed to raise it to .228 by the end of the month. Judge finished the 2017 season with a .284 batting average, 154 hits, 114 RBIs, a .422 on - base percentage, a .627 slugging percentage and 9 stolen bases. He led the American League in three categories, with 128 runs scored, 52 home runs, and 127 walks (11 intentional). He became the first Yankee to lead the league in home runs, walks and runs scored since Mark Teixeira, Jason Giambi and Curtis Granderson in 2009, 2005 and 2011, respectively. He ranked second in the league in RBIs, on - base percentage and slugging. He also struck out an MLB - leading 208 times, breaking the Yankees record previously set by Curtis Granderson in 2012 and a rookie record previously set by Kris Bryant in 2015. With the Yankees finishing the year 91 - 71, the team clinched a Wild Card spot. During the AL Wild Card Round against the Minnesota Twins, Judge hit his first career postseason home run en route to an 8 - 4 victory. In the first ALDS game against the Cleveland Indians, Judge struck out four times to earn his first postseason Golden Sombrero. Judge has worn the unusual uniform number of 99 since it was given to him during 2016 spring training (higher numbers are often given to young players who are not expected to make the regular - season team). Judge has stated he would prefer either No. 44 (retired by the Yankees to honor Reggie Jackson) or No. 35 (worn by Michael Pineda since 2014), but is not sure whether he would switch if either number were to become available. MLB, along with the MLB Players Association, created Players Weekend to let players "express themselves while connecting with their past in youth baseball ''. From August 25 - 27, 2017, players wore alternate team jerseys inspired by youth league designs. They also had the option to replace their last names with their nicknames on their jersey nameplates, and the vast majority of players did so. Judge chose the nickname "All Rise '' (given to him by teammate Todd Frazier) to be worn on the back of his jersey nameplate. Judge is listed at 6 feet 7 inches (2.01 m) and 282 pounds (128 kg). Due to his large size and strength, he has elicited comparisons to Giancarlo Stanton, Richie Sexson, Dave Winfield, and Willie Stargell. Judge is a Christian and has posted about his faith on his Twitter account. He keeps a note on his phone that reads ".179 '', his batting average with the Yankees in 2016, and looks at it daily as a source of motivation. Judge appeared on the cover of the May 15, 2017 edition of Sports Illustrated. On May 15, 2017, he appeared on an episode of The Tonight Show Starring Jimmy Fallon, where he posed undercover to ask Yankee fans questions about himself. Judge has earned praise for his humble personality and willingness to be a team player. He is African - American.
what is the movie the good student about
Mr. Gibb - Wikipedia Mr. Gibb (released on DVD as The Good Student) is a 2006 American dark comedy drama film starring Tim Daly and Hayden Panettiere. Mr. Ronald Gibb (Tim Daly) is a high school history teacher. Although he is depicted as a creative if weak - willed teacher, his students lack interest in the subject matter. One of his students, Ally Palmer (Hayden Panettiere), is a popular teenage girl and local celebrity. She was featured in a television commercial for her father 's car dealership. Mr. Gibb secretly has a crush on Ally. He stares at her inappropriately multiple times a day. One day after school, he overhears Ally and her boyfriend Brett (John Gallagher, Jr.) get into a big argument. She no longer has a ride home from school, so he offers to take her home. Mr. Gibb and Ally are photographed just as she spontaneously kisses him on the lips (to thank him for giving her an unexpected A), and as a result the school board deems his behavior unacceptable and unprofessional. She goes missing after he drops her off at her house. The news reports that a kidnapping has occurred. Mr. Gibb is the primary suspect because police officials find evidence that he was with Ally moments before she was kidnapped. The film follows Gibb as he copes with public humiliation and ends with a sudden twist in the final minutes.
name the group of island in arabian sea
Arabian Sea - Wikipedia The Arabian Sea is a region of the northern Indian Ocean bounded on the north by Pakistan and Iran, on the west by northeastern Somalia and the Arabian Peninsula, and on the east by India. Historically the sea has been known by other names, including the Erythraean Sea and the Persian Sea. Its total area is 3,862,000 km² (1,491,000 sq mi) and its maximum depth is 4,652 metres (15,262 ft). The Gulf of Aden is in the southwest, connecting the Arabian Sea to the Red Sea through the strait of Bab - el - Mandeb, and the Gulf of Oman is in the northwest, connecting it to the Persian Gulf. The Arabian Sea has been crossed by important marine trade routes since the third or second millennium BCE. Major seaports include Jawaharlal Nehru Port in Mumbai, the Port of Karachi and the Gwadar Port in Pakistan and the Port of Salalah in Oman. Other important ports in India include Cochin Port, Kandla Port, and Mormugao in Goa. The largest islands in the Arabian Sea include Socotra (Yemen), Masirah Island (Oman), Astola Island (Pakistan) and Andrott (India). The Arabian Sea 's surface area is about 3,862,000 km² (1,491,130 sq mi). The maximum width of the Sea is approximately 2,400 km (1,490 mi), and its maximum depth is 4,652 metres (15,262 ft). The biggest river flowing into the Sea is the Indus River. The Arabian Sea has two important branches -- the Gulf of Aden in the southwest, connecting with the Red Sea through the strait of Bab - el - Mandeb; and the Gulf of Oman to the northwest, connecting with the Persian Gulf. There are also the gulfs of Khambhat and Kutch on the Indian coast. The countries with coastlines on the Arabian Sea are Somalia, Yemen, Oman, Pakistan, India and the Maldives. There are several large cities on the sea 's coast, including Mumbai, Surat, Karachi, Gwadar, Pasni, Ormara, Aden, Muscat, Keti Bandar, Salalah and Duqm. The International Hydrographic Organization defines the limits of the Arabian Sea as follows: On the West. The Eastern limit of the Gulf of Aden (the meridian of Cape Guardafui (Ras Asir, 51°16′E)). On the North. A line joining Ràs al Hadd, East point of Arabia (22°32′N), and Ràs Jiyùni (61°43′E) on the coast of Pakistan. On the South. A line running from the South extremity of Addu Atoll (Maldives) to the Eastern extreme of Ràs Hafun (Africa, 10°26′N). On the East. The Western limit of the Laccadive Sea (a line running from Sadashivgad Lt. on the West Coast of India (14°48′N 74°07′E) to Corah Divh (13°42′N 72°10′E) and thence down the West side of the Laccadive and Maldive Archipelagos to the most Southerly point of Addu Atoll in the Maldives). The Arabian Sea historically and geographically has been referred to by many different names by Arab travellers and European geographers, including Indian Sea, Persian Sea, Sindhu Sagar, Erythraean Sea, Sindh Sea, and Akhzar Sea. The Arabian Sea has been an important marine trade route since the era of coastal sailing vessels, from possibly as early as the 3rd millennium BCE, certainly the late 2nd millennium BCE, through the later days known as the Age of Sail. By the time of Julius Caesar, several well - established combined land - sea trade routes depended upon water transport through the Sea around the rough inland terrain features to its north.
These routes usually began in the Far East or down river from Madhya Pradesh with transshipment via historic Bharuch (Bharakuccha), traversed past the inhospitable coast of today 's Iran, then split around Hadhramaut into two streams: north into the Gulf of Aden and thence into the Levant, or south into Alexandria via Red Sea ports such as Axum. Each major route involved transhipping to pack animal caravan, travel through desert country, and the risk of bandits and extortionate tolls by local potentates. This southern coastal route past the rough country in the southern Arabian peninsula (Yemen and Oman today) was significant, and the Egyptian Pharaohs built several shallow canals to service the trade, one more or less along the route of today 's Suez canal, and another from the Red Sea to the Nile River, both shallow works that were swallowed up by huge sand storms in antiquity. Later, the kingdom of Axum arose in Ethiopia to rule a mercantile empire rooted in the trade with Europe via Alexandria. Jawaharlal Nehru Port in Mumbai is the largest port in the Arabian Sea, and the largest container port in India. The Port of Karachi (Urdu: بندر گاہ كراچى, Bandargāh - i Karācī) is Pakistan 's largest and busiest seaport, handling about 60 % of the nation 's cargo (25 million tons per annum). It is located between the Karachi towns of Kiamari and Saddar, close to the main business district and several industrial areas. The geographic position of the port places it in close proximity to major shipping routes such as the Strait of Hormuz. The history of the port is intertwined with that of the city of Karachi. Several ancient ports have been identified in the area, including "Krokola '', "Morontobara '' (Woman 's Harbour) (mentioned by Nearchus), Barbarikon (mentioned in the Periplus of the Erythraean Sea), and Debal (a city invaded and captured by the Muslim general Muhammad bin Qasim in 712 AD). There is a reference to the early existence of the port of Karachi in the "Umdah '', by the Arab navigator Sulaiman al Mahri (AD 1511), who mentions "Ras al Karazi '' and "Ras Karashi '' while describing a route along the coast from Pasni to Ras Karashi. Karachi is also mentioned in the sixteenth - century Turkish treatise Mirat ul Memalik (Mirror of Countries, 1557) by the Ottoman captain Seydi Ali Reis, which is a compilation of sailing directions from the Portuguese island of Diu to Hormuz in the Persian Gulf. It warns sailors about whirlpools and advises them to seek safety in "Kaurashi '' harbour if they found themselves drifting dangerously. The town 's gate facing the sea was called "Kharadar '' (salt gate), and the gate facing the Lyari River was called "Mithadar '' (sweet gate). The modern neighbourhoods around the location of the gates are called Mithadar and Kharadar. Surrounded by mangrove swamps to the east, the sea to the southwest, and the Lyari River to the north, the town was well defended and engaged in a profitable trade with Muscat and Bahrain. The Gwadar Port is a warm - water, deep - sea port situated at Gwadar in Balochistan, Pakistan, at the apex of the Arabian Sea and at the entrance of the Persian Gulf, about 460 km west of Karachi and approximately 75 km (47 mi) east of Pakistan 's border with Iran. The port is located on the eastern bay of a natural hammerhead - shaped peninsula jutting out into the Arabian Sea from the coastline. Port of Salalah in Salalah, Oman is also a major port in the area. From a modest start in 1997, the Omani container transhipment port has achieved consistent growth.
It is a key container transhipment hub on the Arabian Sea and is often used as the first port of call for vessels whose crews have just been released from the clutches of Somali pirates following ransom payments for hijacked vessels and crews. The port also plays host as a supply base for the visiting warships that provide protective escorts for merchant shipping in the sea lanes. From that dual role has emerged another, as an intelligence network -- both military and civilian -- to exchange information on possible pirate sightings and near misses. Also, the International Task Force often uses the port as a base. There is a significant number of warships of all nations coming in and out of the port, which makes the surrounding area comparatively safe. The port handled just under 3.5 million TEU in 2009. Major Indian ports in the Arabian Sea are Mundra Port, Kandla Port, Nava Sheva, Kochi Port, Mumbai Port, and Mormugão. There are several islands in the Arabian Sea, the largest being Socotra (Yemen), Masirah (Oman), Astola Island (Pakistan) and Andrott (India). Astola Island, also known as Jezira Haft Talar (Balochi: زروان ءِ ہفت تلار) or ' Island of the Seven Hills ', is a small, uninhabited island in the northern tip of the Arabian Sea in Pakistan 's territorial waters. It is a popular eco-tourism destination in the region. Overnight tourists camp on the island and bring their own provisions. Camping, fishing and scuba - diving expeditions are popular. It is also a site for observing turtle breeding. Endangered animals such as the green turtle (Chelonia mydas) and the hawksbill turtle (Eretmochelys imbricata) nest on the beach at the foot of the cliffs. The island is also a very important area for endemic reptiles such as the Astola viper (Echis carinatus astolae). Socotra (Arabic: سُقُطْرَى, Suquṭra), also spelled Soqotra, is the largest island, being part of a small archipelago of four islands. It lies some 240 kilometres (150 mi) east of the Horn of Africa and 380 kilometres (240 mi) south of the Arabian Peninsula. The island is very isolated and, through the process of speciation, a third of its plant life is found nowhere else on the planet. It has been described as the most alien - looking place on Earth. Masirah (Arabic: مصيرة) is an island off the east coast of Oman. The main industries here are fishing and traditional textile manufacturing. Formerly, traditional ship building was important. The rugged terrain of the island and surrounding rough coastline have led to the appearance of many wrecked dhows on the beaches of the island, most of them well preserved by the salt water and intense heat. The ocean bottom environment surrounding Masirah is hostile, as the majority of the area is covered in either sand or hard rock. Despite the poor quality of the ocean bottom, the area is very productive with marine fisheries, and any hard objects (barrels, engines) are immediately colonized by local fauna. This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Arabian Sea ''. Encyclopædia Britannica (11th ed.). Cambridge University Press.
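Because the IHO limits above are given as coordinate pairs, the length of any limit segment can be estimated directly from them. The following is a minimal great - circle (haversine) sketch, assuming a spherical Earth of radius 6,371 km; it is applied here to the first segment of the eastern limit, from Sadashivgad Lt. (14.800° N, 74.117° E) to Corah Divh (13.700° N, 72.167° E), purely as an illustration.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two points, assuming a spherical Earth."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * radius_km * asin(sqrt(a))

# Endpoints of the first segment of the eastern limit, from the IHO text above.
d = haversine_km(14.800, 74.117, 13.700, 72.167)
print(f"{d:.0f} km")  # roughly 240 km
```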
where did the saying dressed to the nines come from
To the nines - Wikipedia "To the nines '' is an English idiom meaning "to perfection '' or "to the highest degree '', or to dress "buoyantly and high class ''. In modern English usage, the phrase most commonly appears as "dressed to the nines '' or "dressed up to the nines ''. The phrase is said to be Scots in origin. The earliest written example of the phrase is from the 1719 Epistle to Ramsay by the Scottish poet William Hamilton: "The bonny Lines therein thou sent me, / How to the nines they did content me. '' Robert Burns ' "Poem on Pastoral Poetry '', published posthumously in 1800, also uses the phrase: "Thou paints auld nature to the nines, / In thy sweet Caledonian lines. ''
what is importance of cricket in india today
Cricket in India - Wikipedia Cricket in India is the nation 's most popular sport by far. It is played almost everywhere in India. The Indian national cricket team won the 1983 Cricket World Cup, the 2007 ICC World Twenty20, the 2011 Cricket World Cup, the 2013 ICC Champions Trophy, and shared the 2002 ICC Champions Trophy with Sri Lanka. The domestic competitions include the Ranji Trophy, the Duleep Trophy, the Vijay Hazare Trophy, the Deodhar Trophy, the Irani Trophy and the NKP Salve Challenger Trophy. In addition, the BCCI conducts the Indian Premier League, a Twenty20 competition. The Indian cricket team is also credited with the honour of winning all the ICC tournaments under M.S. Dhoni 's captaincy, which is a world record. The first ever first - class cricket match played in India was Madras versus Calcutta in 1864; few records from the match survive. The entire history of cricket in India and the sub-continent as a whole is based on the existence and development of the British Raj via the East India Company. India became a member of the ' elite club ', joining Australia, England, South Africa, New Zealand and the West Indies, in June 1932. India 's first match, at Lord 's against England, attracted a massive crowd of 24,000 people as well as the King of the United Kingdom. The major and defining event in the history of Indian cricket during this period was the Partition of India following full independence from the British Raj in 1947. An early casualty of change was the Bombay Quadrangular tournament, which had been a focal point of Indian cricket for over 50 years. The new India had no place for teams based on ethnic origin. As a result, the Ranji Trophy came into its own as the national championship. The last - ever Bombay Pentangular, as it had become, was won by the Hindus in 1945 -- 46. India also recorded its first Test victory in 1952, beating England by an innings in Madras. One team totally dominated Indian cricket in the 1960s: as part of 15 consecutive victories in the Ranji Trophy from 1958 -- 59 to 1972 -- 73, Bombay won the title in all ten seasons of the decade. Among its players were Farokh Engineer, Dilip Sardesai, Bapu Nadkarni, Ramakant Desai, Baloo Gupte, Ashok Mankad and Ajit Wadekar. In the 1961 -- 1962 season, the Duleep Trophy was inaugurated as a zonal competition. It was named after Ranji 's nephew, Kumar Shri Duleepsinhji (1905 -- 59). With Bombay in its catchment, it is not surprising that the West Zone won six of the first nine titles. Bombay continued to dominate Indian domestic cricket, with only Karnataka, Delhi, and a few other teams able to mount any kind of challenge during this period. India enjoyed two international highlights. In 1971, they won a Test series in England for the first time ever, surprisingly defeating Ray Illingworth 's Ashes winners. In 1983, again in England, India were surprise winners of the 1983 Cricket World Cup under the captaincy of Kapil Dev. During the 1970s, the Indian cricket team began to see success overseas, beating New Zealand and holding Australia, South Africa and England to a draw. The backbone of the team were the Indian spin quartet -- Bishen Bedi, E.A.S.
Prasanna, BS Chandrasekhar and Srinivas Venkataraghavan -- giving rise to what would later be called the Golden Era of Indian cricket history. This decade also saw the emergence of two of India 's best ever batsmen, Sunil Gavaskar and Gundappa Viswanath, responsible for the back - to - back series wins in 1971 in the West Indies and in England, under the captaincy of Ajit Wadekar. (From the 1993 -- 94 season, the Duleep Trophy was converted from a knockout competition to a league format.) Several team names and spellings were altered during the 1990s, when traditional Indian names were introduced to replace those associated with the British Raj. Most notably, Bombay became Mumbai and the famous venue of Madras became Chennai. During the 1980s, India developed a more attack - focused batting line - up, with talented batsmen such as Mohammad Azharuddin, Dilip Vengsarkar and Ravi Shastri prominent during this decade. (Despite India 's victory in the Cricket World Cup in 1983, the team performed poorly in the Test arena, including 28 consecutive Test matches without a victory. However, India won the Asia Cup in 1984 and won the World Championship of Cricket in Australia in 1985.) The 1987 Cricket World Cup was held in India. Since 2000, the Indian team underwent major improvements with the appointment of John Wright, India 's first ever foreign coach. The appointment brought success internationally, as India maintained their unbeaten home record against Australia in Test series after defeating them in 2001, and won the inaugural ICC World T20 in 2007. India was also the first subcontinental team to win at the WACA, in January 2008, against Australia. India 's victory against the Australians in 2001 marked the beginning of a dream era for the team under the captaincy of Sourav Ganguly, winning Test matches in Zimbabwe, Sri Lanka, the West Indies and England. India also shared a joint victory with Sri Lanka in the 2002 ICC Champions Trophy, and went on to the final of the 2003 Cricket World Cup, only to be beaten by Australia. In September 2007, India won the first ever Twenty20 World Cup, held in South Africa, beating Pakistan by 5 runs in a thrilling final. India won the Cricket World Cup in 2011 under the captaincy of Mahendra Singh Dhoni, their first title since 1983; they beat Sri Lanka in the final, held in Mumbai. India played its 500th Test match against New Zealand at Kanpur from 22 September 2016, winning by 197 runs under the captaincy of Virat Kohli. International cricket in India generally does not follow a fixed pattern such as, for example, the English schedule, under which the nation tours other countries during winter and plays at home during the summer. Generally, there has recently been a tendency to play more one - day matches than Test matches. Cricket in India is managed by the Board of Control for Cricket in India (BCCI), the richest cricket board in the world; yet average cricket fans cannot get hold of tickets to see matches, many of which are distributed as largesse. India has also provided some of the greatest players to the world, the biggest example being Sachin Tendulkar. Indian cricket has a rich history. The Indian national team is currently ranked No. 1 in Tests and ODIs, but 5th in T20Is, making it one of the best cricket teams in the world. Twenty20 cricket has seen stronger crowd participation than other forms of the game; it has been widely embraced by the public and has made huge profits.
why does matthew and luke have different accounts
Synoptic gospels - Wikipedia The gospels of Matthew, Mark, and Luke are referred to as the Synoptic Gospels because they include many of the same stories, often in a similar sequence and in similar or sometimes identical wording. They stand in contrast to John, whose content is comparatively distinct. The term synoptic (Latin: synopticus; Greek: συνοπτικός, translit. synoptikós) comes via Latin from the Greek σύνοψις, synopsis, i.e. "(a) seeing all together, synopsis ''; the sense of the word in English, the one specifically applied to these three gospels, of "giving an account of the events from the same point of view or under the same general aspect '' is a modern one. This strong parallelism among the three gospels in content, arrangement, and specific language is widely attributed to literary interdependence. The question of the precise nature of their literary relationship -- the synoptic problem -- has been a topic of lively debate for centuries and has been described as "the most fascinating literary enigma of all time ''. The longstanding majority view favors Marcan priority, in which both Matthew and Luke have made direct use of the Gospel of Mark as a source, and further holds that Matthew and Luke also drew from an additional hypothetical document, called Q. Broadly speaking, the synoptic gospels are similar to John: all are composed in Koine Greek, have a similar length, and were completed within a century of Jesus ' death. They also differ from non-canonical sources, such as the Gospel of Thomas, in that they belong to the ancient genre of biography, collecting not only Jesus ' teachings, but recounting in an orderly way his origins, his ministry and miracles, and his passion and resurrection. In content and in wording, though, the synoptics diverge widely from John but have a great deal in common with each other. Though each gospel includes some unique material, the majority of Mark and roughly half of Matthew and Luke coincide in content, in much the same sequence, often nearly verbatim. This common material is termed the triple tradition. The triple tradition, the material included by all three synoptic gospels, includes many stories and teachings. Furthermore, the triple tradition 's pericopae (passages) tend to be arranged in much the same order in all three gospels. This stands in contrast to the material found in only two of the gospels, which is much more variable in order. The classification of text as belonging to the triple tradition (or, for that matter, the double tradition) is not always definitive, depending rather on the degree of similarity demanded. For example, Matthew and Mark report the cursing of the fig tree, clearly a single incident, despite some substantial differences of wording and content. Searching Luke, however, we find only the parable of the barren fig tree, in a different point of the narrative. Some would say that Luke has extensively adapted an element of the triple tradition, while others would regard it as a distinct pericope. An illustrative example of the three texts in parallel is the healing of the leper: Καὶ ἰδοὺ, λεπρὸς προσελθὼν προσεκύνει αὐτῷ λέγων Κύριε, ἐὰν θέλῃς δύνασαί με καθαρίσαι. καὶ ἐκτείνας τὴν χεῖρα ἥψατο αὐτοῦ λέγων Θέλω, καθαρίσθητι καὶ εὐθέως ἐκαθαρίσθη αὐτοῦ ἡ λέπρα. Καὶ ἔρχεται πρὸς αὐτὸν λεπρὸς παρακαλῶν αὐτὸν καὶ γονυπετῶν καὶ λέγων αὐτῷ ὅτι, Ἐὰν θέλῃς δύνασαί με καθαρίσαι.
καὶ σπλαγχνισθεὶς ἐκτείνας τὴν χεῖρα αὐτοῦ ἥψατο καὶ λέγει αὐτῷ Θέλω, καθαρίσθητι καὶ εὐθὺς ἀπῆλθεν ἀπ' αὐτοῦ ἡ λέπρα, καὶ ἐκαθαρίσθη. Καὶ ἰδοὺ, ἀνὴρ πλήρης λέπρας ἰδὼν δὲ τὸν Ἰησοῦν πεσὼν ἐπὶ πρόσωπον ἐδεήθη αὐτοῦ λέγων Κύριε, ἐὰν θέλῃς δύνασαί με καθαρίσαι. καὶ ἐκτείνας τὴν χεῖρα ἥψατο αὐτοῦ λέγων Θέλω, καθαρίσθητι καὶ εὐθέως ἡ λέπρα ἀπῆλθεν ἀπ' αὐτοῦ. And behold, a leper came and worships him, saying: Lord, if you wish, I can be cleansed. And he stretched out his hand and touched him, saying: I wish it; be cleansed. And immediately his leprosy was cleansed. And, calling out to him, there comes to him a leper and kneeling and saying to him: If you wish, I can be cleansed. And, moved with compassion, he stretched out his hand and touched him and says to him: I wish it; be cleansed. And immediately the leprosy left him, and he was cleansed. And behold, a man full of leprosy. But, upon seeing Jesus, he fell upon his face and requested him, saying: Lord, if you wish, I can be cleansed. And he stretched out his hand and touched him, saying: I wish it; be cleansed. And immediately the leprosy left him. More than half the wording in this passage is identical. Just as interesting, though, is that each gospel includes words absent in the other two and omits something included by the other two. It has been observed that the triple tradition itself constitutes a complete gospel quite similar to the shortest gospel, Mark. Mark, unlike Matthew and Luke, adds relatively little to the triple tradition. Pericopae unique to Mark are scarce, notably two healings involving saliva and the naked runaway. Mark 's additions within the triple tradition tend to be explanatory elaborations (e.g., "the stone was rolled back, for it was very large '') or Aramaisms (e.g., "Talitha kum! ''). The pericopae Mark shares with only Luke are also quite few: the Capernaum exorcism and departure from Capernaum, the strange exorcist, and the widow 's mites. A greater number, but still not many, are shared with only Matthew, most notably the so - called "Great Omission '' from Luke of Mk 6:45 -- 8:26. Most scholars take these observations as a strong clue to the literary relationship among the synoptics and Mark 's special place in that relationship. The hypothesis favored by most experts is Marcan priority, that Mark was composed first and that Matthew and Luke each used Mark and incorporated most of it, with adaptations, into their own gospels. A leading alternative hypothesis is Marcan posteriority, that Mark was formed primarily by extracting what Matthew and Luke shared in common. An extensive set of material -- some two hundred verses, or roughly half the length of the triple tradition -- comprises the pericopae shared between Matthew and Luke but absent in Mark. This is termed the double tradition. Parables and other sayings predominate in the double tradition, but it also includes narrative elements. Unlike triple - tradition material, double - tradition material is very differently arranged in the two gospels. Matthew 's lengthy Sermon on the Mount, for example, is paralleled by Luke 's shorter Sermon on the Plain, with the remainder of its content scattered throughout Luke. This is consistent with the general pattern of Matthew collecting sayings into large blocks, while Luke does the opposite and intersperses them with narrative.
Besides the double - tradition proper, Matthew and Luke often agree against Mark within the triple tradition to varying extents, sometimes including several additional verses, sometimes differing by a single word. These are termed the major and minor agreements (the distinction is imprecise). One example is in the passion narrative, where Mark has simply, "Prophesy! '' while Matthew and Luke both add, "Who is it that struck you? '' The double - tradition 's origin, with its major and minor agreements, is a key facet of the synoptic problem. The simplest hypothesis is that Luke relied on Matthew 's work or vice versa. But many experts, on various grounds, maintain that neither Matthew nor Luke used the other 's work. If this is the case, they must have drawn from some common source, distinct from Mark, that provided the double - tradition material and overlapped with Mark 's content where major agreements occur. This hypothetical document is termed Q, for the German Quelle, meaning "source ''. Matthew and Luke contain a large amount of material found in no other gospel. These materials are sometimes called Special Matthew or M and Special Luke or L. Both Special Matthew and Special Luke include distinct opening infancy narratives and distinct post-resurrection conclusions (with Luke continuing the story in his second book, Acts). In between, Special Matthew includes mostly parables, while Special Luke includes both parables and healings. Special Luke is notable for containing a greater concentration of Semitisms than any other gospel material. Luke gives some indication of how he composed his gospel in his prologue: Since many have undertaken to set down an orderly account of the events that have been fulfilled among us, just as they were handed on to us by those who from the beginning were eyewitnesses and servants of the word, I too decided, after investigating everything carefully from the very first, to write an orderly account for you, most excellent Theophilus, so that you may know the truth concerning the things about which you have been instructed. The "synoptic problem '' is the question of the specific literary relationship among the three synoptic gospels -- that is, the question of which source or sources each gospel depended upon when it was written. The texts of the three synoptic gospels often agree very closely in wording and order, both in quotations and in narration. Most scholars ascribe this to documentary dependence, direct or indirect, meaning the close agreements among the synoptic gospels are due to one gospel 's drawing from the text of another, or from some written source that gospel also drew from. The synoptic problem hinges on several interrelated points of controversy. Furthermore, some theories try to explain the relation of the synoptic gospels to John; to non-canonical gospels such as Thomas, Peter, and Egerton; to the Didache; and to lost documents such as the Hebrew logia mentioned by Papias, the Jewish -- Christian gospels, and the Gospel of Marcion. Ancient sources are virtually unanimous in ascribing the synoptic gospels to the apostle Matthew, Peter 's interpreter Mark, and Paul 's companion Luke, hence their respective canonical names. A remark by Augustine at the turn of the fifth century presents the gospels as composed in their canonical order (Matthew, Mark, Luke, John), with each evangelist thoughtfully building upon and supplementing the work of his predecessors -- the Augustinian hypothesis (Matthew -- Mark).
This view (when any model of dependence was considered at all) was seldom questioned until the late eighteenth century, when Johann Jakob Griesbach published a synopsis of the gospels. Instead of harmonizing them, he displayed them side by side, making both similarities and divergences apparent. Griesbach, noticing the special place of Mark in the synopsis, hypothesized Marcan posteriority and advanced (as Henry Owen had a few years earlier) the two - gospel hypothesis (Matthew -- Luke). In the nineteenth century, the tools of literary criticism were applied to the synoptic problem in earnest, especially in German scholarship. Early work revolved around a hypothetical proto - gospel (Ur - Gospel), possibly in Aramaic, underlying the synoptics. From this line of inquiry, however, a consensus emerged that Mark itself was the principal source for the other two gospels -- Marcan priority. In a theory first proposed by Weisse in 1838, the double tradition was explained by Matthew and Luke independently using two sources -- thus, the two - source (Mark - Q) theory -- which were Mark and another hypothetical source consisting mostly of sayings. This additional source was at first seen as the logia (sayings) spoken of by Papias and thus called "Λ '', but later it became more generally known as "Q '', from the German Quelle, meaning source. This two - source theory eventually won wide acceptance and was seldom questioned until the late twentieth century; most scholars simply took this new orthodoxy for granted and directed their efforts toward Q itself, and this is still largely the case. The theory is also well known in a more elaborate form set forth by Streeter in 1924, which additionally hypothesized written sources "M '' and "L '' for Special Matthew and Special Luke, respectively -- hence, the influential four - document hypothesis. This exemplifies the prevailing scholarship of the time, in which the canonical gospels were seen as late products, from well into the second century, composed by unsophisticated cut - and - paste redactors out of a progression of written sources, derived in turn from oral traditions and folklore that had evolved in various communities. More recently, however, as this view has gradually fallen into disfavor, so too has the centrality of documentary interdependence and hypothetical documentary sources as an explanation for all aspects of the synoptic problem. In recent decades, weaknesses of the two - source theory have been more widely recognized, and debate has reignited. Many have independently argued that Luke did make some use of Matthew after all -- the Common Sayings Source. British scholars went further and dispensed with Q entirely, ascribing the double tradition to Luke 's direct use of Matthew -- the Farrer hypothesis. New attention is also being given to the Wilke hypothesis which, like Farrer, dispenses with Q but ascribes the double tradition to Matthew 's direct use of Luke. Meanwhile, the Augustinian hypothesis has also made a comeback, especially in American scholarship. The Jerusalem school hypothesis has also attracted fresh advocates, as has the Independence hypothesis, which denies documentary relationships altogether. On this collapse of consensus, Wenham observed: "I found myself in the Synoptic Problem Seminar of the Society for New Testament Studies, whose members were in disagreement over every aspect of the subject. 
When this international group disbanded in 1982 they had sadly to confess that after twelve years ' work they had not reached a common mind on a single issue. '' Nearly every conceivable theory has been advanced as a solution to the synoptic problem.
where is calcium stored in a muscle cell
Sarcoplasmic reticulum - Wikipedia The sarcoplasmic reticulum (SR) is a membrane - bound structure found within muscle cells that is similar to the endoplasmic reticulum in other cells. The main function of the SR is to store calcium ions (Ca2+). Calcium ion levels are kept relatively constant, with the concentration of calcium ions within a cell being 100,000 times smaller than the concentration of calcium ions outside the cell. This means that small increases in calcium ions within the cell are easily detected and can bring about important cellular changes (the calcium is said to be a second messenger; see calcium in biology for more details). Calcium is used to make calcium carbonate (found in chalk) and calcium phosphate, two compounds that the body uses to make teeth and bones. This means that too much calcium within the cells can lead to hardening (calcification) of certain intracellular structures, including the mitochondria, leading to cell death. Therefore, it is vital that calcium ion levels are controlled tightly, and can be released into the cell when necessary and then removed from the cell. The sarcoplasmic reticulum is a network of tubules that extend throughout muscle cells, wrapping around (but not in direct contact with) the myofibrils (contractile units of the cell). Cardiac and skeletal muscle cells contain structures called transverse tubules (T - tubules), which are extensions of the cell membrane that travel into the centre of the cell. T - tubules are closely associated with a specific region of the SR, known as the terminal cisternae in cardiac muscle or the junctional SR in skeletal muscle, with a distance of roughly 12 nanometers separating them. This is the primary site of calcium release. The longitudinal SR consists of thinner projections that run between the terminal cisternae / junctional SR, and is the location where the ion channels necessary for calcium ion absorption are most abundant. These processes are explained in more detail below and are fundamental for the process of excitation - contraction coupling in skeletal, cardiac and smooth muscle. The SR contains ion pumps within its membrane that are responsible for pumping Ca2+ into the SR. As the calcium ion concentration within the SR is higher than in the rest of the cell, the calcium ions will not freely flow into the SR; therefore pumps are required, which use energy gained from a molecule called adenosine triphosphate (ATP). These calcium pumps are called sarco (endo) plasmic reticulum ATPases (SERCA). There are a variety of different forms of SERCA, with SERCA2a being found primarily in cardiac and skeletal muscle. SERCA consists of 13 subunits (labelled M1 - M10, N, P and A). Calcium ions bind to the M1 - M10 subunits (which are located within the membrane), whereas ATP binds to the N, P and A subunits (which are located outside the SR). When two calcium ions, along with a molecule of ATP, bind to the cytosolic side of the pump (i.e. the region of the pump outside the SR), the pump opens. This occurs because ATP (which contains three phosphate groups) releases a single phosphate group (becoming adenosine diphosphate). The released phosphate group then binds to the pump, causing the pump to change shape. This shape change causes the cytosolic side of the pump to open, allowing the two Ca2+ to enter. The cytosolic side of the pump then closes and the sarcoplasmic reticulum side opens, releasing the Ca2+ into the SR.
A protein found in cardiac muscle called phospholamban (PLB) has been shown to prevent SERCA from working. It does this by binding to the SERCA and decreasing its affinity for calcium, therefore preventing calcium uptake into the SR. Failure to remove Ca2+ from the cytosol prevents muscle relaxation, and because less calcium is then stored in the SR for subsequent release, it also means that there is a decrease in muscle contraction. However, molecules such as adrenaline and noradrenaline can prevent PLB from inhibiting SERCA. When these hormones bind to a receptor called a beta 1 adrenoceptor, located on the cell membrane, they produce a series of reactions (known as the cyclic AMP pathway) that activates an enzyme called protein kinase A (PKA). PKA can add a phosphate to PLB (this is known as phosphorylation), preventing it from inhibiting SERCA and allowing for muscle relaxation. Located within the SR is a protein called calsequestrin. This protein can bind to around 50 Ca2+ ions, which decreases the amount of free Ca2+ within the SR (as more is bound to calsequestrin). Therefore, more calcium can be stored (the calsequestrin is said to be a buffer). It is primarily located within the junctional SR / terminal cisternae, in close association with the calcium release channel (described below). Calcium ion release from the SR occurs in the junctional SR / terminal cisternae through a ryanodine receptor (RyR) and is known as a calcium spark. There are three types of ryanodine receptor: RyR1 (in skeletal muscle), RyR2 (in cardiac muscle) and RyR3 (in the brain). Calcium release through ryanodine receptors in the SR is triggered differently in different muscles. In cardiac and smooth muscle, an electrical impulse (action potential) triggers calcium ions to enter the cell through an L - type calcium channel located in the cell membrane (smooth muscle) or T - tubule membrane (cardiac muscle). These calcium ions bind to and activate the RyR, producing a larger increase in intracellular calcium. In skeletal muscle, however, the L - type calcium channel is bound directly to the RyR. Therefore, activation of the L - type calcium channel, via an action potential, activates the RyR directly, causing calcium release (see calcium sparks for more details). Also, caffeine (found in coffee) can bind to and stimulate the RyR. Caffeine works by making the RyR more sensitive to either the action potential (skeletal muscle) or calcium (cardiac or smooth muscle), therefore producing calcium sparks more often (this can result in an increased heart rate, which is why we feel more awake after coffee). Triadin and junctin are proteins found within the SR membrane that are bound to the RyR. The main role of these proteins is to anchor calsequestrin (see above) to the ryanodine receptor. At ' normal ' (physiological) SR calcium levels, calsequestrin binds to the RyR, triadin and junctin, which prevents the RyR from opening. If the calcium concentration within the SR falls too low, there will be less calcium bound to the calsequestrin. This leaves more room on the calsequestrin to bind to the junctin, triadin and ryanodine receptor, so it binds more tightly. However, if calcium within the SR rises too high, more calcium binds to the calsequestrin, and it therefore binds to the junctin - triadin - RyR complex less tightly. The RyR can then open and release calcium into the cell.
In addition to the effect that PKA has on phospholamban (see above), which results in increased relaxation of the cardiac muscle, PKA (as well as another enzyme, calmodulin kinase II) can also phosphorylate ryanodine receptors. When phosphorylated, RyRs are more sensitive to calcium, so they open more often and for longer periods of time. This increases calcium release from the SR, increasing the rate of contraction. Therefore, in cardiac muscle, activation of PKA through the cyclic AMP pathway results in increased muscle contraction (via RyR2 phosphorylation) and increased relaxation (via phospholamban phosphorylation), which together increase heart rate. The mechanism behind the termination of calcium release through the RyR is still not fully understood. Some researchers believe it is due to the random closing of ryanodine receptors (known as stochastic attrition), or to the ryanodine receptors becoming inactive after a calcium spark, while others believe that a decrease in SR calcium triggers the receptors to close (see calcium sparks for more details).
how far is eagle mountain from salt lake
Eagle Mountain, Utah - Wikipedia Eagle Mountain is a city in Utah County, Utah, United States. It is part of the Provo -- Orem, Utah Metropolitan Statistical Area. The city is located to the west as well as north of the Lake Mountains, which are west of Utah Lake. It was incorporated December 3, 1996 and has been rapidly growing ever since. The population was 21,415 at the 2010 census. Although Eagle Mountain was a town in 2000, it has since been classified as a fourth - class city by state law. In its short history, the city has quickly become known for its rapid growth. Eagle Mountain is located at the western and northern bases of the Lake Mountains in the flat Cedar Valley, east and northeast of the town of Cedar Fort. According to the United States Census Bureau, the city has a total area of 41.7 square miles (108.0 km²). SR - 73, Eagle Mountain Boulevard, and Ranches Parkway provide access to the city from the Salt Lake Valley, and Pioneer Crossing, Redwood Road, and Pony Express Parkway provide access to the city from Utah County, although the city center sits at least 15 miles (24 km) from the two valleys ' main transportation corridor along Interstate 15. The Utah Department of Transportation is in the process of building a western freeway for the Salt Lake Valley (the Mountain View Corridor), which will connect to SR - 73 only a few miles from the city. The area is home to a number of natural landmarks, including a site along the original Pony Express trail and 1,800 - year - old rock art petroglyphs carved by ancient Fremont Indians. In 2011 Eagle Mountain extended further west with the annexation of the White Hills neighborhood, which had about 400 residents, as well as area that is part of the Pole Canyon development plan. The land outside of White Hills was almost 2,900 acres. As of the census of 2010, there were 21,415 people, 5,111 households, and 4,741 families residing in the town. The population density was 513.6 inhabitants per square mile (198.3 / km²). There were 5,546 housing units at an average density of 133 per square mile. The racial makeup of the town was 91.9 % White, 0.6 % African American, 0.5 % American Indian and Alaskan Native, 0.6 % Asian, 0.6 % Pacific Islander, 2.7 % from other races, and 3.1 % from two or more races. Hispanic or Latino of any race were 8.6 % of the population. There were 5,111 households, of which 72.9 % had children under the age of 18 living with them, 84.7 % were married couples living together, 5.6 % had a female householder with no husband present, and 7.2 % were non-families. 5.0 % of all households were made up of individuals and 0.3 % had someone living alone who was 65 years of age or older. The average household size was 4.19 and the average family size was 4.34. In the town the population was spread out, with 49.5 % under the age of 18, 4.6 % from 20 to 24, 35.7 % from 25 to 44, 8.0 % from 45 to 64, and 1.8 % who were 65 years of age or older. The median age was 20.3 years. According to the U.S. Census Bureau 's 2007 - 2011 statistics, the median income for a household in the city was $64,676. The per capita income for the town was $17,814 (U.S. Census Bureau 2007 - 2011). About 7.6 % of the population was below the poverty line. In 2015, Eagle Mountain was the 10th most conservative city in the United States as judged by political donations. Eagle Mountain City has a six - member Traditional Council form of government. The Mayor is a non-voting member of the Council except in the situation of a tie vote.
The mayor acts as an elected executive, with the city council functioning with legislative powers. By ordinance, Eagle Mountain offers candidates for mayor the option of declaring the office as a primary source of income, at $70,000 per year, or as a secondary source of income, at $27,700. The mayor may select a chief administrative officer to oversee the different departments. The current mayor is Tom Westmoreland, who took office in January 2018. Over the course of its incorporation, Eagle Mountain City has seen voter turnout range from 3 % of registered voters (2014) to 95 % (1997). Eagle Mountain is located in the Alpine School District and currently has five elementary schools (Eagle Valley, Hidden Hollow, Mountain Trails, Pony Express, and Blackridge). Frontier Middle School serves students in grades 7 - 9, except for those in the Silverlake area, who attend Vista Heights Middle School in Saratoga Springs. High school students attend Westlake High School in nearby Saratoga Springs. A high school in Eagle Mountain is planned to open in 2019. Samuel Jarman is the Superintendent of Schools. The city also has two public charter schools (The Ranches Academy, grades K - 6, and Rockwell Charter High School, grades 7 - 12). The three major roads running into Eagle Mountain are Utah State Route 73, which runs through the northern part of the city and along its western edge into Cedar Fort; Eagle Mountain Boulevard, which goes straight to the city center; and Pony Express Parkway, which was extended east to Redwood Road in Saratoga Springs in 2010. This was done to facilitate access to the rest of Utah County via connection with Pioneer Crossing, the east - west connector from Redwood Road to I - 15. In 2008, the Utah Transit Authority (UTA) began service on an express bus route (# 806) into Eagle Mountain, the first UTA bus route to serve the city. The city lists four regional parks and about 35 local parks. Eagle Mountain City parks are identified on the city 's Parks Finder Map. In 2009, Eagle Mountain opened the Mountain Ranch Bike Park. This park is the first of its kind on the Wasatch Front. It features a jump line, two slope style tracks, a single track network, and a skills area with a pump track and wood features. On January 20, 2015, the city council approved a budget for expanding Cory Wride Memorial Park.
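The census figures quoted above are easy to check against each other: density is simply population divided by area. A minimal sketch of the arithmetic, using the 2010 population (21,415) and the total area (41.7 square miles) from the article, with the standard conversion of 2.58999 km² per square mile:

```python
# Verify the census density figures quoted above.
SQ_MI_TO_KM2 = 2.58999  # square miles to square kilometres

population = 21_415  # 2010 census
area_sq_mi = 41.7    # total area, square miles

density_sq_mi = population / area_sq_mi
density_km2 = population / (area_sq_mi * SQ_MI_TO_KM2)

print(f"{density_sq_mi:.1f} per sq mi")  # ~513.6 per sq mi
print(f"{density_km2:.1f} per km^2")     # ~198.3 per km^2
```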
what type of animals are crocodiles and alligators
Crocodile - Wikipedia Crocodiles (subfamily Crocodylinae) or true crocodiles are large aquatic reptiles that live throughout the tropics in Africa, Asia, the Americas and Australia. Crocodylinae, all of whose members are considered true crocodiles, is classified as a biological subfamily. A broader sense of the term crocodile, covering the family Crocodylidae, which includes Tomistoma, is not used in this article. The term crocodile here applies only to the species within the subfamily Crocodylinae. The term is sometimes used even more loosely to include all extant members of the order Crocodilia, which includes Tomistoma, the alligators and caimans (family Alligatoridae), the gharials (family Gavialidae), and all other living and fossil Crocodylomorpha. Although they appear similar to the untrained eye, crocodiles, alligators and the gharial belong to separate biological families. The gharial, with its narrow snout, is easier to distinguish, while morphological differences are more difficult to spot in crocodiles and alligators. The most obvious external differences are visible in the head, with crocodiles having narrower and longer heads, with a more V - shaped snout compared to the U - shaped snout of alligators and caimans. Another obvious trait is that the upper and lower jaws of crocodiles are the same width, and the teeth in the lower jaw fall along the edge or outside the upper jaw when the mouth is closed; therefore, all teeth are visible, unlike in an alligator, which possesses small depressions in the upper jaw into which the lower teeth fit. Also, when the crocodile 's mouth is closed, the large fourth tooth in the lower jaw fits into a constriction in the upper jaw. For hard - to - distinguish specimens, the protruding tooth is the most reliable feature to define the species ' family. Crocodiles have more webbing on the toes of the hind feet and can better tolerate saltwater due to specialized salt glands for filtering out salt, which are present but non-functioning in alligators. Another trait that separates crocodiles from other crocodilians is their much higher level of aggression. Crocodile size, morphology, behaviour and ecology differ somewhat among species. However, they have many similarities in these areas as well. All crocodiles are semiaquatic and tend to congregate in freshwater habitats such as rivers, lakes and wetlands, and sometimes in brackish water and saltwater. They are carnivorous animals, feeding mostly on vertebrates such as fish, reptiles, birds and mammals, and sometimes on invertebrates such as molluscs and crustaceans, depending on species and age. All crocodiles are tropical species that, unlike alligators, are very sensitive to cold. They separated from other crocodilians during the Eocene epoch, about 55 million years ago. Many species are at risk of extinction, some being classified as critically endangered. The word "crocodile '' comes from the Ancient Greek κροκόδιλος (crocodilos), "lizard '', used in the phrase ho krokódilos tou potamoú, "the lizard of the (Nile) river ''. There are several variant Greek forms of the word attested, including the later form κροκόδειλος (crocodeilos) found cited in many English reference works. In the Koine Greek of Roman times, crocodilos and crocodeilos would have been pronounced identically, and either or both may be the source of the Latinized form crocodīlus used by the ancient Romans.
Crocodilos or crocodeilos is a compound of krokè ("pebbles ''), and drilos / dreilos ("worm ''), although drilos is only attested as a colloquial term for "penis ''. It is ascribed to Herodotus, and supposedly describes the basking habits of the Egyptian crocodile. The form crocodrillus is attested in Medieval Latin. It is not clear whether this is a medieval corruption or derives from alternative Greco - Latin forms (late Greek corcodrillos and corcodrillion are attested). A (further) corrupted form cocodrille is found in Old French and was borrowed into Middle English as cocodril (le). The Modern English form crocodile was adapted directly from the Classical Latin crocodīlus in the 16th century, replacing the earlier form. The use of - y - in the scientific name Crocodylus (and forms derived from it) is a corruption introduced by Laurenti (1768). A total of 14 extant species have been recognized. Further genetic study is needed for the confirmation of proposed species under the genus Osteolaemus, which is currently monotypic. A crocodile 's physical traits allow it to be a successful predator. Its external morphology is a sign of its aquatic and predatory lifestyle. Its streamlined body enables it to swim swiftly; it also tucks its feet to the side while swimming, making it faster by decreasing water resistance. Crocodiles have webbed feet which, though not used to propel them through the water, allow them to make fast turns and sudden moves in the water or initiate swimming. Webbed feet are an advantage in shallower water where the animals sometimes move around by walking. Crocodiles have a palatal flap, a rigid tissue at the back of the mouth that blocks the entry of water. The palate has a special path from the nostril to the glottis that bypasses the mouth. The nostrils are closed during submergence. Like other archosaurs, crocodilians are diapsid, although their post-temporal fenestrae are reduced. The walls of the braincase are bony but lack supratemporal and postfrontal bones. Their tongues are not free, but held in place by a membrane that limits movement; as a result, crocodiles are unable to stick out their tongues. Crocodiles have smooth skin on their bellies and sides, while their dorsal surfaces are armoured with large osteoderms. The armoured skin has scales and is thick and rugged, providing some protection. They are still able to absorb heat through this armour, as a network of small capillaries allows blood through the scales to absorb heat. Crocodilian scales have pores believed to be sensory in function, analogous to the lateral line in fishes. They are particularly seen on their upper and lower jaws. Another possibility is that they are secretory, as they produce an oily substance which appears to flush mud off. Size greatly varies among species, from the dwarf crocodile to the saltwater crocodile. Species of Osteolaemus grow to an adult size of just 1.5 to 1.9 m (4.9 to 6.2 ft), whereas the saltwater crocodile can grow to sizes over 7 m (23 ft) and weigh 1,000 kg (2,200 lb). Several other large species can reach over 5.2 m (17 ft) long and weigh over 900 kg (2,000 lb). Crocodilians show pronounced sexual dimorphism, with males growing much larger and more rapidly than females. Despite their large adult sizes, crocodiles start their lives at around 20 cm (7.9 in) long. The largest species of crocodile is the saltwater crocodile, found in eastern India, northern Australia, throughout South - east Asia, and in the surrounding waters. 
The largest crocodile ever held in captivity is an estuarine -- Siamese hybrid named Yai (Thai: ใหญ่, meaning big) (born 10 June 1972) at the Samutprakarn Crocodile Farm and Zoo, Thailand. This animal measures 6 m (20 ft) in length and weighs 1,114 kg (2,456 lb). The longest crocodile captured alive is Lolong, which was measured at 6.17 m (20.2 ft) and weighed 1,075 kg (2,370 lb) by a National Geographic team in Agusan del Sur Province, Philippines. Crocodiles are polyphyodonts; they are able to replace each of their 80 teeth up to 50 times in their 35 - to 75 - year lifespan. Next to each full - grown tooth, there is a small replacement tooth and an odontogenic stem cell in the dental lamina on standby that can be activated if required. Crocodilians are more closely related to birds and dinosaurs than to most animals classified as reptiles, the three families being included in the group Archosauria (' ruling reptiles '). Despite their prehistoric look, crocodiles are among the more biologically complex reptiles. Unlike other reptiles, a crocodile has a cerebral cortex and a four - chambered heart. Crocodilians also have the functional equivalent of a diaphragm, by incorporating muscles used for aquatic locomotion into respiration. Salt glands are present in the tongues of crocodiles, with a pore opening on the surface of the tongue, a trait that separates them from alligators. Salt glands are dysfunctional in Alligatoridae. Their function appears to be similar to that of salt glands in marine turtles. Crocodiles do not have sweat glands and release heat through their mouths. They often sleep with their mouths open and may pant like a dog. Four species of freshwater crocodile climb trees to bask in areas lacking a shoreline. Crocodiles have acute senses, an evolutionary advantage that makes them successful predators. The eyes, ears and nostrils are located on top of the head, allowing the crocodile to lie low in the water, almost totally submerged and hidden from prey. Crocodiles have very good night vision and are mostly nocturnal hunters, using most prey animals ' poor nocturnal vision to their advantage. The light receptors in crocodilians ' eyes include cones and numerous rods, so it is assumed all crocodilians can see colours. Crocodiles have vertical - slit shaped pupils, similar to those of domestic cats. One explanation for the evolution of slit pupils is that they exclude light more effectively than a circular pupil, helping to protect the eyes during daylight. On the rear wall of the eye is a tapetum lucidum, which reflects incoming light back onto the retina, thus utilizing the small amount of light available at night to best advantage. In addition to the protection of the upper and lower eyelids, crocodiles have a nictitating membrane (sometimes called a "third eyelid '') that can be drawn over the eye from the inner corner while the lids are open. The eyeball surface is thus protected under the water while a certain degree of vision is still possible. The crocodilian sense of smell is also very well developed, helping them to detect prey or animal carcasses, either on land or in water, from far away. It is possible that crocodiles use olfaction in the egg prior to hatching. Chemoreception in crocodiles is especially interesting because they hunt in both terrestrial and aquatic surroundings.
Crocodiles have only one olfactory chamber and the vomeronasal organ is absent in the adults, indicating that all olfactory perception is limited to the olfactory system. Behavioural and olfactometer experiments indicate that crocodiles detect both air - borne and water - soluble chemicals and use their olfactory system for hunting. When above water, crocodiles enhance their ability to detect volatile odorants by gular pumping, a rhythmic movement of the floor of the pharynx. Crocodiles close their nostrils when submerged, so olfaction underwater is unlikely. Underwater food detection is presumably gustatory and tactile. Crocodiles can hear well; their tympanic membranes are concealed by flat flaps that may be raised or lowered by muscles. Cranial: The upper and lower jaws are covered with sensory pits, visible as small, black speckles on the skin, the crocodilian version of the lateral line organs seen in fish and many amphibians, though arising from a completely different origin. These pigmented nodules encase bundles of nerve fibers innervated beneath by branches of the trigeminal nerve. They respond to the slightest disturbance in surface water, detecting vibrations and pressure changes as small as those made by a single drop. This makes it possible for crocodiles to detect prey, danger and intruders, even in total darkness. These sense organs are known as domed pressure receptors (DPRs). Post-Cranial: While alligators and caimans have DPRs only on their jaws, crocodiles have similar organs on almost every scale on their bodies. The function of the DPRs on the jaws is clear: to catch prey. It is still not clear what the function of the organs on the rest of the body is. The receptors flatten when exposed to increased osmotic pressure, such as that experienced when swimming in sea water hyperosmotic to the body fluids. When contact between the integument and the surrounding sea water solution is blocked, crocodiles are found to lose their ability to discriminate salinities. It has been proposed that the flattening of the sensory organ in hyperosmotic sea water is sensed by the animal as "touch '', but interpreted as chemical information about its surroundings. This might be why alligators lack them on the rest of the body. Crocodiles are ambush predators, waiting for fish or land animals to come close, then rushing out to attack. Crocodiles mostly eat fish, amphibians, crustaceans, molluscs, birds, reptiles, and mammals, and they occasionally cannibalize smaller crocodiles. What a crocodile eats varies greatly with species, size and age. From the mostly fish - eating species, like the slender - snouted and freshwater crocodiles, to the larger species like the Nile crocodile and the saltwater crocodile that prey on large mammals, such as buffalo, deer and wild boar, diet shows great diversity. Diet is also greatly affected by the size and age of the individual within the same species. All young crocodiles hunt mostly invertebrates and small fish, gradually moving on to larger prey. Being ectothermic (cold - blooded) predators, they have a very slow metabolism, so they can survive long periods without food. Despite their appearance of being slow, crocodiles have a very fast strike and are top predators in their environment, and various species have been observed attacking and killing other predators such as sharks and big cats. As opportunistic predators, crocodiles will also prey upon young and dying elephants and hippos when given the chance.
Crocodiles are also known to be aggressive scavengers that feed upon carrion and steal from other predators. Evidence suggests that crocodiles also feed upon fruits, based on the discovery of seeds in stools and stomachs from many subjects as well as accounts of them feeding. Crocodiles have the most acidic stomach of any vertebrate. They can easily digest bones, hooves and horns. BBC TV reported that a Nile crocodile that has lurked a long time underwater to catch prey builds up a large oxygen debt. When it has caught and eaten that prey, it closes its right aortic arch and uses its left aortic arch to flush blood loaded with carbon dioxide from its muscles directly to its stomach; the resulting excess acidity in its blood supply makes it much easier for the stomach lining to secrete more stomach acid to quickly dissolve bulks of swallowed prey flesh and bone. Many large crocodilians swallow stones (called gastroliths or stomach stones), which may act as ballast to balance their bodies or assist in crushing food, similar to grit ingested by birds. Herodotus claimed that Nile crocodiles had a symbiotic relationship with certain birds, such as the Egyptian plover, which enter the crocodile 's mouth and pick leeches feeding on the crocodile 's blood; with no evidence of this interaction actually occurring in any crocodile species, it is most likely mythical or allegorical fiction. Since they feed by grabbing and holding onto their prey, crocodiles have evolved sharp teeth for piercing and holding onto flesh, and powerful muscles to close the jaws and hold them shut. The teeth are not well - suited to tearing flesh off large prey items as are the dentition and claws of many mammalian carnivores, the hooked bills and talons of raptorial birds, or the serrated teeth of sharks. However, this is an advantage rather than a disadvantage to the crocodile, since the properties of the teeth allow it to hold onto prey with the least possibility of the prey escaping. Sharper, blade - like teeth, combined with the exceptionally high bite force, would cut straight through the flesh, creating an escape opportunity for the prey. The jaws can bite down with immense force, by far the strongest bite of any animal. The force of a large crocodile 's bite is more than 5,000 lbf (22,000 N), measured in the field on a 5.5 m (18 ft) Nile crocodile, compared to just 335 lbf (1,490 N) for a Rottweiler, 670 lbf (3,000 N) for a great white shark, 800 lbf (3,600 N) for a hyena, or 2,200 lbf (9,800 N) for an American alligator. A 5.2 m (17 ft) long saltwater crocodile has been confirmed as having the strongest bite force ever recorded for an animal in a laboratory setting. It was able to apply a bite force of 3,700 lbf (16,000 N), and thus surpassed the previous record of 2,125 lbf (9,450 N) made by a 3.9 m (13 ft) long American alligator. Taking the measurements of several 5.2 m (17 ft) crocodiles as reference, the bite forces of 6 - m individuals were estimated at 7,700 lbf (34,000 N). The study, led by Dr. Gregory M. Erickson, also shed light on the larger, extinct species of crocodilians. Since crocodile anatomy has changed only slightly over the last 80 million years, current data on modern crocodilians can be used to estimate the bite force of extinct species. An 11 to 12 metres (36 -- 39 ft) long Deinosuchus would apply a force of 23,100 lbf (103,000 N), twice the latest, higher bite force estimates for Tyrannosaurus. The extraordinary bite of crocodilians is a result of their anatomy.
The space for the jaw muscle in the skull is very large, which is easily visible from the outside as a bulge at each side. The muscle is so stiff that it is almost as hard as bone to the touch, as if it were a continuation of the skull. Another trait is that most of the muscle in a crocodile 's jaw is arranged for clamping down. Despite the strong muscles that close the jaw, crocodiles have extremely small and weak muscles to open it. Crocodiles can thus be subdued for study or transport by taping their jaws or holding their jaws shut with large rubber bands cut from automobile inner tubes. Crocodiles can move quickly over short distances, even out of water. The land speed record for a crocodile is 17 km / h (11 mph), measured in a galloping Australian freshwater crocodile. Maximum speed varies between species. Some species can gallop, including Cuban crocodiles, Johnston 's crocodiles, New Guinea crocodiles, African dwarf crocodiles, and even small Nile crocodiles. The fastest means by which most species can move is a "belly run '', in which the body moves in a snake - like (sinusoidal) fashion, limbs splayed out to either side paddling away frantically while the tail whips to and fro. Crocodiles can reach speeds of 10 -- 11 km / h (6 -- 7 mph) when they "belly run '', and often faster if slipping down muddy riverbanks. When a crocodile walks quickly, it holds its legs in a straighter and more upright position under its body, which is called the "high walk ''. This walk allows a speed of up to 5 km / h. Crocodiles may possess a homing instinct. In northern Australia, three rogue saltwater crocodiles were relocated 400 km (249 mi) by helicopter, but returned to their original locations within three weeks, based on data obtained from tracking devices attached to them. Measuring crocodile age is unreliable, although several techniques are used to derive a reasonable guess. The most common method is to measure lamellar growth rings in bones and teeth -- each ring corresponds to a change in growth rate, which typically occurs once a year between dry and wet seasons. Bearing these inaccuracies in mind, it can be safely said that all crocodile species have an average lifespan of at least 30 -- 40 years, and in the case of larger species an average of 60 -- 70 years. The oldest crocodiles appear to be the largest species. C. porosus is estimated to live around 70 years on average, with limited evidence of some individuals exceeding 100 years. In captivity, some individuals are claimed to have lived for over a century. A male crocodile lived to an estimated age of 110 -- 115 years in a Russian zoo in Yekaterinburg. Named Kolya, he joined the zoo around 1913 to 1915, fully grown, after touring in an animal show, and lived until 1995. A male freshwater crocodile lived to an estimated age of 120 -- 140 years at the Australia Zoo. Known affectionately as "Mr. Freshie '', he was rescued around 1970 by Bob Irwin and Steve Irwin, after being shot twice by hunters and losing an eye as a result, and lived until 2010. Crocworld Conservation Centre, in Scottburgh, South Africa, claims to have a male Nile crocodile that was born in 1900 (age 116 -- 117). Named Henry, the crocodile is said to have lived in Botswana along the Okavango River, according to centre director Martin Rodrigues. Crocodiles are the most social of reptiles. Even though they do not form social groups, many species congregate in certain sections of rivers, tolerating each other at times of feeding and basking.
Most species are not highly territorial, with the exception of the saltwater crocodile, which is a highly territorial and aggressive species: a mature male will not tolerate any other males at any time of the year. Most other species are more flexible. There is a certain form of hierarchy in crocodiles: the largest and heaviest males are at the top, having access to the best basking sites, while females take priority during group feeding on a big kill or carcass. A good example of this hierarchy is the Nile crocodile, which clearly displays all of these behaviours. Studies in this area are not thorough, however, and many species are yet to be studied in greater detail. Mugger crocodiles are also known to show toleration in group feedings and tend to congregate in certain areas. However, males of all species are aggressive towards each other during mating season, to gain access to females. Crocodiles are also the most vocal of all reptiles, producing a wide variety of sounds during various situations and conditions, depending on species, age, size and sex. Depending on the context, some species can communicate over 20 different messages through vocalizations alone. Some of these vocalizations are made during social communication, especially during territorial displays towards the same sex and courtship with the opposite sex, the common concern being reproduction. Therefore, most conspecific vocalization is made during the breeding season, the exceptions being year - round territorial behaviour in some species and quarrels during feeding. Crocodiles also produce distress calls and vocalize during aggressive displays towards their own kind and other animals, notably other predators during interspecific predatory confrontations over carcasses and terrestrial kills. Specific vocalisations include:
Chirp: When about to hatch, the young make a "peeping '' noise which encourages the female to excavate the nest. The female then gathers the hatchlings in her mouth and transports them to the water, where they remain in a group for several months, protected by the female.
Distress call: A high - pitched call used mostly by younger animals to alert other crocodiles to imminent danger or an animal being attacked.
Threat call: A hissing sound that has also been described as a coughing noise.
Hatching call: Emitted by a female when breeding to alert other crocodiles that she has laid eggs in her nest.
Bellowing: Male crocodiles are especially vociferous. Bellowing choruses occur most often in the spring when breeding groups congregate, but can occur at any time of year. To bellow, males noticeably inflate as they raise the tail and head out of water, slowly waving the tail back and forth. They then puff out the throat and, with a closed mouth, begin to vibrate air. Just before bellowing, males project an infrasonic signal at about 10 Hz through the water, which vibrates the ground and nearby objects. These low - frequency vibrations travel great distances through both air and water to advertise the male 's presence and are so powerful they result in the water appearing to ' dance '.
Crocodiles lay their eggs in either hole or mound nests, depending on species. A hole nest is usually excavated in sand and a mound nest is usually constructed out of vegetation. Nesting periods range from a few weeks up to six months.
Courtship takes place in a series of behavioural interactions that include a variety of snout rubbing and submissive displays, and can take a long time. Mating always takes place in water, where the pair can be observed mating several times. Females may build or dig several trial nests, which appear incomplete and are later abandoned. Egg - laying usually takes place at night and takes about 30 -- 40 minutes. Females are highly protective of their nests and young. The eggs are hard - shelled, but translucent at the time of egg - laying. Depending on the species of crocodile, 7 to 95 eggs are laid. Crocodile embryos do not have sex chromosomes, and unlike humans, sex is not determined genetically. Sex is determined by temperature: at 30 ° C (86 ° F) or less most hatchlings are females, and at 31 ° C (88 ° F) offspring are of both sexes. A temperature of 32 to 33 ° C (90 to 91 ° F) gives mostly males, whereas above 33 ° C (91 ° F) some species continue to produce males while others produce females, which are sometimes called high - temperature females. Temperature also affects growth and survival rate of the young, which may explain the sexual dimorphism in crocodiles. The average incubation period is around 80 days, and is dependent on temperature and species, usually ranging from 65 to 95 days. The eggshell structure is very conservative through evolution, but there are enough changes to tell different species apart by their eggshell microstructure. At the time of hatching, the young start calling within the eggs. They have an egg - tooth at the tip of their snouts, which is developed from the skin and helps them break out of the shell. Hearing the calls, the female usually excavates the nest and sometimes takes the unhatched eggs in her mouth, slowly rolling the eggs to help the process. The young are usually carried to the water in her mouth. She then introduces her hatchlings to the water and may even feed them herself. The mother takes care of her young for over a year before the next mating season. In the absence of the mother, the father takes over care of the young. However, even with such sophisticated parental nurturing, young crocodiles have a very high mortality rate due to their vulnerability to predation. A group of hatchlings is called a pod or crèche and may be protected for months. Crocodiles possess some advanced cognitive abilities. They can observe and use patterns of prey behaviour, such as when prey come to the river to drink at the same time each day. Vladimir Dinets of the University of Tennessee observed that crocodiles use twigs as bait for birds looking for nesting material. They place sticks on their snouts and partly submerge themselves. When the birds swoop in to get the sticks, the crocodiles catch them. Crocodiles do this only during the birds ' spring nesting season, when there is high demand for sticks for building nests. Dinets also found similar observations from various scientists, some dating back to the 19th century. Aside from using sticks, crocodiles are also capable of cooperative hunting. Large numbers of crocodiles swim in circles to trap fish and take turns snatching them. In hunting larger prey, crocodiles swarm in, with one holding the prey down as the others rip it apart. Most species are grouped into the genus Crocodylus. The other extant genus, Osteolaemus, is monotypic (as is Mecistops, if recognized).
The cladogram below follows the topology from a 2012 analysis of morphological traits by Christopher A. Brochu and Glenn W. Storrs. Many extinct species of Crocodylus might represent different genera. "Crocodylus '' pigotti, for example, was placed in the newly erected genus Brochuchus in 2013. C. suchus was not included because its morphological codings were identical to those of C. niloticus. However, the authors suggested that the lack of differences was due to limited specimen sampling, and considered the two species to be distinct. This analysis found weak support for the clade Osteolaeminae. Brochu named Osteolaeminae in 2003 as a subfamily of Crocodylidae separate from Crocodylinae, but the group has since been classified within Crocodylinae. It includes the living genus Osteolaemus as well as the extinct species Voay robustus and Rimasuchus lloydi.
(Cladogram: the taxa included are † "Crocodylus '' pigotti, † "Crocodylus '' gariepensis, † Euthecodon arambourgii, † Euthecodon brumpti, † Rimasuchus lloydi, † Voay robustus, Osteolaemus osborni, Osteolaemus tetraspis, Mecistops cataphractus, † C. checchiai, † C. palaeindicus, † C. anthropophagus, † C. thorbjarnarsoni, C. niloticus, C. siamensis, C. palustris, C. porosus, C. johnsoni, C. mindorensis, C. novaeguineae, C. raninus, C. acutus, C. intermedius, C. rhombifer and C. moreletii; the branching structure is not reproduced here.)
A 2013 analysis by Jack L. Conrad, Kirsten Jenkins, Thomas Lehmann, and others did not support Osteolaeminae as a true clade but rather a paraphyletic group consisting of two smaller clades. They informally called these clades "osteolaemins '' and "mecistopins ''. "Osteolaemins '' include Osteolaemus, Voay, Rimasuchus, and Brochuchus, and "mecistopins '' include Mecistops and Euthecodon. The larger species of crocodiles are very dangerous to humans, mainly because of their ability to strike before the person can react. The saltwater crocodile and Nile crocodile are the most dangerous, killing hundreds of people each year in parts of Southeast Asia and Africa. The mugger crocodile and American crocodile are also dangerous to humans. Crocodiles are protected in many parts of the world, but they also are farmed commercially. Their hides are tanned and used to make leather goods such as shoes and handbags; crocodile meat is also considered a delicacy. The most commonly farmed species are the saltwater and Nile crocodiles, while a hybrid of the saltwater and the rare Siamese crocodile is also bred in Asian farms. Farming has resulted in an increase in the saltwater crocodile population in Australia, as eggs are usually harvested from the wild, so landowners have an incentive to conserve their habitat. Crocodile leather can be made into goods such as wallets, briefcases, purses, handbags, belts, hats, and shoes. Crocodile oil has been used for various purposes. Crocodiles were eaten by the Vietnamese, while they were taboo and off limits for the Chinese. Vietnamese women who married Chinese men adopted the Chinese taboo. Crocodile meat is occasionally eaten as an "exotic '' delicacy in the western world. Crocodiles have appeared in various forms in religions across the world. Ancient Egypt had Sobek, the crocodile - headed god, with his cult - city Crocodilopolis, as well as Taweret, the goddess of childbirth and fertility, with the back and tail of a crocodile. The Jukun shrine in the Wukari Federation, Nigeria, is dedicated to crocodiles in thanks for their aid during migration. Crocodiles appear in different forms in Hinduism. Varuna, a Vedic and Hindu god, rides a part - crocodile makara; his consort Varuni rides a crocodile.
Similarly, the goddess personifications of the Ganga and Yamuna rivers are often depicted as riding crocodiles. Also in India, in Goa, crocodile worship is practised, including the annual Mannge Thapnee ceremony. In Latin America, Cipactli was the giant earth crocodile of the Aztec and other Nahua peoples. The term "crocodile tears '' (and equivalents in other languages) refers to a false, insincere display of emotion, such as a hypocrite crying fake tears of grief. It is derived from an ancient anecdote that crocodiles weep in order to lure their prey, or that they cry for the victims they are eating, first told in the Bibliotheca by Photios I of Constantinople. The story is repeated in bestiaries such as De bestiis et aliis rebus. This tale was first spread widely in English in the stories of the Travels of Sir John Mandeville in the 14th century, and appears in several of Shakespeare 's plays. In fact, crocodiles can and do generate tears, but they do not actually cry. The name of Surabaya, Indonesia, is locally believed to be derived from the words "suro '' (shark) and "boyo '' (crocodile), two creatures which, in a local myth, fought each other in order to gain the title of "the strongest and most powerful animal '' in the area. It was said that the two powerful animals agreed to a truce and set boundaries: the shark 's domain would be in the sea, while the crocodile 's domain would be on the land. However, one day the shark swam into the river estuary to hunt. This angered the crocodile, who declared the river his territory. The shark argued that the river was a water realm, which meant that it was shark territory, while the crocodile argued that the river flowed deep inland, so it was therefore crocodile territory. A ferocious fight ensued as the two animals bit each other. Finally, the shark was badly bitten and fled to the open sea, and the crocodile ruled the estuarine area that today is the city. Another source alludes to a prophecy of Jayabaya, a 12th - century psychic king of the Kediri Kingdom, who foresaw a fight between a giant white shark and a giant white crocodile taking place in the area, which is sometimes interpreted as a foretelling of the Mongol invasion of Java, a major conflict between the forces of the Kublai Khan, Mongol ruler of China, and those of Raden Wijaya 's Majapahit in 1293. The two animals are now used as the city 's symbol, with the two facing and circling each other, as depicted in a statue located near the entrance to the city zoo. In the UK, a row of schoolchildren walking in pairs, or two by two, is known as a ' crocodile '.
world war 1 gas warfare tactics and equipment
Chemical weapons in World War I - Wikipedia Although the use of toxic chemicals as weapons dates back thousands of years, the first large scale use of chemical weapons was during World War I. They were primarily used to demoralize, injure, and kill entrenched defenders, against whom the indiscriminate and generally very slow - moving or static nature of gas clouds would be most effective. The types of weapons employed ranged from disabling chemicals, such as tear gas, to lethal agents like phosgene, chlorine, and mustard gas. This chemical warfare was a major component of the first global war and first total war of the 20th century. The killing capacity of gas was limited, with about ninety thousand fatalities from a total of some 1.3 million casualties caused by gas attacks. Gas was unlike most other weapons of the period because it was possible to develop countermeasures, such as gas masks. In the later stages of the war, as the use of gas increased, its overall effectiveness diminished. The widespread use of these agents of chemical warfare, and wartime advances in the composition of high explosives, gave rise to an occasionally expressed view of World War I as "the chemist 's war '' and also the era in which "weapons of mass destruction '' were created. The use of poison gas by all major belligerents throughout World War I constituted a war crime, as it violated the 1899 Hague Declaration Concerning Asphyxiating Gases and the 1907 Hague Convention on Land Warfare, which prohibited the use of "poison or poisoned weapons '' in warfare. The earliest military uses of chemicals were tear - inducing irritants rather than fatal or disabling poisons. During World War I, the French army was the first to employ gas, using 26 mm grenades filled with tear gas (ethyl bromoacetate) in August 1914. The small quantities of gas delivered, roughly 19 cm³ per cartridge, were not even detected by the Germans. The stocks were rapidly consumed and by November a new order was placed by the French military. As bromine was scarce among the Entente allies, the active ingredient was changed to chloroacetone. In October 1914, German troops fired fragmentation shells filled with a chemical irritant against British positions at Neuve Chapelle, though the concentration achieved was so small that it too was barely noticed. None of the combatants considered the use of tear gas to be in conflict with the Hague Treaty of 1899, which prohibited the launching of projectiles containing asphyxiating or poisonous gas. The first instance of large - scale use of gas as a weapon was on 31 January 1915, when Germany fired 18,000 artillery shells containing liquid xylyl bromide tear gas on Russian positions on the Rawka River, west of Warsaw, during the Battle of Bolimov. However, instead of vaporizing, the chemical froze and failed to have the desired effect. The first killing agent used by the German military was chlorine, a powerful irritant that can inflict damage to the eyes, nose, throat and lungs. At high concentrations and with prolonged exposure, it can cause death by asphyxiation. German chemical companies BASF, Hoechst and Bayer (which formed the IG Farben conglomerate in 1925) had been producing chlorine as a by - product of their dye manufacturing. In cooperation with Fritz Haber of the Kaiser Wilhelm Institute for Chemistry in Berlin, they began developing methods of discharging chlorine gas against enemy trenches.
According to the fieldpost letter of Major Karl von Zingler, the first chlorine gas attack by German forces took place before 2 January 1915: "In other war theaters it does not go better and it has been said that our Chlorine is very effective. 140 English officers have been killed. This is a horrible weapon... ''. By 22 April 1915, the German Army had 168 tons of chlorine deployed in 5,730 cylinders from Langemark -- Poelkapelle, north of Ypres. At 17: 30, in a slight easterly breeze, the gas was released, forming a gray - green cloud that drifted across positions held by French Colonial troops from Martinique, who broke ranks, abandoning their trenches and creating an 8,000 - yard (7 km) gap in the Allied line. However, the German infantry were also wary of the gas and, lacking reinforcements, failed to exploit the break before the 1st Canadian Division and assorted French troops reformed the line in scattered, hastily prepared positions 1,000 -- 3,000 yards (910 -- 2,740 m) apart. The Entente governments quickly claimed the attack was a flagrant violation of international law, but Germany argued that the Hague treaty had only banned chemical shells, rather than the use of gas projectors. In what became the Second Battle of Ypres, the Germans used gas on three more occasions; on 24 April against the 1st Canadian Division, on 2 May near Mouse Trap Farm and on 5 May against the British at Hill 60. The British Official History stated that at Hill 60, "90 men died from gas poisoning in the trenches or before they could be got to a dressing station; of the 207 brought to the nearest dressing stations, 46 died almost immediately and 12 after long suffering. '' On August 6, German troops used chlorine gas against Russian troops defending the Fortress of Osowiec. Surviving defenders drove back the attack and retained the fortress. Germany also used chemical weapons on the eastern front in an attack at Rawka, south of Warsaw; the Russian army took 9,000 casualties, with more than 1,000 fatalities. In response, the artillery branch of the Russian army organized a commission to study the delivery of poison gas in shells. It quickly became evident that the men who stayed in their places suffered less than those who ran away, as any movement worsened the effects of the gas, and that those who stood up on the fire step suffered less -- indeed they often escaped any serious effects -- than those who lay down or sat at the bottom of a trench. Men who stood on the parapet suffered least, as the gas was denser near the ground. The worst sufferers were the wounded lying on the ground, or on stretchers, and the men who moved back with the cloud. Chlorine was less effective as a weapon than the Germans had hoped, particularly as soon as simple countermeasures were introduced. The gas produced a visible greenish cloud and strong odour, making it easy to detect. It was water - soluble, so the simple expedient of covering the mouth and nose with a damp cloth was somewhat effective at reducing the effect of the gas. It was thought to be even more effective to use urine rather than water, as it was known at the time that chlorine reacted readily with urea (present in urine) to form dichlorourea. Chlorine required a concentration of 1,000 parts per million to be fatal, destroying tissue in the lungs, likely through the formation of hydrochloric acid when dissolved in the water in the lungs (2Cl₂ + 2H₂O → 4HCl + O₂).
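To put the fatal concentration quoted above into more familiar units, a volume - based ppm figure can be converted to a mass concentration. The sketch below is illustrative only; it assumes the 1,000 ppm figure is by volume and that the air is at roughly 25 ° C and 1 atm, where one mole of ideal gas occupies about 24.45 litres.

    # Illustrative conversion of the 1,000 ppm fatal chlorine concentration
    # cited above into a mass concentration. Assumes ppm by volume and air
    # at ~25 C and 1 atm, where one mole of ideal gas occupies ~24.45 L.

    MOLAR_MASS_CL2 = 70.9  # g/mol, molecular chlorine (Cl2)
    MOLAR_VOLUME = 24.45   # L/mol, ideal gas at 25 C and 1 atm

    def ppm_to_mg_per_m3(ppm):
        # mg/m^3 = ppm x molar mass (g/mol) / molar volume (L/mol)
        return ppm * MOLAR_MASS_CL2 / MOLAR_VOLUME

    print(ppm_to_mg_per_m3(1000))  # ~2900 mg/m^3, i.e. about 2.9 g/m^3

In other words, the fatal threshold quoted corresponds to roughly 3 grams of chlorine per cubic metre of air, under the stated assumptions.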
Despite its limitations, however, chlorine was an effective psychological weapon -- the sight of an oncoming cloud of the gas was a continual source of dread for the infantry. Countermeasures were quickly introduced in response to the use of chlorine. The Germans issued their troops with small gauze pads filled with cotton waste, and bottles of a bicarbonate solution with which to dampen the pads. Immediately following the use of chlorine gas by the Germans, instructions were sent to British and French troops to hold wet handkerchiefs or cloths over their mouths. Simple pad respirators similar to those issued to German troops were soon proposed by Lieutenant - Colonel N.C. Ferguson, the A.D.M.S. of the 28th Division. These pads were intended to be used damp, preferably dipped into a solution of bicarbonate kept in buckets for that purpose, though other liquids were also used. Because such pads could not be expected to arrive at the front for several days, army divisions set about making them for themselves. Locally available muslin, flannel and gauze were used, officers were sent to Paris to buy more and local French women were employed making up rudimentary pads with string ties. Other units used lint bandages manufactured in the convent at Poperinge. Pad respirators were sent up with rations to British troops in the line as early as the evening of 24 April. In Britain the Daily Mail newspaper encouraged women to manufacture cotton pads, and within one month a variety of pad respirators were available to British and French troops, along with motoring goggles to protect the eyes. The response was enormous and a million gas masks were produced in a day. Unfortunately, the Mail 's design was useless when dry and caused suffocation when wet -- the respirator was responsible for the deaths of scores of men. By 6 July 1915, the entire British army was equipped with the far more effective "smoke helmet '' designed by Major Cluny MacPherson, Newfoundland Regiment, a flannel bag with a celluloid window that entirely covered the head. The race was then on between the introduction of new and more effective poison gases and the production of effective countermeasures, which marked gas warfare until the armistice in November 1918. The British expressed outrage at Germany 's use of poison gas at Ypres but responded by developing their own gas warfare capability. The commander of II Corps, Lieutenant General Sir Charles Ferguson, said of gas: It is a cowardly form of warfare which does not commend itself to me or other English soldiers... We can not win this war unless we kill or incapacitate more of our enemies than they do of us, and if this can only be done by our copying the enemy in his choice of weapons, we must not refuse to do so. The first use of gas by the British was at the Battle of Loos, 25 September 1915, but the attempt was a disaster. Chlorine, codenamed Red Star, was the agent to be used (140 tons arrayed in 5,100 cylinders), and the attack was dependent on a favorable wind. On this occasion, however, the wind proved fickle, and the gas either lingered in no man 's land or, in places, blew back on the British trenches. This debacle was compounded when the gas could not be released from all the British canisters because the wrong turning keys were sent with them. Subsequent retaliatory German shelling hit some of those unused full cylinders, releasing more gas among the British troops. Exacerbating the situation were the primitive flannel gas masks distributed to the British.
The masks got hot, and the small eye - pieces misted over, reducing visibility. Some of the troops lifted the masks to get some fresh air, causing them to be gassed. The deficiencies of chlorine were overcome with the introduction of phosgene, which was prepared by a group of French chemists led by Victor Grignard and first used by France in 1915. Colourless and having an odour likened to "mouldy hay, '' phosgene was difficult to detect, making it a more effective weapon. Although phosgene was sometimes used on its own, it was more often used mixed with an equal volume of chlorine, with the chlorine helping to spread the denser phosgene. The Allies called this combination White Star after the marking painted on shells containing the mixture. Phosgene was a potent killing agent, deadlier than chlorine. It had a potential drawback in that some of the symptoms of exposure took 24 hours or more to manifest. This meant that the victims were initially still capable of putting up a fight, although it could also mean that apparently fit troops would be incapacitated by the effects of the gas on the following day. In the first combined chlorine -- phosgene attack by Germany, against British troops at Wieltje near Ypres, Belgium on 19 December 1915, 88 tons of the gas were released from cylinders, causing 1,069 casualties and 69 deaths. The British P gas helmet, issued at the time, was impregnated with sodium phenolate and partially effective against phosgene. The modified PH Gas Helmet, which was impregnated with phenate hexamine and hexamethylene tetramine (urotropine) to improve the protection against phosgene, was issued in January 1916. Around 36,600 tons of phosgene were manufactured during the war, out of a total of 190,000 tons for all chemical weapons, making it second only to chlorine (93,800 tons) in the quantity manufactured. Although phosgene was never as notorious in public consciousness as mustard gas, it killed far more people, causing about 85 % of the 100,000 deaths attributed to chemical weapons during World War I. On 29 June 1916, Austrian forces attacked the Italian lines on Monte San Michele with a mix of phosgene and chlorine gas. Thousands of Italian soldiers died in this first chemical weapons attack on the Italian Front. The most widely reported and, perhaps, the most effective gas of the First World War was mustard gas, a vesicant introduced by Germany in July 1917 prior to the Third Battle of Ypres. The Germans marked their shells yellow for mustard gas and green for chlorine and phosgene; hence they called the new gas Yellow Cross. It was known to the British as HS (Hun Stuff), while the French called it Yperite (named after Ypres). Mustard gas is not a particularly effective killing agent (though in high enough doses it is fatal) but can be used to harass and disable the enemy and pollute the battlefield. Delivered in artillery shells, mustard gas was heavier than air, and it settled to the ground as an oily liquid resembling sherry. Once in the soil, mustard gas remained active for several days, weeks, or even months, depending on the weather conditions. The skin of victims of mustard gas blistered, their eyes became very sore and they began to vomit. Mustard gas caused internal and external bleeding and attacked the bronchial tubes, stripping off the mucous membrane. This was extremely painful. Fatally injured victims sometimes took four or five weeks to die of mustard gas exposure.
One nurse, Vera Brittain, wrote: "I wish those people who talk about going on with this war whatever it costs could see the soldiers suffering from mustard gas poisoning. Great mustard - coloured blisters, blind eyes, all sticky and stuck together, always fighting for breath, with voices a mere whisper, saying that their throats are closing and they know they will choke. '' The polluting nature of mustard gas meant that it was not always suitable for supporting an attack, as the assaulting infantry would be exposed to the gas when they advanced. When Germany launched Operation Michael on 21 March 1918, they saturated the Flesquières salient with mustard gas instead of attacking it directly, believing that the harassing effect of the gas, coupled with threats to the salient 's flanks, would make the British position untenable. Gas never reproduced the dramatic success of 22 April 1915; however, it became a standard weapon which, combined with conventional artillery, was used to support most attacks in the later stages of the war. Gas was employed primarily on the Western Front -- the static, confined trench system was ideal for achieving an effective concentration. Germany also made use of gas against Russia on the Eastern Front, where the lack of effective countermeasures resulted in the deaths of over 56,000 Russians, while Britain experimented with gas in Palestine during the Second Battle of Gaza. Russia began manufacturing chlorine gas in 1916, with phosgene being produced later in the year. However, most of the manufactured gas was never used. The British Army believed that the use of gas was needed, but did not use mustard gas until November 1917 at Cambrai, after their armies had captured a stockpile of German mustard - gas shells. It took the British more than a year to develop their own mustard gas weapon, with production of the chemicals centred on Avonmouth Docks. (The only option available to the British was the Despretz -- Niemann -- Guthrie process). This was used first in September 1918 during the breaking of the Hindenburg Line with the Hundred Days ' Offensive. The Allies mounted more gas attacks than the Germans in 1917 and 1918 because of a marked increase in production of gas by the Allied nations. Germany was unable to keep up with this pace despite creating various new gases for use in battle, mostly as a result of very costly methods of production. Entry into the war by the United States allowed the Allies to increase mustard gas production far more than Germany. Also, the prevailing wind on the Western Front blew from west to east, which meant the British more frequently had favorable conditions for a gas release than did the Germans. When the United States entered the war, it was already mobilizing resources from the academic, industrial and military sectors for research and development into poison gas. A Subcommittee on Noxious Gases was created by the National Research Committee, a major research center was established at Camp American University, and the 1st Gas Regiment was recruited. The 1st Gas Regiment eventually served in France, where it used phosgene gas in a number of attacks. The artillery used mustard gas with significant effect during the Meuse - Argonne Offensive on at least three occasions. The United States began large - scale production of an improved vesicant gas known as Lewisite, for use in an offensive planned for early 1919.
By the time of the armistice on 11 November, a plant near Willoughby, Ohio, was producing 10 tons per day of the substance, for a total of about 150 tons. It is uncertain what effect this new chemical would have had on the battlefield, however, as it degrades in moist conditions. By the end of the war, chemical weapons had lost much of their effectiveness against well trained and equipped troops. At that time, chemical weapon agents had inflicted an estimated 1.3 million casualties. Nevertheless, in the following years, chemical weapons were used in several, mainly colonial, wars where one side had an advantage in equipment over the other. The British used poison gas, possibly adamsite, against Russian revolutionary troops beginning on August 27, 1919 and contemplated using chemical weapons against Iraqi insurgents in the 1920s; Bolshevik troops used poison gas to suppress the Tambov Rebellion in 1920, Spain used chemical weapons in Morocco against Rif tribesmen throughout the 1920s and Italy used mustard gas in Libya in 1930 and again during its invasion of Ethiopia in 1936. In 1925, a Chinese warlord, Zhang Zuolin, contracted a German company to build him a mustard gas plant in Shenyang, which was completed in 1927. Public opinion had by then turned against the use of such weapons, which led to the Geneva Protocol, an updated and extensive prohibition of poison weapons. The Protocol, which was signed by most First World War combatants in 1925, bans the use (but not the stockpiling) of lethal gas and bacteriological weapons. Most countries that signed ratified it within around five years, although a few took much longer -- Brazil, Japan, Uruguay, and the United States did not do so until the 1970s, and Nicaragua ratified it only in 1990. The signatory nations agreed not to use poison gas in the future, stating "the use in war of asphyxiating, poisonous or other gases, and of all analogous liquids, materials or devices, has been justly condemned by the general opinion of the civilized world. '' Although chemical weapons have been used in at least a dozen wars since the end of the First World War, they were not used in combat on a large scale until mustard gas and the more deadly nerve agents were used by Iraq during the 8 - year Iran -- Iraq War. That conflict killed around 20,000 Iranian troops (and injured another 80,000), around a quarter of the number of deaths caused by chemical weapons during the First World War. Although all major combatants stockpiled chemical weapons during the Second World War, the only reports of their use in the conflict were the Japanese use of relatively small amounts of mustard gas and lewisite in China, and very rare occurrences in Europe (for example some sulfur mustard bombs were dropped on Warsaw on 3 September 1939, which Germany acknowledged in 1942 but indicated had been accidental). Mustard gas was the agent of choice, with the British stockpiling 40,719 tons, the Soviets 77,400 tons, the Americans over 87,000 tons and the Germans 27,597 tons. The destruction of an American cargo ship containing mustard gas led to many casualties in Bari, Italy, in December 1943. In both Axis and Allied nations, children in school were taught to wear gas masks in case of gas attack. Germany developed the poison gases tabun, sarin, and soman during the war, and used Zyklon B in its extermination camps. Neither Germany nor the Allied nations used any of their war gases in combat, despite maintaining large stockpiles and occasional calls for their use.
Poison gas played an important role in the Holocaust. Britain made plans to use mustard gas on the landing beaches in the event of an invasion of the United Kingdom in 1940. The United States considered using gas to support their planned invasion of Japan. The contribution of gas weapons to the total casualty figures was relatively minor. British figures, which were accurately maintained from 1916, recorded that only 3 % of gas casualties were fatal, 2 % were permanently invalided and 70 % were fit for duty again within six weeks. It was remarked as a joke that if someone yelled ' Gas ', everyone in France would put on a mask... Gas shock was as frequent as shell shock. As Wilfred Owen wrote in "Dulce et Decorum est '':
Gas! GAS! Quick, boys! -- An ecstasy of fumbling,
Fitting the clumsy helmets just in time;
But someone still was yelling out and stumbling,
And flound'ring like a man in fire or lime...
Dim, through the misty panes and thick green light,
As under a green sea, I saw him drowning.
In all my dreams, before my helpless sight,
He plunges at me, guttering, choking, drowning.
Death by gas was often slow and painful. According to Denis Winter (Death 's Men, 1978), a fatal dose of phosgene eventually led to "shallow breathing and retching, pulse up to 120, an ashen face and the discharge of four pints (2 litres) of yellow liquid from the lungs each hour for the 48 hours of the drowning spasms. '' A common fate of those exposed to gas was blindness, chlorine gas or mustard gas being the main causes. One of the most famous First World War paintings, Gassed by John Singer Sargent, captures such a scene of mustard gas casualties which he witnessed at a dressing station at Le Bac - du - Sud near Arras in July 1918. (The gases used during that battle (tear gas) caused temporary blindness and / or a painful stinging in the eyes; the bandages depicted were normally water - soaked to provide a rudimentary form of pain relief to the eyes of casualties before they reached more organized medical help.) The proportion of mustard gas fatalities to total casualties was low; only 2 % of mustard gas casualties died, and many of these succumbed to secondary infections rather than the gas itself. Once it was introduced at the Third Battle of Ypres, mustard gas produced 90 % of all British gas casualties and 14 % of battle casualties of any type. Mustard gas was a source of extreme dread. In The Anatomy of Courage (1945), Lord Moran, who had been a medical officer during the war, wrote: After July 1917 gas partly usurped the role of high explosive in bringing to head a natural unfitness for war. The gassed men were an expression of trench fatigue, a menace when the manhood of the nation had been picked over. Mustard gas did not need to be inhaled to be effective -- any contact with skin was sufficient. Exposure to 0.1 ppm was enough to cause massive blisters. Higher concentrations could burn flesh to the bone. It was particularly effective against the soft skin of the eyes, nose, armpits and groin, since it dissolved in the natural moisture of those areas. Typical exposure would result in swelling of the conjunctiva and eyelids, forcing them closed and rendering the victim temporarily blind. Where it contacted the skin, moist red patches would immediately appear, which after 24 hours would form into blisters. Other symptoms included severe headache, elevated pulse and temperature (fever), and pneumonia (from blistering in the lungs). Many of those who survived a gas attack were scarred for life.
Respiratory disease and failing eyesight were common post-war afflictions. Of the Canadians who, without any effective protection, had withstood the first chlorine attacks during Second Ypres, 60 % of the casualties had to be repatriated and half of these were still unfit by the end of the war, over three years later. In reading the statistics of the time, one should bear these longer - term effects in mind. Many of those who were fairly soon recorded as fit for service were left with scar tissue in their lungs. This tissue was susceptible to tuberculosis attack. It was from this that many of the 1918 casualties died around the time of the Second World War, shortly before sulfa drugs became widely available for its treatment. A British nurse treating mustard gas cases recorded: They can not be bandaged or touched. We cover them with a tent of propped - up sheets. Gas burns must be agonizing because usually the other cases do not complain even with the worst wounds but gas cases are invariably beyond endurance and they can not help crying out. The distribution of gas cloud casualties was not limited to the front. Nearby towns were at risk from winds blowing the poison gases through. Civilians rarely had a warning system in place to alert their neighbours of the danger, and often did not have access to effective gas masks. When the gas was carried into the towns on the wind, it could easily get into houses through open windows and doors. An estimated 100,000 -- 260,000 civilian casualties were caused by chemical weapons during the conflict, and tens of thousands (along with military personnel) died from scarring of the lungs, skin damage, and cerebral damage in the years after the conflict ended. Many commanders on both sides knew that such weapons would cause major harm to civilians, as wind would blow poison gases into nearby civilian towns, but nonetheless continued to use them throughout the war. British Field Marshal Sir Douglas Haig wrote in his diary: "My officers and I were aware that such weapon would cause harm to women and children living in nearby towns, as strong winds were common on the battlefront. However, because the weapon was to be directed against the enemy, none of us were overly concerned at all. '' None of the First World War 's combatants were prepared for the introduction of poison gas as a weapon. Once gas was introduced, development of gas protection began, and the process continued for much of the war, producing a series of increasingly effective gas masks. Even at Second Ypres, Germany, still unsure of the weapon 's effectiveness, only issued breathing masks to the engineers handling the gas. At Ypres a Canadian medical officer, who was also a chemist, quickly identified the gas as chlorine and recommended that the troops urinate on a cloth and hold it over their mouth and nose. The first official equipment issued was similarly crude: a pad of material, usually impregnated with a chemical, tied over the lower face. To protect the eyes from tear gas, soldiers were issued with gas goggles. The next advance was the introduction of the gas helmet -- basically a bag placed over the head. The fabric of the bag was impregnated with a chemical to neutralize the gas -- however, the chemical would wash out into the soldier 's eyes whenever it rained. Eye - pieces, which were prone to fog up, were initially made from talc.
When going into combat, gas helmets were typically worn rolled up on top of the head, to be pulled down and secured about the neck when the gas alarm was given. The first British version was the Hypo helmet, the fabric of which was soaked in sodium hyposulfite (commonly known as "hypo ''). The British P gas helmet, partially effective against phosgene and with which all infantry were equipped at Loos, was impregnated with sodium phenolate. A mouthpiece was added through which the wearer would breathe out to prevent carbon dioxide build - up. The adjutant of the 1 / 23rd Battalion, The London Regiment, recalled his experience of the P helmet at Loos: The goggles rapidly dimmed over, and the air came through in such suffocatingly small quantities as to demand a continuous exercise of will - power on the part of the wearers. A modified version of the P Helmet, called the PH Helmet, was issued in January 1916, and was additionally impregnated with hexamethylenetetramine to improve the protection against phosgene. Self - contained box respirators represented the culmination of gas mask development during the First World War. Box respirators used a two - piece design: a mouthpiece connected via a hose to a box filter. The box filter contained granules of chemicals that neutralised the gas, delivering clean air to the wearer. Separating the filter from the mask enabled a bulky but efficient filter to be supplied. Nevertheless, the first version, known as the Large Box Respirator (LBR) or "Harrison 's Tower '', was deemed too bulky -- the box canister needed to be carried on the back. The LBR had no mask, just a mouthpiece and nose clip; separate gas goggles had to be worn. It continued to be issued to the artillery gun crews but the infantry were supplied with the "Small Box Respirator '' (SBR). The Small Box Respirator featured a single - piece, close - fitting rubberized mask with eye - pieces. The box filter was compact and could be worn around the neck. The SBR could be readily upgraded as more effective filter technology was developed. The British - designed SBR was also adopted for use by the American Expeditionary Force. The SBR was the prized possession of the ordinary infantryman; when the British were forced to retreat during the German Spring Offensive of 1918, it was found that while some troops had discarded their rifles, hardly any had left behind their respirators. Humans were not the only ones that needed protection from gas clouds. Horses and mules were important methods of transportation that could be endangered if they came into close contact with gas. This was not so much of a problem until it became common to deliver gas over great distances, which led many researchers to develop masks that could be used on animals such as dogs, horses, mules, and even carrier pigeons. For mustard gas, which could cause severe damage by simply making contact with skin, no effective countermeasure was found during the war. The kilt - wearing Scottish regiments were especially vulnerable to mustard gas injuries due to their bare legs. At Nieuwpoort in Flanders some Scottish battalions took to wearing women 's tights beneath the kilt as a form of protection. Gas alert procedure became routine for the front - line soldier. To warn of a gas attack, a bell would be rung, often made from a spent artillery shell.
At the noisy batteries of the siege guns, a compressed air strombus horn was used, which could be heard nine miles (14 km) away. Notices would be posted on all approaches to an affected area, warning people to take precautions. Other British attempts at countermeasures were not so effective. An early plan was to use 100,000 fans to disperse the gas. Burning coal or carborundum dust was tried. A proposal was made to equip front - line sentries with diving helmets, air being pumped to them through a 100 ft (30 m) hose. However, the overall effectiveness of countermeasures is apparent from the casualty statistics. In 1915, when poison gas was relatively new, less than 3 % of British gas casualties died. In 1916, the proportion of fatalities jumped to 17 %. By 1918, the figure was back below 3 %, though the total number of British gas casualties was now nine times the 1915 levels. The first system employed for the mass delivery of gas involved releasing gas from cylinders in a favourable wind such that it was carried over the enemy 's trenches. (The Hague Convention of 1899 had prohibited the use of poison gases delivered by projectiles.) The main advantage of this method was that it was relatively simple and, in suitable atmospheric conditions, produced a concentrated cloud capable of overwhelming the gas mask defences. The disadvantages of cylinder releases were numerous. First and foremost, delivery was at the mercy of the wind. If the wind was fickle, as was the case at Loos, the gas could backfire, causing friendly casualties. Gas clouds gave plenty of warning, allowing the enemy time to protect themselves, though many soldiers found the sight of a creeping gas cloud unnerving. Also, gas clouds had limited penetration, capable only of affecting the front - line trenches before dissipating. Finally, the cylinders had to be emplaced at the very front of the trench system so that the gas was released directly over no man 's land. This meant that the cylinders had to be manhandled through communication trenches, often clogged and sodden, and stored at the front where there was always the risk that cylinders would be prematurely breached during a bombardment. A leaking cylinder could issue a telltale wisp of gas that, if spotted, would be sure to attract shellfire. A British chlorine cylinder, known as an "oojah '', weighed 190 lb (86 kg), of which only 60 lb (27 kg) was chlorine gas, and required two men to carry. Phosgene gas was introduced later in a cylinder, known as a "mouse '', that only weighed 50 lb (23 kg). Delivering gas via artillery shell overcame many of the risks of dealing with gas in cylinders. The Germans, for example, used 5.9 - inch (150 mm) artillery shells ("five - nines ''). Gas shells were independent of the wind and increased the effective range of gas, making anywhere within reach of the guns vulnerable. Gas shells could be delivered without warning, especially the clear, nearly odorless phosgene -- there are numerous accounts of gas shells, landing with a "plop '' rather than exploding, being initially dismissed as dud HE or shrapnel shells, giving the gas time to work before the soldiers were alerted and took precautions. The main flaw associated with delivering gas via artillery was the difficulty of achieving a killing concentration. Each shell had a small gas payload and an area would have to be subjected to a saturation bombardment to produce a cloud to match cylinder delivery.
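The scale of cylinder operations can be cross - checked against the figures quoted earlier in this article: 168 tons of chlorine in 5,730 cylinders at Ypres, versus the roughly 60 lb (27 kg) chlorine fill of a British "oojah ''. A minimal back - of - the - envelope sketch, assuming the Ypres figure refers to metric tons:

    # Consistency check between two figures quoted in this article:
    # 168 tons of chlorine in 5,730 cylinders (Ypres, 22 April 1915), versus
    # the ~60 lb (27 kg) chlorine fill of a British "oojah" cylinder.
    # Assumes metric tons; a long-ton reading shifts the result only slightly.

    total_chlorine_kg = 168 * 1000   # 168 tonnes expressed in kilograms
    cylinders = 5730
    per_cylinder_kg = total_chlorine_kg / cylinders

    print(round(per_cylinder_kg, 1))  # ~29.3 kg of chlorine per cylinder

At about 29 kg of chlorine per cylinder, the implied German fill is close to the 27 kg quoted for the British cylinder, so the two sets of figures are broadly consistent.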
Mustard gas, however, did not need to form a concentrated cloud, and hence artillery was the ideal vehicle for delivery of this battlefield pollutant. The solution to achieving a lethal concentration without releasing from cylinders was the "gas projector '', essentially a large - bore mortar that fired the entire cylinder as a missile. The British Livens projector (invented by Captain W.H. Livens in 1917) was a simple device: an 8 - inch (200 mm) diameter tube sunk into the ground at an angle, in which a propellant charge, ignited by an electrical signal, fired a cylinder containing 30 or 40 lb (14 or 18 kg) of gas up to 1,900 metres. By arranging a battery of these projectors and firing them simultaneously, a dense concentration of gas could be achieved. The Livens was first used at Arras on 4 April 1917. On 31 March 1918 the British conducted their largest ever "gas shoot '', firing 3,728 cylinders at Lens. Over 16,000,000 acres (65,000 km²) of France had to be cordoned off at the end of the war because of unexploded ordnance. About 20 % of the chemical shells were duds, and approximately 13 million of these munitions were left in place. This has been a serious problem in former battle areas from immediately after the end of the war until the present. Shells may be uncovered, for instance, when farmers plough their fields (termed the ' iron harvest '), and are also regularly discovered during public works or construction. An additional difficulty is the current stringency of environmental legislation. In the past, a common method of getting rid of unexploded chemical ammunition was to detonate or dump it at sea; this is currently prohibited in most countries. The problems are especially acute in some northern regions of France. The French government no longer disposes of chemical weapons at sea, and for this reason piles of untreated chemical weapons accumulated. In 2001, it became evident that the pile stored at a depot in Vimy was unsafe; the inhabitants of the neighbouring town were evacuated, and the pile was moved, using refrigerated trucks and under heavy guard, to a military camp in Suippes. The destruction plant intended to deal with these stocks is meant to have a capacity of 25 tons per year (extensible to 80 tons at the beginning), for a lifetime of 30 years. Germany has to deal with unexploded ammunition and polluted land resulting from the explosion of an ammunition train in 1919. Aside from unexploded shells, there have been claims that poison residues have remained in the local environment for an extended period, though this is unconfirmed; well known but unverified anecdotes claim that as late as the 1960s trees in the area retained enough mustard gas residue to injure farmers or construction workers who were clearing them. Soldiers who claimed to have been exposed to chemical warfare have often presented with unusual medical conditions, which has led to much controversy. The lack of information has left doctors, patients, and their families in the dark in terms of prognosis and treatment. Nerve agents such as sarin, tabun, and soman are believed to have the most significant long - term health effects. Chronic fatigue and memory loss have been reported to last up to three years after exposure. In the years following the First World War, many conferences were held in attempts to abolish the use of chemical weapons altogether, such as the Washington Conference (1921 -- 22), the Geneva Conference (1923 -- 25) and the World Disarmament Conference (1933).
Although the United States was an original signatory of the Geneva Protocol in 1925, the US Senate did not formally ratify it until 1975. Although the health effects are generally chronic in nature, the exposures were generally acute. A positive correlation has been found between exposure to mustard agents and skin cancers, other respiratory and skin conditions, leukemia, several eye conditions, bone marrow depression and subsequent immunosuppression, psychological disorders and sexual dysfunction. Chemicals used in the production of chemical weapons have also left residues in the soil where the weapons were used. The chemicals that have been detected can cause cancer and can affect the brain, blood, liver, kidneys and skin. Despite the evidence in support of long - term health effects, there are studies that show just the opposite: some US veterans who were closely affected by chemical weapons showed no neurological evidence of damage in the following years. These same studies nevertheless acknowledged that a single contact with chemical weapons could be enough to cause long - term health effects.
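As a rough check on the area figure quoted above, here is a worked conversion, assuming the standard factor of about 4,047 square metres per acre:

\[
16{,}000{,}000 \ \text{acres} \times 4{,}047 \ \mathrm{m^2/acre} \approx 6.5 \times 10^{10} \ \mathrm{m^2} \approx 65{,}000 \ \mathrm{km^2}
\]

On that basis the parenthetical figure is an area of roughly 65,000 square kilometres, not a distance.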
what kind of dog is chico off friday
Bull Terrier - wikipedia The Bull Terrier is a breed of dog in the terrier family. There is also a miniature version of this breed, officially known as the Miniature Bull Terrier. The Bull Terrier 's most recognizable feature is its head, described as ' egg - shaped ' when viewed from the front; the top of the skull is almost flat. The profile curves gently downwards from the top of the skull to the tip of the nose, which is black and bent downwards at the tip, with well developed nostrils. The under - jaw is deep and strong. The unique triangular eyes are small, dark, and deep - set; Bull Terriers are the only dogs that have triangular eyes. The body is full and round, with strong, muscular shoulders. The tail is carried horizontally. They are either white, red, fawn, black, brindle, or a combination of these. Bull Terriers can be both independent and stubborn, and for this reason are not considered suitable for an inexperienced dog owner. A Bull Terrier has an even temperament and is amenable to discipline. Although obstinate, they are particularly good with people. Early socialisation will ensure that the dog will get along with other dogs and animals. Their personality is described as courageous and full of spirit, with a fun - loving attitude; they are fond of children and make a perfect family member. A 2008 study in Germany found no significant difference in overall temperament between Bull Terriers and Golden Retrievers. All puppies should be checked for deafness, which occurs in 20.4 % of pure white Bull Terriers and 1.3 % of coloured Bull Terriers and is difficult to notice, especially in a relatively young puppy. Many Bull Terriers have a tendency to develop skin allergies. Insect bites, such as those from fleas, and sometimes mosquitoes and mites, can produce a generalised allergic response of hives, rash, and itching. This problem can be stopped by keeping the dog free of contact with these insects, but this is definitely a consideration in climates or circumstances where exposure to these insects is inevitable. A UK breed survey puts their median lifespan at 10 years and their mean at 9 years (relative standard error 13.87 %), with a good number of dogs living to 10 -- 15 years. At the start of the 19th century the "Bull and Terrier '' breeds were developed to satisfy the needs for vermin control and animal - based blood sports. The Bull and Terriers were based on the Old English Bulldog (now extinct) and Old English Terriers, with possible input from other terriers. This new breed combined the speed and dexterity of lightly built terriers with the dour tenacity of the Bulldog, which was a poor performer in most combat situations, having been bred almost exclusively for fighting bulls and bears tied to a post. Many breeders began to breed bulldogs with terriers, arguing that such a mixture enhanced the quality of fighting. Despite the fact that a cross between a bulldog and a terrier was of high value, very little or nothing was done to preserve the breed in its original form. Due to the lack of breed standards -- breeding was for performance, not appearance -- the "Bull and Terrier '' eventually divided into the ancestors of "Bull Terriers '' and "Staffordshire Bull Terriers '', both smaller and easier to handle than the progenitor. In the mid-19th century James Hinks started breeding Bull and Terriers with "English White Terriers '' (now extinct), looking for a cleaner appearance with better legs and a nicer head.
In 1862, Hinks entered a dam called "Puss '', sired by his white Bulldog called "Madman '', into the Bull Terrier Class at the dog show held at the Cremorne Gardens in Chelsea. Originally known as the "Hinks Breed '' and "The White Cavalier '', these dogs did not yet have the now - familiar "egg face '', but kept the stop in the skull profile. The dog was immediately popular and breeding continued, using Dalmatian, Spanish Pointer, and Whippet to increase elegance and agility, and Borzoi and Rough Collie to reduce the stop. Hinks wanted his dogs white, and bred specifically for this. The first modern Bull Terrier is now recognised as "Lord Gladiator '', from 1917, being the first dog with no stop at all. Due to medical problems associated with all - white breeding, Ted Lyon among others began introducing colour, using Staffordshire Bull Terriers, in the early 20th century. Coloured Bull Terriers were recognised as a separate variety (at least by the AKC) in 1936. Brindle is the preferred colour, but other colours are welcome. Along with conformation, specific behaviour traits were sought. The epithet "White Cavalier '', harking back to an age of chivalry, was bestowed on a breed which, while never seeking to start a fight, was well able to finish one, while socialising well with its "pack '', including children and pups. Hinks himself had always aimed at a "gentleman 's companion '' dog rather than a pit - fighter -- though Bullies were often entered in the pits, with some success.
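A short worked note on the lifespan figures quoted above, assuming the usual definition of the relative standard error (RSE) as the standard error of the mean divided by the mean:

\[
\mathrm{RSE} = \frac{\mathrm{SE}}{\bar{x}} \times 100\,\% \quad\Rightarrow\quad \mathrm{SE} \approx 0.1387 \times 9 \ \text{years} \approx 1.2 \ \text{years}
\]

On that reading, the survey 's mean lifespan of 9 years carries a standard error of roughly 1.2 years.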
what is the meaning of px in medical term
List of medical abbreviations: P - wikipedia PrEP pre-exposure prophylaxis
when were catholic churches allowed back in england
Catholic Church in England and Wales - wikipedia The Catholic Church in England and Wales is part of the worldwide Catholic Church in full communion with the Pope. Celtic Christianity, with some traditions different from those of Rome, was present in Roman Britain from the first century AD, but after the departure of the Roman legions it retreated before a resurgence of paganism. In 597 AD, the first authoritative papal mission, establishing a direct link from the Kingdom of Kent to the See of Rome and to the Benedictine form of monasticism, was carried into effect by Augustine of Canterbury. The English Church continuously adhered to the See of Rome for almost a thousand years from the time of Augustine of Canterbury, but in 1534, during the reign of King Henry VIII, the church, through a series of legislative acts between 1533 and 1536, became independent from the Pope for a period as the Church of England, a national church with Henry declaring himself Supreme Head. Under Henry 's son, King Edward VI, the Church of England became more influenced by the European Protestant movement. The English Church was brought back under full papal authority in 1553, at the beginning of the reign of Queen Mary I, and Catholicism was enforced by the Marian persecutions; however, when Queen Elizabeth I came to the throne in 1558, the Church of England 's independence from Rome was reasserted through the settlement of 1559, which shifted the Church of England 's teaching and practice, and in the Act of Uniformity, which caused a rift between Catholics and the Queen. In 1570 Pope Pius V responded, in his bull Regnans in Excelsis, by calling on all Catholics to rebel against Elizabeth and excommunicating anyone who obeyed her. The Parliament of England made the fact of being a Jesuit or seminarian treasonable in 1571. Priests found celebrating Mass were often hanged, drawn and quartered rather than being burned at the stake. The Catholic Church (along with other non-established churches) continued in England, although it was at times subject to various forms of persecution. Most recusants (except those in diaspora on the Continent, in heavily Catholic areas in the north, or part of the aristocracy) practised their faith in private for all practical purposes. In 1766, the Pope recognised the English monarchy as lawful, and this led eventually to the Roman Catholic Relief Act 1829. Dioceses (replacing districts) were re-established by Pope Pius IX in 1850. Along with the 22 Latin Rite dioceses, there are two Eastern Catholic eparchies: the Ukrainian Catholic Eparchy of the Holy Family of London and the Syro - Malabar Catholic Eparchy of Great Britain. At the 2001 United Kingdom census, there were 4.2 million Catholics in England and Wales, some eight per cent of the population. One hundred years earlier, in 1901, they represented only 4.8 per cent of the population. In 1981, 8.7 per cent of the population of England and Wales was Catholic. In 2009, an Ipsos MORI poll found that 9.6 per cent, or 5.2 million persons of all ethnicities, were Catholic in England and Wales. Sizeable populations include North West England, where one in five is Catholic, a result of large - scale Irish immigration in the nineteenth century as well as the high number of English recusants in Lancashire. Much of Great Britain was incorporated into the Roman Empire in 43 AD, when Claudius launched the Roman conquest of Britain, conquering lands inhabited by Celtic Britons.
The indigenous religious traditions of the Britons, under their priests, the Druids, were suppressed; most notably, Gaius Suetonius Paulinus launched an attack on Ynys Môn in 60 AD and destroyed the shrine and sacred groves there. In the years following this, Roman influence saw the importation of several religious cults into Britain, including Roman mythology, Mithraism and the imperial cult. One of these sects, then disapproved of by the Roman authorities, was the Levantine - originated religion of Christianity. While it is unclear exactly how it arrived, the earliest British figures considered saints by the Christians are St. Alban, followed by Ss. Julius and Aaron, living in the 3rd century. Eventually, the position of the Roman authorities on Christianity moved from hostility to toleration, with the Edict of Milan in 313 AD, and then to enforcement as the state religion, following the Edict of Thessalonica in 380 AD; Christianity became a key component of Romano - British culture and society. Records note that Romano - British bishops, such as Restitutus, attended the Council of Arles in 314, which confirmed the theological findings of an earlier convocation held in Rome (the Council of Rome) in 313. The Roman departure from Britain in the following century and the subsequent Germanic invasions sharply decreased contact between Britain and Continental Europe. Christianity, however, continued to flourish in the Brittonic areas of Great Britain. During this period certain practices and traditions took hold in Britain and in Ireland that are collectively known as Celtic Christianity. Distinct features of Celtic Christianity include a unique monastic tonsure and a distinctive calculation of the date of Easter. Regardless of these differences, historians do not consider this Celtic or British Christianity a distinct church separate from general Western European Christianity. In 597, Pope Gregory I sent Augustine of Canterbury and 40 missionaries from Rome to evangelise the Anglo - Saxons, a process completed by the 7th century. The Gregorian mission, as it is known, is of particular interest in the Catholic Church as it was the first official Papal mission to found a church. With the help of Christians already residing in Kent, Augustine established an archbishopric in Canterbury, the old capital of Kent, and, having received the pallium earlier (linking his new diocese to Rome), became the first in the series of Catholic archbishops of Canterbury, four of whom (Laurence, Mellitus, Justus and Honorius) were part of the original band of Benedictine missionaries. (The last Catholic archbishop of Canterbury was Reginald Pole, who died in 1558.) During this time of mission, Rome pursued greater unity with the local church in Britain, particularly on the question of dating Easter. Columbanus, Columba 's fellow countryman and churchman, had asked for a papal judgement on the Easter question, as did abbots and bishops of Ireland. Later, in his Historia ecclesiastica gentis Anglorum, Bede explained the reasons for the discrepancy: "He (Columba) left successors distinguished for great charity, Divine love, and strict attention to the rules of discipline following indeed uncertain cycles in the computation of the great festival of Easter, because far away as they were out of the world, no one had supplied them with the synodal decrees relating to the Paschal observance. '' A series of synods were held to resolve the matter, culminating in the Synod of Whitby in 664.
The missionaries also introduced the Rule of Benedict, the continental rule, to Anglo - Saxon monasteries in England. Wilfrid, a Benedictine consecrated archbishop of York (in 664), was particularly skilled in promoting the Benedictine Rule. Over time, the continental Benedictine rule was engrafted upon the monasteries and parishes of England, drawing them closer to the Continent and Rome. As a result, the pope was often called upon to intervene in quarrels, affirm monarchs, and decide jurisdictions. In 787, for example, Pope Adrian I elevated Lichfield to an archdiocese and appointed Hygeberht its first archbishop. Later, in 808, Pope Leo III helped restore King Eardwulf of Northumbria to his throne; and in 853, Pope Leo IV confirmed and anointed Alfred the Great king, according to the Anglo - Saxon Chronicle. Individual Benedictines played an important role throughout this period. For example, before the Benedictine monk St. Dunstan was consecrated archbishop of Canterbury in 960, Pope John XII had him appointed legate, commissioning him (along with Ethelwold and Oswald) to restore discipline in the existing monasteries of England, many of which had been destroyed by Danish invaders. Control of the English Church passed from the Anglo - Saxons to the Normans following the Norman conquest of England. The two clerics most prominently associated with this process were the continental - born Lanfranc and Anselm, both Benedictines; Anselm later became a Doctor of the Church. A century later, Pope Innocent III had to confirm the primacy of Canterbury over four Welsh churches, for many reasons but primarily to sustain the importance of the Gregorian foundation of Augustine 's mission. During mediaeval times, England and Wales were part of western Christendom. During this period, monasteries and convents, such as those at Shaftesbury and Shrewsbury, were prominent features of society, providing lodging, hospitals and education. Likewise, schools like Oxford University and Cambridge University were important. Members of religious orders, notably the Dominicans and Franciscans, settled in both schools and maintained houses for students. Clerics like Archbishop Walter de Merton founded Merton College at Oxford, and three different popes -- Gregory IX, Nicholas IV, and John XXII -- gave Cambridge the legal protection and status to compete with other European medieval universities. Pilgrimage was a prominent feature of mediaeval Catholicism, and England and Wales were amply provided with many popular sites of pilgrimage. The village of Walsingham, Norfolk, became an important shrine after a noblewoman called Richeldis de Faverches experienced a vision of the Virgin Mary in 1061, asking her to build a replica of the Holy House at Nazareth. In 1170, Thomas Becket, Archbishop of Canterbury, was murdered in Canterbury Cathedral by followers of King Henry II and was quickly canonised as a martyr for the faith. This resulted in Canterbury becoming a major place of pilgrimage and inspired the Canterbury Tales by Geoffrey Chaucer. There were also shrines at Holywell in Wales, which commemorated St Winefride, and at Westminster Abbey to Edward the Confessor, to name but a few. An Englishman, Nicholas Breakspear, became Pope Adrian IV, ruling from 1154 to 1159. Fifty - six years later, Cardinal Stephen Langton, the first of English cardinals and later Archbishop of Canterbury (1208 -- 28), became a pivotal figure in the dispute between King John and Pope Innocent III.
This critical situation led to the signing and later promulgation of Magna Carta in 1215. England remained a Catholic country until 1534, when it first officially separated from Rome during the reign of King Henry VIII. In response to the Pope 's refusal to annul Henry 's marriage to Catherine of Aragon, Parliament denied the Pope 's authority over the English Church, made the king Head of the Church in England, and dissolved the monasteries and religious orders in England. Henry did not himself accept Protestant innovations in doctrine or liturgy -- but he extended toleration, and even promotion, to clergy with Protestant sympathies in return for support for his break with Rome. On the other hand, failure to accept this break, particularly by prominent persons in church and state, was regarded by Henry as treason, resulting in the execution of Thomas More, former Lord Chancellor, and John Fisher, Bishop of Rochester, among others. The See of Rome Act 1536 enforced the separation from Rome, while the ' Pilgrimage of Grace ' of 1536 and ' Bigod 's Rebellion ' of 1537, risings in the North against the religious changes, were bloodily repressed. In 1536 - 41 Henry VIII engaged in a large - scale Dissolution of the Monasteries, which controlled most of the wealth of the church, and much of the richest land. He disbanded monasteries, priories, convents and friaries in England, Wales and Ireland, appropriated their income, disposed of their assets, and provided pensions for the former residents. He did not turn these properties over to a Protestant church of England (which indeed did not yet exist): they were sold, mostly to pay for the wars. Nevertheless, Henry maintained a strong preference for traditional Catholic practices and, during his reign, Protestant reformers were unable to make many changes to the practices of the Church of England. Indeed, this part of Henry 's reign saw the trial for heresy of Protestants as well as Catholics. The 1547 to 1553 reign of the boy King Edward VI saw the Church of England become more influenced by Protestantism in its faith and worship, with the (Latin) Mass replaced by the (English) Book of Common Prayer, representational art and statues in church buildings destroyed, and Catholic practices which had survived during Henry 's reign, for instance the public saying of prayers to the Virgin Mary such as the Salve Regina, ended. The Western Rising took place in 1549. The institutional Church in England returned to Catholic practice during the reign of the Catholic Queen Mary I from 1553 to 1558. Mary was determined to bring back the whole of England to the Catholic faith. This aim was not necessarily at odds with the feeling of a large section of the populace; Edward 's Protestant reformation had not been well received everywhere, and there was ambiguity in the responses of the parishes. Mary also had some powerful families behind her. The Jerningham family, together with other East Anglian Catholic families such as the Bedingfelds, Waldegraves and Rochesters, and with the Huddlestons of Sawston Hall, were "the key to Queen Mary 's successful accession to the throne. Without them she would never have made it. '' However, Mary 's executions of 300 Protestants by burning at the stake proved counterproductive, as they were extremely unpopular among the populace. For example, instead of executing Archbishop Cranmer for treason for supporting Queen Jane, she had him tried for heresy and burned at the stake.
With the assistance of Foxe 's Book of Martyrs, which glorified the Protestants killed at the time and vilified Catholics, this practice ensured her a place in popular memory as Bloody Mary -- for centuries afterwards the idea of another reconciliation with Rome was linked in many English people 's minds with a renewal of Mary 's fiery stakes. When Mary died and Elizabeth I became queen in 1558, the religious situation in England was confused. Throughout the see - sawing religious landscape of the reigns of Henry VIII, Edward VI, and Mary I, a significant proportion of the population (especially in the rural and outlying areas of the country) is likely to have continued to hold Catholic views, at least in private. By the end of Elizabeth I 's reign, however, England was clearly a Protestant country, and Catholics were a minority. Elizabeth 's first act was to reverse her sister 's re-establishment of Catholicism by the Acts of Supremacy and Uniformity. The Act of Supremacy of 1558 made it a crime to assert the authority of any foreign prince, prelate, or other authority, and was aimed at abolishing the authority of the Pope in England; a third offence was high treason, punishable by death. The Oath of Supremacy, imposed by the Act of Supremacy 1558, provided for any person taking public or church office in England to swear allegiance to the monarch as Supreme Governor of the Church of England. Failure to so swear was a crime, although it did not become treason until 1562, when the Supremacy of the Crown Act 1562 made a second offence of refusing to take the oath treason. However, during the first years of her reign there was relative leniency towards Catholics who were willing to keep their religion private, especially if they were prepared to continue to attend their parish churches. The wording of the official prayer book had been carefully designed to make this possible by omitting aggressively "heretical '' matter, and at first many English Catholics did in fact worship with their Protestant neighbours, at least until this was formally forbidden by Pope Pius V 's 1570 bull, Regnans in Excelsis, which also declared that Elizabeth was not a rightful queen and should be deposed, formally excommunicated her and any who obeyed her, and obliged all Catholics to attempt to overthrow her. In response, the "Act to retain the Queen 's Majesty 's subjects in their obedience '', passed in 1581, made it high treason to reconcile anyone or to be reconciled to "the Romish religion '', or to procure or publish any papal Bull or writing whatsoever. The celebration of Mass was prohibited under penalty of a fine of two hundred marks and imprisonment for one year for the celebrant, and a fine of one hundred marks and the same imprisonment for those who heard the Mass. This act also increased the penalty for not attending the Anglican service to the sum of twenty pounds a month, or imprisonment till the fine be paid, or till the offender went to the Protestant church. A further penalty of ten pounds a month was inflicted on anyone keeping a schoolmaster who did not attend the Protestant service. The schoolmaster himself was to be imprisoned for one year.
In the setting of England 's wars with Catholic powers such as France and Spain, culminating in the attempted invasion by the Spanish Armada in 1588, the Papal bull unleashed a nationalistic feeling which equated Protestantism with loyalty to a highly popular monarch, rendering every Catholic a potential traitor, even in the eyes of those who were not themselves extreme Protestants. The Rising of the North, the Throckmorton plot and the Babington plot, together with other subversive activities of supporters of Mary, Queen of Scots, all reinforced the association of Catholicism and treachery in the popular mind. The climax of Elizabeth 's persecution of Catholics was reached in 1585 with the "Act against Jesuits, Seminary priests and other such like disobedient persons ''. This statute, under which most of the English Catholic martyrs were executed, made it high treason for any Jesuit or any seminary priest to be in England at all, and felony for anyone to harbour or relieve them. The last of Elizabeth 's anti-Catholic laws was the "Act for the better discovery of wicked and seditious persons terming themselves Catholics, but being rebellious and traitorous subjects ''. Its effect was to prohibit all recusants from moving more than five miles from their place of abode, and to order all persons suspected of being Jesuits or seminary priests, and not answering satisfactorily, to be imprisoned till they did so. However, Elizabeth did not believe that her anti-Catholic policies constituted religious persecution, finding it hard, in the context of the uncompromising wording of the Papal Bull against her, to distinguish those Catholics engaged in conflict with her from those Catholics with no such designs. The number of English Catholics executed under Elizabeth was significant, including Edmund Campion, Robert Southwell, and Margaret Clitherow. Elizabeth herself signed the death warrant that led to regicide: the beheading of her cousin, Mary, Queen of Scots. Because of the persecution, Catholic priests for England were trained abroad at the English College in Rome, the English College in Douai, the English College at Valladolid in Spain, and the English College in Seville. Given that Douai was located in the Spanish Netherlands, part of the dominions of Elizabethan England 's greatest enemy, and Valladolid and Seville in Spain itself, they became associated in the public eye with political as well as religious subversion. It was this combination of nationalistic public opinion, sustained persecution, and the rise of a new generation which could not remember pre-Reformation times and had no pre-established loyalty to Catholicism, that reduced the number of Catholics in England during this period -- although the overshadowing memory of Queen Mary I 's reign was another factor that should not be underestimated. Nevertheless, by the end of the reign probably 20 % of the population were still Catholic, with another 10 % dissident "Puritan '' Protestants, and the remainder more or less reconciled to the "official '' church. The reign of James I (1603 -- 1625) was marked by a measure of tolerance, though less so after the discovery of the Gunpowder Plot, a conspiracy by a small group of Catholics who aimed to kill both King and Parliament and establish a Catholic monarchy.
A mix of persecution and tolerance followed: Ben Jonson and his wife, for example, were summoned before the authorities in 1606 for failure to take communion in the Church of England, yet the King tolerated some Catholics at court, for example George Calvert, to whom he gave the title Baron Baltimore, and the Duke of Norfolk, head of the Howard family. The reign of Charles I (1625 -- 49) saw a small revival of Catholicism in England, especially among the upper classes. As part of the royal marriage settlement, Charles 's Catholic wife, Henrietta Maria, was permitted her own royal chapel and chaplain. Henrietta Maria was in fact very strict in her religious observances, and helped create a court with continental influences, where Catholicism was tolerated, even somewhat fashionable. Some anti-Catholic legislation became effectively a dead letter. The Counter-Reformation on the Continent of Europe had created a more vigorous and magnificent form of Catholicism (i.e., the Baroque, notably found in the architecture and music of Austria, Italy and Germany) that attracted some converts, like the poet Richard Crashaw. Ironically, this explicitly Catholic artistic movement ended up "providing the blueprint, after the fire of London, for the first new Protestant churches to be built in England. '' While Charles remained firmly Protestant, he was personally drawn towards a consciously ' High Church ' Anglicanism. This affected his appointments to Anglican bishoprics, in particular the appointment of William Laud as Archbishop of Canterbury. How many Catholics and Puritans there were is still open to debate. Religious conflict between Charles and other ' High ' Anglicans on the one hand and the Calvinists (the Puritans, at this stage mostly still within the Church of England) on the other formed a strand of the anti-monarchical leanings of the troubled politics of the period. The religious tensions between a court with ' Papist ' elements and a Parliament where the Puritans were strong were among the major factors behind the English Civil War, in which almost all Catholics supported the King. The victory of the Parliamentarians meant a strongly Protestant, anti-Catholic (and, incidentally, anti-Anglican) regime under Oliver Cromwell. The restoration of the monarchy under Charles II (1660 -- 85) also saw the restoration of a Catholic - influenced court like his father 's. However, although Charles himself had Catholic leanings, he was first and foremost a pragmatist and realised that the vast majority of public opinion in England was strongly anti-Catholic, so he agreed to laws such as the Test Act, requiring any appointee to public office or member of Parliament to deny Catholic beliefs such as transubstantiation. As far as possible, however, he maintained tacit tolerance. Like his father, he married a Catholic, Catherine of Braganza. (He would become Catholic himself on his deathbed.) Charles ' brother and heir James, Duke of York (later James II), converted to Catholicism in 1668 -- 1669. When Titus Oates in 1678 alleged a (totally imaginary) ' Popish Plot ' to assassinate Charles and put James in his place, he unleashed a wave of Parliamentary and public hysteria which led to anti-Catholic purges and another wave of sectarian persecution, which Charles was either unable or unwilling to prevent. Throughout the early 1680s the Whig element in Parliament attempted to remove James as successor to the throne. Their failure saw James become, in 1685, Britain 's first openly Catholic monarch since Mary I (and the last to date).
He promised religious toleration for Catholics and Protestants on an equal footing, but it is in doubt whether he did this to gain support from Dissenters or whether he was truly committed to tolerance (contemporary Catholic regimes in Spain and Italy, for example, were hardly tolerant of Protestantism, while those in France and Poland had practised forms of toleration). James ' clear intent to work towards the restoration of the Church of England to the Catholic fold encouraged converts like the poet John Dryden, who wrote "The Hind and the Panther '' celebrating his conversion. Protestant fears mounted as James placed Catholics in the major commands of the existing standing army, dismissed the Protestant Bishop of London, and dismissed the Protestant Fellows of Magdalen College, replacing them with a wholly Catholic board. The last straw was the birth of a Catholic heir in 1688, portending a return to a pre-Reformation Catholic dynasty. In what came to be known as the Glorious Revolution, Parliament deemed James to have abdicated (effectively deposing him, though Parliament refused to call it that) in favour of his Protestant daughter Mary II and her husband (and his nephew) William III. Although this affair is celebrated as solidifying both English liberties and the Protestant nature of the kingdom, some argue that it was "fundamentally a coup spearheaded by a foreign army and navy. '' James fled into exile, and with him many of the Catholic nobility and gentry. The Act of Settlement 1701, which remains in operation today, established the royal line through Sophia, Electress of Hanover, and specifically excludes any Catholic, or anyone who marries a Catholic, from the throne. This Act was partially changed by the Succession to the Crown Act 2013, which eliminated the ban on the monarch 's marrying a Catholic (along with the rule of male - preference succession). Henry Benedict Stuart (Cardinal - Duke of York), the last Jacobite heir to publicly assert a claim to the thrones of England, Scotland, and Ireland, died in Rome in 1807. A monument to the Royal Stuarts exists today at Vatican City. Franz, Duke of Bavaria, head of the Wittelsbach family, is the most senior descendant of King Charles I and is considered by Jacobites to be the heir of the Stuarts. The years from 1688 to the early 19th century were in some respects the nadir for Catholicism in England. With the dioceses gone, four Apostolic Vicariates were set up throughout England until the re-establishment of the diocesan episcopacy in 1850. Although the persecution was not violent as in the past, Catholic numbers, influence and visibility in English society reached their lowest ebb. Their civil rights were severely curtailed: their right to own property or inherit land was greatly limited, they were burdened with special taxes, they could not send their children abroad for Catholic education, they could not vote, and priests were liable to imprisonment. There was no longer, as once in Stuart times, any notable Catholic presence at court, in public life, in the military or the professions. Many of the Catholic nobles and gentry who had preserved small pockets of Catholicism on their lands among their tenants had followed James into exile, and others, at least outwardly, conformed to Anglicanism, meaning fewer such Catholic communities survived intact. A bishop at this time (roughly from 1688 to 1850) was called a vicar apostolic.
A vicar apostolic was a titular bishop (as opposed to a diocesan bishop) through whom the pope exercised jurisdiction over a particular church territory in England. English - speaking colonial America came under the jurisdiction of the Vicar Apostolic of the London District. As titular bishop over Catholics in British America, he was important to the government not only in regard to its English - speaking North American colonies, but also after the Seven Years ' War, when the British Empire, in 1763, acquired the French - speaking (and predominantly Catholic) territory of Canada. Only after the Treaty of Paris in 1783, and with the consecration in 1789 of John Carroll, a friend of Benjamin Franklin, did the U.S. have its own diocesan bishop, free of the Vicar Apostolic of the London District, James Robert Talbot. Most Catholics retreated into complete isolation from the Protestant mainstream, and Catholicism in England in this period is politically, if not socially, almost invisible to history. Two memorable English Catholics of the 18th century were Alexander Pope and the Duke of Norfolk, the Premier Duke in the peerage of England and, as Earl of Arundel, the Premier Earl. In virtue of his status and as head of the Howard family (which included the Earl of Carlisle, the Earl of Suffolk, the Earl of Berkshire, and the Earl of Effingham), the Duke was always at court. Pope, however, seemed to benefit from the isolation. In 1713, when he was 25, he took subscriptions for a project that filled his life for the next seven years, the result being a new version of Homer 's Iliad. Samuel Johnson pronounced it the greatest translation ever achieved in the English language. Over time, Pope became the greatest poet of the age, the Augustan Age, especially for his mock - heroic poems, The Rape of the Lock and The Dunciad. Around this time, in 1720, Clement XI proclaimed Anselm of Canterbury a Doctor of the Church. In 1752, mid-century, Great Britain adopted the Gregorian calendar decreed by Pope Gregory XIII in 1582. Later in the century there was some liberalisation of the anti-Catholic laws on the basis of Enlightenment ideals. In 1778 a Catholic Relief Act allowed Catholics to own property, inherit land and join the army. Hardline Protestant mobs reacted in the Gordon Riots of 1780, attacking any building in London which was associated with Catholicism or owned by Catholics. Other reforms allowed the clergy to operate more openly and thus allowed permanent missions to be set up in the larger towns. Stonyhurst College, for example, was re-established in 1791 for wealthier Catholics. In 1837, James Arundel, the tenth Baron Arundel of Wardour, bequeathed to Stonyhurst the Arundel Library, which contained the vast Arundel family collection, including some of the school 's most important books and manuscripts, such as a Shakespeare First Folio and a manuscript copy of Froissart 's Chronicles looted from the body of a dead Frenchman after the Battle of Agincourt. Yet Catholic recusants as a whole remained a small group, except where they remained the majority religion in various pockets, notably in rural Lancashire and Cumbria, or were part of the Catholic aristocracy and squirearchy. One of the most interesting contemporary descendants of recusants is Timothy Radcliffe, former Master of the Order of Preachers (Dominicans) and writer.
Radcliffe is related to three former cardinals -- Weld, Vaughan and Hume (the last because his cousin Lord Hunt is married to Hume 's sister) -- and his family is connected to many of the great recusant English Catholic families: the Arundels, Tichbournes, Talbots, Stonors, and Weld - Blundells. Finally, history cannot forget the famous recusant Maria Fitzherbert, who in 1785 secretly married the Prince of Wales, the future Prince Regent and George IV. The British constitution, however, did not recognise the marriage, and George IV later moved on. Cast aside by the establishment, she was adopted by the town of Brighton, whose citizens, both Catholic and Protestant, called her "Mrs. Prince ''. According to the journalist Richard Abbott, "Before the town had a (Catholic) church of its own, she had a priest say Mass at her own house, and invited local Catholics '', suggesting the recusants of Brighton were not so hidden after all. In a new study of the English Catholic community, 1688 -- 1745, Gabriel Glickman notes that Catholics, especially those whose social position gave them access to the courtly centres of power and patronage, had a significant part to play in 18th - century England. They were not as marginal as one might think today. For example, Alexander Pope was not the only Catholic whose contributions (especially An Essay on Man) helped define the temper of an early English Enlightenment. In addition to Pope, Glickman notes, a Catholic architect, James Gibbs, returned baroque forms to the London skyline, and a Catholic composer, Thomas Arne, composed "Rule, Britannia! '' According to the reviewer Aidan Bellenger, Glickman also suggests that "rather than being the victims of the Stuart failure, ' the unpromising setting of exile and defeat ' had ' sown the seed of a frail but resilient English Catholic Enlightenment. ' '' Yale University historian Steve Pincus likewise argues in his book, 1688: The First Modern Revolution, that Catholics under William and Mary and their successors experienced considerable freedom. After this moribund period, the first signs of a revival occurred as thousands of French Catholics fled France during the French Revolution. The leaders of the Revolution were virulently anti-Catholic, even singling out priests and nuns for summary execution or massacre, and England was seen as a safe haven from Jacobin violence. Also around this time (1801), a new political entity was formed, the United Kingdom of Great Britain and Ireland, which merged the Kingdom of Great Britain with the Kingdom of Ireland, thus increasing the number of Catholics in the new state. Pressure for abolition of the anti-Catholic laws grew, particularly with the need for Catholic recruits to fight in the Napoleonic Wars. Despite the strong opposition of King George III, which delayed reform, 1829 brought the culmination of the liberalisation of the anti-Catholic laws. Parliament passed the Roman Catholic Relief Act 1829, giving Catholics almost equal civil rights, including the right to vote and to hold most public offices. If Catholics were rich, however, exceptions had always been made, even before the changes. For example, American ministers to the Court of St. James 's were often struck by the prominence of wealthy American - born Catholics, titled ladies among the nobility, like Louisa (Caton), granddaughter of Charles Carroll of Carrollton, and her two sisters, Mary Ann and Elizabeth.
After Louisa 's first husband (Sir Felton Bathurst - Hervey) died, she married the son of the Duke of Leeds, and had the Duke of Wellington as her European protector. Her sister Mary Ann married the Marquess of Wellesley, the brother of the Duke of Wellington; and her other sister, Elizabeth (Lady Stafford), married another British nobleman. Though British law required an Anglican marriage service, each of the sisters and their Protestant spouses had a Catholic ceremony afterwards. At Louisa 's first marriage, the Duke of Wellington escorted the bride. In the 1840s and 1850s, especially during the Great Irish Famine, while much of the large outflow of emigration from Ireland was headed to the United States to seek work, hundreds of thousands of Irish people also migrated across the channel to England and Scotland, establishing communities in cities there, including London, Liverpool, Manchester and Glasgow, but also in towns and villages up and down the country, thus giving English Catholicism a numerical boost. Also significant was the rise in the 1830s and 1840s of the Oxford Movement, which claimed Catholic validity for Anglican orders and sought to revive some elements of Catholic theology and ritual within the Church of England (creating Anglo - Catholicism). A proportion of the Anglicans who were involved in the Oxford Movement or "Tractarianism '' were ultimately led beyond these positions and converted to the Catholic Church, including, in 1845, the movement 's principal intellectual leader, John Henry Newman. More new Catholics would come from the Anglican Church, often via high Anglicanism, for at least the next hundred years, and something of this continues. Prominent intellectual and artistic figures who turned to Catholicism in the 19th and 20th centuries included the leading architect of the Gothic Revival, Augustus Pugin; the artist Graham Sutherland; and literary figures such as Newman, Gerard Manley Hopkins, G.K. Chesterton, Ronald Knox, Siegfried Sassoon, Evelyn Waugh, Edith Sitwell, Graham Greene and Muriel Spark. Prominent cradle Catholics included the film director Alfred Hitchcock; writers such as Hilaire Belloc, Lord Acton, and J.R.R. Tolkien; and the composer Edward Elgar, whose oratorio The Dream of Gerontius was based on a 19th - century poem by Newman. At various points after the 16th century, hopes were entertained by many English Catholics that the "reconversion of England '' was near at hand. Additionally, with the arrival of immigrant masses of Irish Catholics, some considered that a "second spring '' of Catholicism across Britain was developing. Rome responded by re-establishing the Catholic hierarchy in 1850, creating 12 Catholic dioceses in England from existing apostolic vicariates and appointing diocesan bishops (to replace earlier titular bishops) with fixed sees on a more traditional Catholic pattern. The Catholic Church in England and Wales had 22 dioceses immediately before the Reformation, but none of the current 22 bears close geographical resemblance to those earlier pre-Reformation dioceses.
The re-established diocesan episcopacy specifically avoided using places that were seats of Church of England dioceses, in effect temporarily abandoning the titles of Catholic dioceses from before Elizabeth I. This was because of the Ecclesiastical Titles Act of 1851, which in England favoured the state church (i.e., the Church of England) and denied arms and legal existence to territorial Catholic sees, on the basis that the state could not grant such "privileges '' to "entities '' that allegedly did not exist. Some of the Catholic dioceses, however, took the titles of bishoprics which had previously existed in England but were no longer used by the Anglican Church (e.g. Beverley, later divided into Leeds and Middlesbrough, and Hexham, later changed to Hexham and Newcastle). In the few cases where a Catholic diocese bears the same title as an Anglican one in the same town or city (e.g. Birmingham, Liverpool, Portsmouth, and Southwark), this is the result of the Church of England ignoring the prior existence there of a Catholic see, and of the technical repeal of the Ecclesiastical Titles Act in 1871. The Act, moreover, applied only in England: the official recognition afforded by the grant of arms to the archdiocese of St. Andrews and Edinburgh, brought into being by Lord Lyon in 1989, was made on the grounds that the Ecclesiastical Titles Act of 1851 had never applied to Scotland. In recent times the former Conservative Cabinet minister John Gummer, a prominent convert to Catholicism and columnist for the Catholic Herald, objected in 2007 to the fact that no Catholic diocese can have the same name as an Anglican diocese (such as London, Canterbury, Durham, etc.) "even though those dioceses had, shall we say, been borrowed. '' English Catholicism continued to grow throughout the first two thirds of the 20th century, when it was associated primarily with elements in the English intellectual class and the ethnic Irish population. Numbers attending Mass remained very high, in stark contrast with the Anglican church (although not with other Protestant churches). Clergy numbers, which began the 20th century at under 3,000, reached a high of 7,500 in 1971. By the latter years of the twentieth century, however, low numbers of vocations affected the church: ordinations to the priesthood dropped from the hundreds into the teens in 2006 -- 2011 (16 in 2009, for example), recovering into the twenties thereafter, with 24 predicted for 2018. As in other English - speaking countries such as the United States and Australia, the movement of Irish Catholics out of the working class into the middle - class suburban mainstream often meant their assimilation with broader, secular English society and the loss of a separate Catholic identity. The Second Vatican Council was followed, as in other Western countries, by divisions between traditional Catholicism and a more liberal form of Catholicism claiming inspiration from the Council.
This caused difficulties for not a few pre-conciliar converts, though others have still joined the Church in recent decades (for instance, Malcolm Muggeridge and Joseph Pearce). Public figures (often descendants of the recusant families) such as Paul Johnson; Peter Ackroyd; Antonia Fraser; Mark Thompson, Director General of the BBC; Michael Martin, the first Catholic to hold the office of Speaker of the House of Commons since the Reformation; Chris Patten, the first Catholic to hold the post of Chancellor of Oxford since the Reformation; Piers Paul Read; Helen Liddell, Britain 's High Commissioner to Australia; and the former Prime Minister 's wife, Cherie Blair, have no difficulty making their Catholicism known in public life. The former Prime Minister Tony Blair was received into full communion with the Catholic Church in 2007. Catherine Pepinster, editor of The Tablet, notes: "The impact of Irish immigrants is one. There are numerous prominent campaigners, academics, entertainers (like Danny Boyle, the most successful Catholic in showbiz owing to his film Slumdog Millionaire), politicians and writers. But the descendants of the recusant families are still a force in the land. '' Since the Council the Church in England has tended to focus on ecumenical dialogue with the Anglican Church rather than on winning converts from it as in the past. However, the 1990s saw a number of conversions from Anglicanism to the Catholic Church, largely prompted by the Church of England 's decision to ordain women as priests (among other moves away from traditional doctrines and structures). The resultant converts included members of the Royal Family (Katharine, Duchess of Kent, her son Lord Nicholas Windsor and her grandson Baron Downpatrick) and a number of Anglican priests. Converts to Catholicism in Britain, for this reason, tend to be more conservative and even traditionalist than Catholics on the European mainland, often opposing trends within the Catholic Church similar to those which induced them to abandon Anglicanism in the first place. The spirit of ecumenism fostered by Vatican II resulted in 1990 in the Catholic Church in England, Wales, Scotland and Ireland joining Churches Together in Britain and Ireland as an expression of the churches ' commitment to work ecumenically. Recently, for example, a memorial was put up to St John Houghton and his fellow Carthusian monks martyred at the London Charterhouse in 1535; the Anglican priest Geoffrey Curtis campaigned for it with the current Archbishop of Canterbury 's blessing. Also, in another ecumenical gesture, a plaque in Holywell Street, Oxford, now commemorates the Catholic martyrs of England. It reads: "Near this spot George Nichols, Richard Yaxley, Thomas Belson, and Humphrey Pritchard were executed for their Catholic faith, 5 July 1589. '' And at Lambeth Palace, in February 2009, the Archbishop of Canterbury hosted a reception to launch a book, Why Go To Church?, by Fr Timothy Radcliffe OP, one of Britain 's best known religious and the former Master of the Dominican Order. A large number of young Dominican friars attended. Fr Radcliffe said, "I do n't think there have been so many Dominicans in one place since the time of Robert Kilwardby, the Dominican Archbishop of Canterbury in the 13th century. '' The Church 's principles of social justice influenced initiatives to tackle the challenges of poverty and social exclusion.
In Southampton, Fr Pat Murphy O'Connor founded the St Dismas Society as an agency to meet the needs of ex-prisoners discharged from Winchester prison. Some of the St Dismas Society 's early members went on to help found the Simon Community, first in Sussex and then in London. Their example gave new inspiration to other clergy, such as the Revd Kenneth Leech (CofE) of St Anne 's Church, Soho, who helped found the homeless charity Centrepoint, and the Revd Bruce Kenrick (Church of Scotland), who helped found the homeless charity Shelter. In 1986 Cardinal Basil Hume established the Cardinal Hume Centre to work with homeless young people, badly housed families and local communities to access accommodation, support and advice, education, training and employment opportunities. In 2006 Cardinal Cormac Murphy - O'Connor instituted an annual Mass in Support of Migrant Workers at Westminster Cathedral, in partnership with the ethnic chaplains of Brentwood, Southwark and Westminster. The Catholic Church in England and Wales has five provinces: Birmingham, Cardiff, Liverpool, Southwark and Westminster. There are 22 dioceses, which are divided into parishes (for comparison, the Church of England and the Church in Wales currently have a total of 50 dioceses). In addition to these, there are four jurisdictions covering England and Wales for specific groups: the Bishopric of the Forces, the Eparchy for Ukrainians, the Syro - Malabar Catholic Eparchy of Great Britain and the Personal Ordinariate for former Anglicans. The Catholic bishops in England and Wales come together in a collaborative structure known as the Bishops ' Conference. Currently the Archbishop of Westminster, Vincent Gerard Nichols, is the President of the Bishops ' Conference. For this reason he is regarded in the global Catholic Church (outside England) as de facto Primate of England, though not in the eyes of English law or the established Church of England. Historically, the avoidance of the title of "Primate '' was intended to eschew whipping up anti-Catholic tension, in the same way that the bishops of the restored hierarchy avoided using the current titles of Anglican sees (Archbishop of Westminster rather than "Canterbury '' or "London ''). However, the Archbishop of Westminster had certain privileges: he was the only metropolitan in the country until 1911 (when the archdioceses of Birmingham and Liverpool were created) and he has always acted as leader at meetings of the English bishops. Although the bishops of the restored hierarchy took new titles, such as that of Westminster, they saw themselves very much in continuity with the pre-Reformation Church. Westminster in particular saw itself as the continuation of Canterbury, hence the similarity of the coats of arms of the two sees (with Westminster believing it has more right to it, since it features the pallium, no longer given to Anglican archbishops). At the back of Westminster Cathedral is a list of Popes and, alongside this, a list of Catholic Archbishops of Canterbury, beginning with Augustine of Canterbury and the year they received the pallium. After Cardinal Pole, the last Catholic incumbent of Canterbury, the names of the Catholic vicars apostolic or titular bishops (from 1685) are recorded, and then the Archbishops of Westminster, in one unimpaired line, from 597 to the present, according to the Archdiocese of Westminster.
To highlight this continuity or unimpaired line today, the installation rites of pre-Reformation Catholic Archbishops of Canterbury and earlier Archbishops of Westminster were used at the installation of the current Cardinal Archbishop of Westminster, Vincent Gerard Nichols. He became the forty - third English cardinal since the 12th century. In October 2009, following closed - circuit talks between some Anglicans and the Holy See, Pope Benedict made a relatively unconditional offer to accommodate disaffected Anglicans in the Church of England, enabling them, for the first time, to retain parts of their liturgy and heritage under Anglicanorum coetibus, while being in full communion with Rome. By April 2012 the ordinariate numbered about 1200, including five bishops and 60 priests. The ordinariate has recruited a group of aristocrats as honorary vice-presidents to help out. These include the Duke of Norfolk, the Countess of Oxford and Asquith and the Duchess of Somerset. Other vice-presidents include Lord Nicholas Windsor, Sir Josslyn Gore - Booth and the Squire de Lisle, whose ancestor Ambrose de Lisle was a 19th - century Catholic convert who advocated the corporate reunion of the Anglican Church with Rome. According to the group leader, Mgr Keith Newton, the ordinariate will "work on something with an Anglican flavour, but they are not bringing over any set of Anglican liturgy. '' The director of music at Westminster Abbey (Anglican), lay Catholic James O'Donnell, likens the ordinariate to a Uniate church or one of the many non-Latin Catholic rites, saying: "This is a good opportunity for us to remember that there is n't a one size fits all, and that this could be a good moment to adopt the famous civil service philosophy - ' celebrating diversity '. '' In May 2013, a former Anglican priest, Alan Hopes, was appointed the new Bishop of East Anglia, whose diocese includes the Shrine of Our Lady of Walsingham. The Apostolic Exarchate for Ukrainians serves the 15,000 Ukrainian Greek Catholics in Great Britain, with a cathedral and various churches across the country. The Lebanese Maronite Order (LMO) operates in England and Wales. The LMO is an order of the Maronite Catholic Church, serving Maronite Catholics in England and Wales. The Revd Augustine Aoun is the parish priest for Maronites. The LMO runs a few churches, for example Our Lady of Sorrows in Paddington and Our Lady of Lebanon in Swiss Cottage. There are also Catholic chaplains of the Eritrean, Chaldean, Syriac, Syro - Malabar, Syro - Malankara, and Melkite Rites. For information about the Syro - Malabar chaplaincy within the Diocese of Westminster in London, see Syro - Malabar Catholic Church of London. Mass in the Syro - Malabar rite is celebrated each Sunday in St Joseph 's Catholic Church, New Zealand Road, Cardiff. Migration from Ireland in the 19th and 20th centuries and more recent Eastern European migration have significantly increased the numbers of Catholics in England and Wales. While figures for England and Wales alone are difficult to estimate, estimates in 2008 broke the ethnic make - up of the Catholic population in the UK (which includes Northern Irish as British) into several groups; the White Eastern European members are mainly from Poland, with smaller numbers from Lithuania, Latvia, and Slovakia. Polish - speaking Catholics first arrived in England in some numbers after the partitions of Poland during the 19th century.
One of the most notable Poles at this time, who eventually settled in England, was Joseph Conrad. At the end of the Second World War, many Polish servicemen were unable to return to their homeland following the imposition of a communist regime hostile to their return, and the Polish Resettlement Corps was formed by the British government to ease their transition into British life. They were joined by several thousand Displaced Persons (DPs), many of whom were their family members. This influx of Poles gave rise to the 1947 Polish Resettlement Act, which allowed approximately 250,000 Polish servicemen and their dependents to settle in Britain. Many assimilated into existing Catholic congregations. According to the Polish Catholic Mission in England and Wales, in 1948 the Catholic hierarchy in England and Wales agreed to the appointment of a vicar delegate, nominated by the Polish Episcopate, with ordinary power over the Polish clergy and laity throughout England and Wales, with certain exceptions relating to marriage. Subsequently, whenever a Polish Catholic community emerges within England and Wales, the vicar delegate appoints a Polish priest to organise a local branch of the Polish Catholic Mission. A priest thus appointed is the priest in charge, not a parish priest. There are no Polish parishes or quasi-parishes in England and Wales (in accordance with Canons 515 § 1 and 516 § 1), with the exception of the church at Devonia Road in London. A Polish Community is sometimes referred to as a "parish '' but is not a parish in the canonical sense. Hence the Community is not a juridical person. The canonical juridical personality which represents the interests of all Polish Communities is vested in the Polish Catholic Mission. Since the 2004 accession of Poland to the European Union there has been further large - scale Polish immigration to the UK. Currently the Polish Catholic Mission includes around 219 parishes and pastoral centres with 114 priests. The current rector of the Polish Catholic Mission is Rev. Stefan Wylężek. In Poland, the Polish Bishops Conference has a delegate with special responsibility for émigré Poles. The current postholder is Bishop Ryszard Karpiński. The Tablet reported in December 2007 that the Polish Catholic Mission says these parishes follow a pastoral programme set by the Polish conference of bishops and are viewed as "an integral part of the Polish church ''. In December 2007 Cardinal Cormac Murphy - O'Connor said "I 'm quite concerned that Poles are creating a separate Church in Britain -- I would want them to be part of the Catholic life of this country. I would hope those responsible for the Polish Church here, and the Poles themselves, will be aware that they should become a part of local parishes as soon as possible when they learn enough of the language. '' Mgr Kukla stressed that the Polish Catholic Mission continues to have a "good relationship '' with the hierarchy in England and Wales and said "Integration is a long process. '' Significantly, the Polish Mission co-operated fully with the English hierarchy 's recent research enquiry into the needs of migrants in London 's Catholic community. The resulting report, "The Ground of Justice '', by Francis Davis, Jolanta Stanke and others of the Von Hügel Institute at St Edmund 's College, Cambridge, was commissioned by Archbishop Kevin McDonald of Southwark and Bishop Thomas McMahon of Brentwood.
A thousand people attending Mass in three London dioceses were surveyed using anonymous questionnaires available in Polish, Lithuanian, Chinese, French, Spanish, Portuguese, and English. The congregations were from mainstream diocesan parishes, ethnic chaplaincies, and churches of the Polish Vicariate. The report 's findings described how 86 % of eastern Europeans said the availability of Mass in their mother tongue was a reason for their choosing to worship in a particular church. The report 's recommendations emphasised cooperation with key overseas bishops ' conferences, dioceses, and religious institutes on the recruitment and appointment of ethnic chaplains; the recognition of language skills as a legitimate training activity and cost for seminarians, clergy, parish volunteers and lay employees; and the consolidation of dispersed charitable funds for pastoral development and the poor in London. On November 3, 2016, John Bingham reported in The Daily Telegraph that Cardinal Vincent Nichols had officially acknowledged that, throughout the decades following World War II, the Catholic Church in England and Wales had pressured young, unmarried mothers in the country to put their children up for adoption through agencies linked to the Church, and that he had offered an apology. Notable saints and Doctors of the Church associated with England range from the pre-Reformation period to the present, alongside Blesseds, Venerables, Servants of God and other open causes. A number of events which Catholics hold to be miracles are associated with England, among them several well - known Marian apparitions, cases of alleged incorruptibility of some Catholic saints, and two cases of alleged stigmata, neither of which has been approved by the Vatican.
where was the longest war between pandav and kaurav held
Kurukshetra war - Wikipedia The Kurukshetra War, also called the Mahabharata War, is a war described in the Indian epic Mahabharata. The conflict arose from a dynastic succession struggle between two groups of cousins, the Kauravas and Pandavas, for the throne of Hastinapura in an Indian kingdom called Kuru. It involved a number of ancient kingdoms participating as allies of the rival groups. The location of the battle is described as having occurred in Kurukshetra in north India. Despite referring to only eighteen days of fighting, the war narrative forms more than a quarter of the book, suggesting its relative importance within the epic, which overall spans decades in the lives of the warring families. The narrative describes individual battles and deaths of various heroes of both sides, military formations, war diplomacy, meetings and discussions among the characters, and the weapons used. The chapters (parvas) dealing with the war (from chapter six to ten) are considered amongst the oldest in the entire Mahabharata. The historicity of the war remains subject to scholarly discussions. Attempts have been made to assign a historical date to the Kurukshetra War. Suggested dates range from 5561 BCE to around 950 BCE, while popular tradition holds that the war marks the transition to Kaliyuga and thus dates it to 3102 BCE. Mahabharata, one of the most important Hindu epics, is an account of the life and deeds of several generations of a ruling dynasty called the Kuru clan. Central to the epic is an account of a war that took place between two rival families belonging to this clan. Kurukshetra (literally "field of the Kurus '') was the battleground on which this war, known as the Kurukshetra War, was fought. Kurukshetra was also known as "Dharmakshetra '' (the "field of Dharma ''), or field of righteousness. The Mahabharata tells that this site was chosen because a sin committed on this land was forgiven on account of its sanctity. The Kuru territories were divided into two and were ruled by Dhritarashtra (with his capital at Hastinapura) and Yudhishthira of the Pandavas (with his capital at Indraprastha). The immediate dispute between the Kauravas (sons of Dhritarashtra) and the Pandavas arose from a game of dice, which Duryodhana won by deceit, forcing his Pandava cousins to transfer their entire territories to the Kauravas (to Hastinapura) and to "go into exile '' for thirteen years. The dispute escalated into a full - scale war when Duryodhana, driven by jealousy, refused to restore the Pandavas ' territories after the exile as earlier agreed, objecting that they had been discovered while in exile and that no return of their kingdom had been promised. Swaraj Prakash Gupta and K.S. Ramachandran state that the divergence of views regarding the Mahabharata war is due to the absence of reliable history of the ancient period. This is also true of the historical period, where there is likewise no unanimity of opinion on innumerable issues. Dr Mirashi accepts that there has been interpolation in the Mahabharata and observes: ' Originally it (Mahabharata) was a small poem of 8,800 verses and was known by the name Jaya (victory), then it swelled to 24,000 verses and became known as Bharata, and, finally, it reached the present stupendous size of the one lakh verses, passing under the name Mahabharata. ' The historicity of the Kurukshetra War is subject to scholarly discussion and dispute.
The existing text of the Mahabharata went through many layers of development, and mostly belongs to the period between c. 500 BCE and 400 CE. Within the frame story of the Mahabharata, the historical kings Parikshit and Janamejaya are featured significantly as scions of the Kuru clan, and Michael Witzel concludes that the general setting of the epic has a historical precedent in Iron Age (Vedic) India, where the Kuru kingdom was the center of political power during roughly 1200 to 800 BCE. According to Professor Alf Hiltebeitel, the Mahabharata is essentially mythological. Indian historian Upinder Singh has written: Whether a bitter war between the Pandavas and the Kauravas ever happened can not be proved or disproved. It is possible that there was a small - scale conflict, transformed into a gigantic epic war by bards and poets. Some historians and archaeologists have argued that this conflict may have occurred in about 1000 BCE. Despite the inconclusiveness of the data, attempts have been made to assign a historical date to the Kurukshetra War. Popular tradition holds that the war marks the transition to Kaliyuga and thus dates it to 3102 BCE. A number of other proposals have also been put forward. Though the Kurukshetra War is not mentioned in Vedic literature, its prominence in later literature led A.L. Basham, writing in 1954, to conclude that there was a great battle at Kurukshetra which, "magnified to titanic proportions, formed the basis of the story of the greatest of India 's epics, the Mahabharata. '' Acknowledging that later "generations looked upon it as marking an end of an epoch, '' he suggested that rather than being a civil war it might have been "a muddled recollection of the conquest of the Kurus by a tribe of Mongol type from the hills. '' He saw it as useless to the historian and dated the war to the 9th century BCE based on archaeological evidence and "some evidence in the Brahmana literature itself to show that it can not have been much earlier. '' According to Asko Parpola, the war may have taken place during the later phase of the Painted Grey Ware period, circa 750 - 350 BCE. Parpola also notes that the Pandava heroes are not mentioned in Vedic literature before the Grhyasutras. Parpola suggests that the Pandavas were Iranian migrants, who came to South Asia around 800 BCE. Puranic literature presents genealogical lists associated with the Mahabharata narrative. The evidence of the Puranas is of two kinds. Of the first kind, there is the direct statement that there were 1015 (or 1050) years between the birth of Parikshit (Arjuna 's grandson) and the accession of Mahapadma Nanda, commonly dated to 382 BCE, which would yield an estimate of about 1400 BCE for the Bharata battle. However, this would imply improbably long reigns on average for the kings listed in the genealogies. Of the second kind are analyses of parallel genealogies in the Puranas between the times of Adhisimakrishna (Parikshit 's great - grandson) and Mahapadma Nanda. Pargiter accordingly estimated 26 generations by averaging 10 different dynastic lists and, assuming 18 years for the average duration of a reign, arrived at an estimate of 850 BCE for Adhisimakrishna and thus approximately 950 BCE for the Bharata battle.
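These Puranic estimates are simple arithmetic and can be reproduced in a few lines of Python. A minimal sketch, using only figures quoted above (the 382 BCE anchor for Mahapadma Nanda 's accession, Pargiter 's 26 generations at 18 years each, and the 100 - year step back from Adhisimakrishna to the battle implied by his two dates); BCE years are written as negative numbers for convenience:

# Reproducing the Pargiter-style calculation described above.
nanda_accession = -382          # accession of Mahapadma Nanda, commonly dated 382 BCE
generations = 26                # averaged over 10 Puranic dynastic lists
avg_reign_years = 18            # Pargiter's assumed average reign
adhisimakrishna = nanda_accession - generations * avg_reign_years
battle = adhisimakrishna - 100  # step implied by Pargiter's 850 / 950 BCE figures
print(adhisimakrishna, battle)  # -850 -950, i.e. 850 BCE and approximately 950 BCE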
B. B. Lal used the same approach with a more conservative assumption of the average reign to estimate a date of 836 BCE and correlated this with archaeological evidence from Painted Grey Ware sites, the association between PGW artifacts and places mentioned in the epic being strong. John Keay concurs and also gives 950 BCE for the Bharata battle. Jaya, the core of the Mahabharata, is structured in the form of a dialogue between Kuru king Dhritarashtra (born blind) and Sanjaya, his advisor and chariot driver. Sanjaya narrates each incident of the Kurukshetra War, fought in 18 days, as and when it happened. Dhritarashtra sometimes asks questions, expresses doubts and sometimes laments, knowing about the destruction caused by the war to his sons, friends and kinsmen. He also feels guilty about his own role in the events that led to this war, so destructive to the entire Indian subcontinent. Some 18 chapters of Vyasa 's Jaya constitute the Bhagavad Gita, one of the sacred texts of the Hindus. Thus, this work of Vyasa, called Jaya, deals with diverse subjects like geography, history, warfare, religion and morality. According to the Mahabharata itself, the Jaya was recited to King Janamejaya, the great - grandson of Arjuna, by Vaisampayana, a disciple of Vyasa (the work then being called the Bharata). The recitation of Vaisampayana to Janamejaya was then recited again by a professional storyteller named Ugrasrava Sauti, many years later, to an assemblage of sages performing the 12 - year - long sacrifice for King Saunaka Kulapati in the Naimisha forest (the work by then being called the Mahabharata). In the beginning, Sanjaya gives a description of the various continents of the Earth and the other planets, and focuses on the Indian Subcontinent, then gives an elaborate list of hundreds of kingdoms, tribes, provinces, cities, towns, villages, rivers, mountains and forests of the (ancient) Indian Subcontinent (Bharata Varsha). He also explains the military formations adopted by each side on each day, the death of each hero and the details of each day 's fighting. As a last attempt at peace, as called for in Rajadharma, Krishna, the chieftain of the Yadavas and lord of the kingdom of Dwaraka, travelled to the kingdom of Hastinapur to persuade the Kauravas to see reason, avoid the bloodshed of their own kin, and embark upon a peaceful path, with him acting as the "Divine '' ambassador of the Pandavas. Duryodhana was insulted that Krishna had turned down his invitation to accommodate himself in the royal palace. Determined to stop the peace mission and adamant about going to war with the Pandavas, Duryodhana plotted to arrest Krishna and to insult, humiliate, and defame him in front of the entire royal court of Hastinapura, as a challenge to the prestige of the Pandavas and a declaration of open war. At the formal presentation of the peace proposal by Krishna in the Kuru Mahasabha, at the court of Hastinapur, Krishna asked Duryodhana to return Indraprastha to the Pandavas and restore the status quo; or, if not, give over at least five villages, one for each of the Pandavas. Duryodhana said he would not give the Pandavas land even as much as the tip of a needle. Krishna 's peace proposals were ignored and dismissed, and Duryodhana publicly ordered his soldiers, even after the warnings from all the elders, to arrest Krishna. Krishna laughed and displayed his divine form, radiating intense light.
Lord Krishna cursed Duryodhana that his downfall was certain at the hands of the one who had sworn to tear off his thigh, to the shock of the blind king, who tried to pacify the Lord with words as calm as he could find. His peace mission utterly insulted by Duryodhana, Krishna returned to the Pandava camp at Upaplavya to inform the Pandavas that the only course left to uphold the principles of virtue and righteousness was now inevitable: war. During the course of his return, Krishna met Karna, Kunti 's firstborn (born before Yudhishthira), and requested him to help his brothers and fight on the side of dharma. However, having been helped by Duryodhana, Karna told Krishna that he would fight against the Pandavas, as he had a debt to repay. Earlier, Duryodhana and Arjuna had both gone to Krishna at Dwarka to ask for his help and that of his army. Duryodhana arrived first and found Krishna asleep. Being arrogant and viewing himself as equal to Krishna, Duryodhana chose a seat at Krishna 's head and waited for him to wake. Arjuna arrived later and, being a humble devotee of Krishna, chose to sit and wait at Krishna 's feet. When Krishna woke up, he saw Arjuna first and gave him the first right to make his request. Krishna told Arjuna and Duryodhana that he would give the Narayani Sena to one side and himself as a non-combatant to the other. Since Arjuna was given the first opportunity to choose, Duryodhana was worried that Arjuna would choose the mighty army of Krishna. When given the choice of either Krishna 's army or Krishna himself on their side, Arjuna on behalf of the Pandavas chose Krishna, unarmed and alone, relieving Duryodhana, who thought Arjuna the greatest of fools. Later Arjuna requested Krishna to be his charioteer and Krishna, being an intimate friend of Arjuna, agreed wholeheartedly, and hence received the name Parthasarathy, or ' charioteer of the son of Pritha '. Both Duryodhana and Arjuna returned satisfied. While camping at Upaplavya in the territory of Virata, the Pandavas gathered their armies. Contingents arrived from all parts of the country and soon the Pandavas had a large force of seven divisions. The Kauravas managed to raise an even larger army of eleven divisions. Many kingdoms of ancient India, such as Dwaraka, Kasi, Kekaya, Magadha, Chedi, Matsya, Pandya, and the Yadus of Mathura, were allied with the Pandavas, while the allies of the Kauravas comprised the kings of Pragjyotisha, Kalinga, Anga, Kekaya, Sindhudesa, Avanti in Madhyadesa, the Gandharas, Bahlikas, Mahishmati and Kambojas (with the Yavanas, Sakas, Trilinga and Tusharas) and many others. Seeing that there was now no hope for peace, Yudhishthira, the eldest of the Pandavas, asked his brothers to organize their army. The Pandavas accumulated an army of seven Akshauhinis with the help of their allies. These divisions were led by Drupada, Virata, Abhimanyu, Shikhandi, Satyaki, Nakula and Sahadeva. After consulting the commanders, the Pandavas appointed Dhrishtadyumna as the supreme commander of the Pandava army. The Mahabharata says that kingdoms from all over ancient India supplied troops or provided logistic support on the Pandava side; some of these were Kekaya, Pandya, the Cholas, Magadha, and many more. The Kaurava army consisted of 11 Akshauhinis. Duryodhana requested Bhishma to command the Kaurava army. Bhishma accepted on the condition that, while he would fight the battle sincerely, he would not harm the five Pandava brothers. In addition, Bhishma said that Karna would not fight under him as long as he himself was on the battlefield.
Having little choice, Duryodhana agreed to Bhishma 's conditions and made him the supreme commander of the Kaurava army, while Karna was debarred from fighting. But Karna entered the war later, when Bhishma was severely wounded by Arjuna. Apart from the one hundred Kaurava brothers, headed by Duryodhana himself and his brother Dussasana, the Kauravas were assisted on the battlefield by Drona and his son Ashwatthama, the Kauravas ' brother - in - law Jayadratha, the Brahmin Kripa, Kritavarma, Shalya, Sudakshina, Bhurishravas, Bahlika, Shakuni, Bhagadatta and many more who were bound by their loyalty towards either Hastinapura or Dhritarashtra. The kingdom of Bhojakata, with its King Rukmi, Vidura, the ex-prime minister of Hastinapur and younger brother to Dhritarashtra, and Balarama were the only neutrals in this war. Rukmi wanted to join the war, but Arjuna refused to allow him, because he had lost to Krishna during Rukmini 's swayamvara and yet boasted about his strength and his army, while Duryodhana in turn would not take a warrior whom Arjuna had already rejected. Vidura did not want to see the bloodshed of the war and had been gravely insulted by Duryodhana, although he was regarded as the embodiment of Dharma himself and could have won the war for the Kauravas. The powerful Balarama refused to fight at Kurukshetra, because he had been both Bhima 's and Duryodhana 's teacher in gada - yuddha (fighting with maces) and his brother Krishna stood on the other side. The combined number of warriors and soldiers in both armies was approximately 3.94 million. Each Akshauhini was under a commander or a general, apart from the commander - in - chief or the generalissimo who was the head of the entire army.
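That combined figure can be sanity - checked against the standard akshauhini composition quoted later in this article. A minimal sketch in Python, counting each chariot, elephant, horseman and foot - soldier as a single unit (a simplification, since crews and drivers would push the true head - count higher):

# Checking the ~3.94 million figure from the two army sizes above.
chariots, elephants, cavalry, infantry = 21870, 21870, 65610, 109350
akshauhini = chariots + elephants + cavalry + infantry  # 218,700 units
divisions = 7 + 11                                      # Pandava + Kaurava akshauhinis
print(divisions * akshauhini)                           # 3,936,600, i.e. ~3.94 million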
During the Kurukshetra War, various types of weapons were used by prominent warriors as well as ordinary soldiers. The weapons included the bow, the mace, the sword, the lance and the dart. Almost all prominent warriors used bows, including the Pandavas, the Kauravas, Bhishma, Drona, Karna, Arjuna, Satyaki, Drupada, Jayadratha, Abhimanyu, Kripa, Kritavarma, Dhrishtadyumna and Shalya. However, many of them frequently used other weapons as well; for instance, the mace was used by Bhima, Duryodhana, Shalya, and Karna; the sword by Nakula, Satyaki, Jayadratha, Dhrishtadyumna, Karna and Kripa; and the lance by Karna, Yudhishthira, Shalya and Sahadeva. The two supreme commanders met and framed "rules of ethical conduct '', dharmayuddha, for the war. Most of these rules were broken in the course of the war after the fall of Bhishma. For example, the rule that a single warrior may not be attacked by many was violated on the 13th day, when Abhimanyu was slain. It is said that the year in which the Mahabharata War took place saw three solar eclipses on earth in a span of thirty days. Eclipses are considered ill omens for life on earth according to Hindu astrology. On the first day of the war, as on all the following days, the Kaurava army stood facing west and the Pandava army stood facing east. The Kaurava army was formed such that it faced all sides: elephants formed its body; the kings, its head; and the steeds, its wings. Bhishma, in consultation with his commanders Drona, Bahlika and Kripa, remained in the rear. The Pandava army was organised by Yudhishthira and Arjuna in the Vajra formation. Because the Pandava army was smaller than the Kaurava 's, they decided to employ the tactic of each warrior engaging as many enemies as possible. This involved an element of surprise, with the bowmen showering arrows from behind the frontal attackers. The attackers in the front were equipped with short - range weapons like maces, battle - axes, swords and lances. Ten divisions (Akshauhinis) of the Kaurava army were arranged in a formidable phalanx. The eleventh was put under the immediate command of Bhishma, partly to protect him. The safety of the supreme commander Bhishma was central to Duryodhana 's strategy, as he had placed all his hope on the great warrior 's abilities. Dushasana, the younger brother of Duryodhana, was the military officer in charge of Bhishma 's protection. When the war was declared and the two armies were facing each other, Arjuna realized that he would have to kill his dear granduncle (Bhishma), on whose lap he had played as a child, and his respected teacher (Drona), who had held his hand and taught him how to hold the bow and arrow, making him the greatest archer in the world. Arjuna felt weak and sickened at the prospect of killing his entire family, including his 100 cousins, and friends such as Ashwatthama. Despondent and confused about what was right and what was wrong, Arjuna turned to Krishna for divine advice and teachings. Krishna, whom Arjuna had chosen as his charioteer, advised him of his duty. This conversation forms the Bhagavad Gita, one of the most respected religious and philosophical texts in the Hindu religion. Krishna instructed Arjuna not to yield to degrading impotence but to fight his kin, for that was the only way to righteousness. He also reminded him that this was a war between righteousness and unrighteousness (dharma and adharma) and that it was Arjuna 's duty to slay anyone who supported the cause of unrighteousness, or sin. Krishna then revealed his divine form and explained that he is born on earth in each aeon when evil raises its head. The Gita also forms one of the foremost treatises on the several aspects of Yoga and mystical knowledge. Before the battle began, Yudhishthira did something unexpected. He suddenly dropped his weapons, took off his armour and started walking towards the Kaurava army with folded hands in prayer. The Pandava brothers and the Kauravas looked on in disbelief, thinking Yudhishthira was surrendering before the first arrow was shot. Yudhishthira 's purpose became clear, however, when he fell at Bhishma 's feet to seek his blessing for success in battle. Bhishma, grandfather to both the Pandavas and Kauravas, blessed Yudhishthira. Yudhishthira returned to his chariot and the battle was ready to commence. When the battle commenced, Arjuna created a Vajra formation, and Bhishma went through the Pandava formation wreaking havoc wherever he went, but Abhimanyu, Arjuna 's son, seeing this, went straight at Bhishma, defeated his bodyguards and directly attacked the commander of the Kaurava forces. However, the young warrior could n't match the prowess of Bhishma, and was defeated. The Pandavas suffered heavy losses and were defeated at the end of the first day. Virata 's sons, Uttara and Sweta, were slain by Shalya and Bhishma. Krishna consoled the distraught Yudhishthira, saying that eventually victory would be his. The second day of the war commenced with a confident Kaurava army facing the Pandavas. Arjuna, realizing that something needed to be done quickly to reverse the Pandava losses, decided that he had to try to kill Bhishma. Krishna skillfully located Bhishma 's chariot and steered Arjuna toward him.
Arjuna tried to engage Bhishma in a duel, but the Kaurava soldiers placed a cordon around Bhishma to protect him and attacked Arjuna to try to prevent him from directly engaging Bhishma. Arjuna and Bhishma fought a fierce battle that raged for hours. Drona and Dhrishtadyumna similarly engaged in a duel in which Drona defeated Dhrishtadyumna. Bhima intervened and rescued Dhrishtadyumna. Duryodhana sent the troops of Kalinga to attack Bhima, and most of them, including the king of Kalinga, lost their lives at his hands. Bhishma immediately came to relieve the battered Kalinga forces. Satyaki, who was assisting Bhima, shot at Bhishma 's charioteer and killed him. Bhishma 's horses, with no one to control them, bolted, carrying Bhishma away from the battlefield. The Kaurava army had suffered great losses at the end of the second day, and was considered defeated. On the third day, Bhishma arranged the Kaurava forces in the formation of an eagle, with himself leading from the front, while Duryodhana 's forces protected the rear. Bhishma wanted to be sure of avoiding any mishap. The Pandavas countered this by using the crescent formation, with Bhima and Arjuna at the head of the right and the left horns, respectively. The Kauravas concentrated their attack on Arjuna 's position. Arjuna 's chariot was soon covered with arrows and javelins. Arjuna, with amazing skill, built a fortification around his chariot with an unending stream of arrows from his bow. Abhimanyu and Satyaki combined to defeat the Gandhara forces of Shakuni. Bhima and his son Ghatotkacha attacked Duryodhana in the rear. Bhima 's arrows hit Duryodhana, who swooned in his chariot. His charioteer immediately drove them out of danger. Duryodhana 's forces, however, saw their leader fleeing the battlefield and soon scattered. Bhishma soon restored order and Duryodhana returned to lead the army. He was angry with Bhishma, however, at what he saw as leniency towards the five Pandava brothers, and spoke harshly to his commander. Bhishma, stung by this unfair charge, fell on the Pandava army with renewed vigor. It was as if there were more than one Bhishma on the field. Arjuna attacked Bhishma, trying to restore order. Arjuna and Bhishma again engaged in a fierce duel; however, Arjuna 's heart was not in the battle, as he did not like the idea of attacking his granduncle. During the battle, Bhishma killed numerous soldiers of Arjuna 's armies. The fourth day of the battle was noted for the valour shown by Bhima. Bhishma commanded the Kaurava army to move on the offensive from the outset. While Abhimanyu was still in his mother 's womb, Arjuna had taught him how to break into the chakra vyuha. But, before explaining how to exit the chakra vyuha, Arjuna was interrupted by Krishna (another version of the story is that Abhimanyu 's mother fell asleep while Arjuna was explaining the exit strategy). Thus, from birth, Abhimanyu knew only how to enter the chakra vyuha, but did n't know how to come out of it. When the Kauravas formed the chakra vyuha, Abhimanyu entered it but was surrounded and attacked by a number of Kaurava princes. Arjuna joined the fray in aid of Abhimanyu. Bhima appeared on the scene with his mace aloft and started attacking the Kauravas. Duryodhana sent a huge force of elephants at Bhima. When Bhima saw the mass of elephants approaching, he got down from his chariot and attacked them singlehandedly with his iron mace. They scattered and stampeded into the Kaurava forces, killing many.
Duryodhana ordered an all - out attack on Bhima. Bhima withstood all that was thrown at him and attacked Duryodhana 's brothers, killing eight of them. Bhima was soon struck in the chest by an arrow from Dushasana, the second - eldest Kaurava, and sat down in his chariot, dazed. Duryodhana, overwhelmed by sorrow at the loss of his brothers, went to Bhishma at the end of the fourth day of the battle and asked his commander how the Pandavas, facing a superior force against them, could still prevail and win. Bhishma replied that the Pandavas had justice on their side and advised Duryodhana to seek peace. When the battle resumed on the fifth day, the slaughter continued. The Pandava army again suffered against Bhishma 's attacks. Satyaki bore the brunt of Drona 's attacks and could not withstand them. Bhima drove by and rescued Satyaki. Arjuna fought and killed thousands of soldiers sent by Duryodhana to attack him. Bhima engaged in a fierce duel with Bhishma, which remained inconclusive. Drupada and his son Shikhandi drove to aid Bhima in his fight with Bhishma, but they were stopped by Vikarna, one of Duryodhana 's brothers, who attacked them with his arrows, injuring both father and son badly. The unimaginable carnage continued during the ensuing days of the battle. The sixth day was marked by a prodigious slaughter. Drona caused immeasurable loss of life on the Pandava side. The formations of both the armies were broken. However, Bhima managed to penetrate the Kaurava formation and attacked Duryodhana. Duryodhana was defeated, but was rescued by others. The Upapandavas (sons of Draupadi) fought with Ashwatthama and destroyed his chariot. The day 's battle ended with the defeat of the Kauravas. On the 7th day, Drona slew Shanka, a son of Virata. The terrific carnage continued, and the day 's battle ended with the victory of the Kauravas. On the 8th day, Bhima killed 17 of Dhritarashtra 's sons. Iravan, the son of Arjuna and the snake - princess Ulupi, killed five brothers of Shakuni, princes hailing from Gandhara. Duryodhana sent the Rakshasa fighter Alamvusha to kill Iravan, and the latter was killed by the Rakshasa after a fierce fight. The day ended with a crushing defeat of the Kauravas. On the 9th day, Krishna, overwhelmed by anger at the apparent inability of Arjuna to defeat Bhishma, rushed towards the Kaurava commander, jumping furiously from his chariot and taking up the wheel of a fallen chariot in his hands. According to some texts, Bhishma at first tried to attack Krishna with his arrows; then, as the entire cosmos came to rest, the moment arrived for Bhishma, as once instructed by his mother Ganga, to learn the actual dharma, and Krishna revealed himself as the supreme Parabrahman, after which Bhishma laid down his arms and stood ready to die at the hands of the Lord. But Arjuna stopped Krishna, reminding him of his promise not to wield a weapon. Realizing that the war could not be won as long as Bhishma was standing, Krishna suggested the strategy of placing a eunuch in the field to face him. Some sources, however, state that it was Yudhishthira who visited Bhishma 's camp at night to ask him for help; Bhishma said that he would not fight a eunuch. On the tenth day, the Pandavas, unable to withstand Bhishma 's prowess, decided to put Shikhandi, who had been a woman in a prior life, in front of Bhishma, as Bhishma had taken a vow not to attack a woman. Shikhandi 's arrows fell on Bhishma without hindrance.
Arjuna positioned himself behind Shikhandi, protecting himself from Bhishma 's attack, and aimed his arrows at the weak points in Bhishma 's armour. Soon, with arrows sticking from every part of his body, the great warrior fell from his chariot. His body did not touch the ground, as it was held aloft by the arrows protruding from it. The Kauravas and Pandavas gathered around Bhishma and, at his request, Arjuna placed three arrows under Bhishma 's head to support it. Bhishma had promised his father, King Shantanu, that he would live until Hastinapur was secured from all directions. To keep this promise, Bhishma used the boon of "Ichcha Mrityu '' (self - willed death) given to him by his father. After the war was over, when Hastinapur had become safe from all sides, and after giving lessons on politics and the Vishnu Sahasranama to the Pandavas, Bhishma died on the first day of Uttarayana. With Bhishma unable to continue, Karna entered the battlefield, much to Duryodhana 's joy. Duryodhana made Drona the supreme commander of the Kaurava forces, at Karna 's suggestion. Duryodhana wanted to capture Yudhishthira alive. Killing Yudhishthira in battle would only enrage the Pandavas more, whereas holding him as hostage would be strategically useful. Drona formulated his battle plans for the eleventh day to this end. He cut down Yudhishthira 's bow, and the Pandava army feared that their leader would be taken prisoner. Arjuna rushed to the scene, however, and with a flood of arrows stopped Drona. With his attempts to capture Yudhishthira thwarted, Drona confided to Duryodhana that it would be difficult as long as Arjuna was around. So he ordered the Samsaptakas (the Trigarta warriors headed by Susharma, who had vowed to either conquer or die) to keep Arjuna busy in a remote part of the battlefield, an order which they readily obeyed, on account of their old hostilities with the Pandava scion. However, Arjuna managed to defeat them before the afternoon, and then faced Bhagadatta, the ruler of Pragjyotisha (modern - day Assam, India), who had been creating havoc among the Pandava troops, defeating great warriors like Bhima, Abhimanyu and Satyaki. Bhagadatta, riding his gigantic elephant Supratika, fought a fierce duel with Arjuna, and finally Arjuna succeeded in defeating and killing his antagonist. Drona continued his attempts to capture Yudhishthira. The Pandavas, however, fought hard and delivered severe blows to the Kaurava army, frustrating Drona 's plans. On the 13th day, Drona arrayed his troops in the Chakra / Padma / Kamala formation, a very complex and almost impenetrable formation. His target remained the same, that is, to capture Yudhishthira. Among the Pandavas, only Arjuna and Krishna knew how to penetrate this formation, and in order to prevent them from doing so, the Samsaptakas led by Susharma again challenged Arjuna and kept him busy at a remote part of the battlefield the whole day. Arjuna killed thousands of Samsaptakas; however, he could n't exterminate all of them. On the other side of the battlefield, the remaining four Pandavas and their allies were finding it impossible to break Drona 's Chakra formation. Yudhishthira instructed Abhimanyu, the son of Arjuna and Subhadra, to break the Chakra / Padma formation. Abhimanyu knew the strategy of entering the Chakra formation, but did not know how to exit it. So the Pandava heroes followed him, to protect him from any potential danger.
As soon as Abhimanyu entered the formation, however, King Jayadratha stopped the Pandava warriors. He held at bay the whole Pandava army, thanks to a boon obtained from Lord Shiva, and defeated Bhima and Satyaki. Inside the Chakra / Kamala formation, Abhimanyu slew tens of thousands of warriors. These included Vrihadvala (the ruler of Kosala), the ruler of Asmaka, Martikavata (the son of Kritavarma), Rukmaratha (the son of Shalya), Shalya 's younger brother, Lakshmana (the son of Duryodhana) and many others. He also managed to defeat great warriors like Drona, Ashwatthama, Kritavarma and others. Facing the prospect of the complete annihilation of their army, the Kaurava commanders devised a strategy to deter Abhimanyu from causing further damage to their force. According to Drona 's instructions, six warriors together attacked Abhimanyu (the warriors included Drona himself, Karna, Kripa and Kritavarma), and deprived Abhimanyu of his chariot, bow, sword and shield. Abhimanyu, however, determined to fight, picked up a mace, smashed Ashwatthama 's chariot (upon which the latter fled), killed one of Shakuni 's brothers and numerous troops and elephants, and finally encountered the son of Dussasana in a mace - fight. The latter was a strong mace - fighter, and an exhausted Abhimanyu was defeated and killed by his adversary. Upon learning of the death of his son, Arjuna vowed to kill Jayadratha on the morrow, before the battle ended at sunset, or else to throw himself into the fire. While searching for Jayadratha on the battlefield, Arjuna slew an akshauhini (a battle formation that consisted of 21,870 chariots (Sanskrit ratha), 21,870 elephants, 65,610 cavalry and 109,350 infantry) of Kaurava soldiers. At Shakuni 's suggestion, Duryodhana hid Jayadratha in the Kaurava camp: if Arjuna failed to kill Jayadratha before sunset, he would have to walk into the fire in keeping with his vow, which would make the war much easier for the Kauravas. Lord Krishna, however, faked a sunset using his Sudarshana Chakra, and the Kauravas insulted and laughed at Arjuna, reminding him of his vow to walk into the fire if he failed to kill Jayadratha. Arjuna simply said to Krishna, "It must have been your wish, Madhav '', and started walking towards the fire. Jayadratha, informed by his soldiers that the sun had set, started towards Kurukshetra to see Arjuna 's end, while Shakuni, who soon realised that the sunset was Krishna 's ploy, returned to warn Duryodhana. Once Jayadratha had returned to the battlefield, Lord Krishna withdrew his chakra, undoing the false sunset. Jayadratha warned Arjuna that, because of a boon granted by his father, whoever caused his severed head to fall to the ground would himself be consumed by fire. Arjuna therefore used the Pashupatastra, acquired from Lord Shiva, to carry Jayadratha 's head to the lap of Jayadratha 's father, who let it drop and so met his own death. Many maharathis, including Drona and Karna, tried to protect Jayadratha, but failed to do so. Arjuna warned that everyone who had supported adharma would be killed in this war in a similarly pathetic way. While Arjuna was destroying the rest of the Shakata vyuha, Vikarna, the third - eldest Kaurava, challenged him to an archery fight. Arjuna asked Bhima to deal with Vikarna, but Bhima was reluctant, because Vikarna had defended the Pandavas during the Draupadi Vastrapaharanam. Bhima and Vikarna showered arrows at each other, and at last Bhima threw his mace at Vikarna, killing him. The muscular Pandava was devastated and mourned his death, saying that Vikarna was a man of Dharma and that it was a pity how his life had ended.
Drona killed Vrihatkshatra, the ruler of Kekaya, and Dhrishtakethu, the ruler of Chedi. The battle continued past sunset. Dushasana 's son, Durmashana, was slain by Prativindya, the eldest son of Draupadi and Yudhishthira, in a duel. When the bright moon rose, Ghatotkacha, the Rakshasa son of Bhima, slaughtered numerous warriors, such as Alambusha and Alayudha, attacking while flying through the air. Karna stood against him and both fought fiercely, until Karna released the Shakti, a divine weapon given to him by Indra. Ghatotkacha increased his size and fell dead on the Kaurava army, killing an Akshauhini of them. After King Drupada and King Virata were slain by Drona, Bhima and Dhrishtadyumna fought him on the fifteenth day. Because Drona was very powerful and unconquerable, having the irresistible Brahmanda astra, Krishna hinted to Yudhishthira that Drona would give up his arms if his son Ashwatthama were dead. Bhima proceeded to kill an elephant named Ashwatthama and loudly proclaimed that Ashwatthama was dead. Drona approached Yudhishthira to seek the truth of his son 's death. Yudhishthira proclaimed "Ashwatthama hatah, naro va kunjaro va '', implying that Ashwatthama had died, but that he was not sure whether it was Drona 's son or an elephant. The latter part of his proclamation (naro va kunjaro va) was drowned out by the sound of the conch blown intentionally by Krishna (a different version of the story is that Yudhishthira pronounced the last words so feebly that Drona could not hear the word elephant). Prior to this incident, the chariot of Yudhishthira, proclaimed as Dharma raja (King of righteousness), had hovered a few inches off the ground. After the event, the chariot settled onto the ground, as he had lied. Drona was disheartened, and laid down his weapons. He was then killed by Dhrishtadyumna, to avenge his father 's death and satisfy his vow. Later, the Pandavas ' mother Kunti secretly met her abandoned son Karna and requested him to spare the Pandavas, as they were his younger brothers. Karna promised Kunti that he would spare them, except for Arjuna, but also added that he would not use the same weapon against Arjuna twice. On the sixteenth day, Karna was made the supreme commander of the Kuru army. Karna fought valiantly but was surrounded and attacked by Pandava generals, who were unable to prevail over him. Karna inflicted heavy damage on the Pandava army, which fled. Then Arjuna successfully resisted Karna 's weapons with his own and also inflicted casualties upon the Kaurava army. The sun soon set and, with darkness and dust making the assessment of proceedings difficult, the Kaurava army retreated for the day. On the same day, Bhima swung his mace and shattered Dushasana 's chariot. Bhima seized Dushasana, ripped his right arm from the shoulder and killed him, tearing open his chest, drinking his blood and carrying some to smear on Draupadi 's untied hair, thus fulfilling the vow he had made when Draupadi was humiliated. On the seventeenth day, Karna defeated the Pandava brothers Nakula, Sahadeva and Yudhishthira in battle but spared their lives. Later, Karna resumed duelling with Arjuna. During their duel, Karna 's chariot wheel got stuck in the mud and Karna asked for a pause. Krishna reminded Arjuna of Karna 's ruthlessness towards Abhimanyu when he had been similarly left without chariot and weapons. Reminded of his son 's fate, Arjuna shot his arrow and decapitated Karna.
Before the battle, Karna 's sacred armour (' Kavacha ') and earrings (' Kundala ') had been given as alms to Lord Indra when he asked for them, which contributed to his death by Arjuna 's arrows. On the 18th day, Shalya took over as the commander - in - chief of the remaining Kaurava forces. Yudhishthira killed King Shalya in spear combat, and Sahadeva killed Shakuni. Realizing that he had been defeated, Duryodhana fled the battlefield and took refuge in a lake, where the Pandavas caught up with him. Under the supervision of the now returned Balarama, a mace battle took place between Bhima and Duryodhana. Bhima, under instructions from Krishna, flouted the rules and struck Duryodhana beneath the waist, mortally wounding him. Ashwatthama, Kripacharya, and Kritavarma met Duryodhana at his deathbed and promised to avenge the actions of Bhima. They attacked the Pandavas ' camp later that night and killed all of the Pandavas ' remaining army, including their children. Amongst the dead were Dhrishtadyumna, Shikhandi, Uttamaujas, and the children of Draupadi. Other than the Pandavas and Krishna, only Satyaki and Yuyutsu survived. At the end of the 18th day, only twelve major warriors survived the war -- the five Pandavas, Krishna, Satyaki, Ashwatthama, Kripacharya, Yuyutsu, Vrishakethu, and Kritavarma. Yudhishthira was crowned king of Hastinapur. After ruling for 36 years, he renounced the throne, passing the title on to Arjuna 's grandson, Parikshit. He then left for the Himalayas with Draupadi and his brothers. Draupadi and four of the Pandavas -- Bhima, Arjuna, Nakula and Sahadeva -- died during the journey. Yudhishthira, the lone survivor, being of pious heart, was invited by Dharma to enter the heavens as a mortal.
they might be giants they might be giants songs
They Might Be Giants (album) - wikipedia They Might Be Giants, sometimes called The Pink Album, is the debut album from Brooklyn - based band They Might Be Giants. It was released by Bar / None in 1986. The album generated two singles, "Do n't Let 's Start '' and "(She Was A) Hotel Detective ''. It is included on Then: The Earlier Years, a compilation of the band 's early material, in its entirety, with the exception of "Do n't Let 's Start '', which is replaced with the single mix for the compilation. "Do n't Let 's Start '', one of the album 's two singles, is often cited as a key track in the band 's catalogue, and its success contributed positively to the sale of the album. As a result of this prominence, when the band made the move to the major label Elektra Records, "Do n't Let 's Start '' was reissued in Europe with alternative B - sides. In July 2014, the band released a live version of the album at no charge on NoiseTrade. All the songs were recorded during the band 's 2013 tour. They Might Be Giants was the second album to be released on the fledgling Bar / None label, with They Might Be Giants as the second group signed to the independent label. Many of the songs on the album existed in a demo form on the band 's 1985 demo tape, which was also technically self - titled, though many were re-recorded or brought into new mixes for the commercial album. The material from the tape was recorded at Studio PASS in New York City with the assistance of Alex Noyes, who permitted the band to use the studio after it closed each day. Additional recording and mixing was done at Dubway Studios. Some unconventional recording techniques were used in the production of the album. The music is spare, mostly uptempo synthesizer - and - guitar pop punctuated with odd sound samples and occasionally veering into sparse country - or folk - like arrangements. Drum and bass tracks are almost entirely synthesized, though the album prominently features the accordion. In order to record the guitar solo in "Absolutely Bill 's Mood '', the band telephoned Eugene Chadbourne in Greensboro, NC from Dubway Studios. Chadbourne played the acoustic solo and it was recorded onto the studio 's answering machine, then mixed into the song. Flansburgh and Linnell occasionally borrow from other recordings throughout the album. "Rhythm Section Want Ad '' features an excerpt from Raymond Scott 's composition, "Powerhouse '', played on the accordion. "Boat of Car '' prominently samples the Johnny Cash song "Daddy Sang Bass ''. Music videos were produced for the album 's two singles, "Do n't Let 's Start '' and "(She Was A) Hotel Detective '', both directed by Adam Bernstein. Significantly, prior to the album 's release, a music video was also created for the song "Put Your Hand Inside the Puppet Head. '' This was the first commercial video ever shot by the band, and was also directed by Bernstein. The album 's cover art was illustrated by Rodney Alan Greenblat, who had no prior association with the band. It extends to the back cover of the album, and shows John Linnell and John Flansburgh riding a large blue dog. John Linnell commented that the illustration caused the record to be mistaken for a children 's album. The art was pasted up by John Flansburgh. The album was received with positive critical attention. 
Robert Christgau of The Village Voice, who gave the album an A, praised its wide variety and cleverness, saying "the hits just keep on coming in an exuberantly annoying show of creative superabundance ''. He also noted the unusual subject matter of the lyrics. Jim Farber of Rolling Stone stressed the album 's "weirdness '', and also noted the diversity in style among songs that "incorporate genres from art pop to country to polka ''. Neither the album nor its singles saw success on the Billboard charts, but the three music videos associated with the album generated positive attention for the band. All tracks written by John Flansburgh and John Linnell.
why divers have to wear special suits before diving into a deep sea
Diving suit - wikipedia A diving suit is a garment or device designed to protect a diver from the underwater environment. A diving suit may also incorporate a breathing gas supply (as in standard diving dress or an atmospheric diving suit), but in most cases the term applies only to the environmental protective covering worn by the diver. The breathing gas supply is usually referred to separately. There is no generic term for the combination of suit and breathing apparatus alone. It is generally referred to as diving equipment or dive gear along with any other equipment necessary for the dive. Diving suits can be divided into two classes: "soft '' or ambient pressure diving suits - examples are wetsuits, dry suits, semi-dry suits and dive skins - and "hard '' or atmospheric pressure diving suits - armored suits that keep the diver at atmospheric pressure at any depth within the operating range of the suit. The first diving suit designs appeared in the early 18th century. Two English inventors developed the first pressure - proof diving suits in the 1710s. John Lethbridge built a completely enclosed suit to aid in salvage work. It consisted of a pressure - proof air - filled barrel with a glass viewing hole and two watertight enclosed sleeves. This suit gave the diver more manoeuvrability to accomplish useful underwater salvage work. After testing this machine in his garden pond (specially built for the purpose), Lethbridge dived on a number of wrecks: four English men - of - war, East Indiamen (both English and Dutch), two Spanish galleons and a number of galleys. He became very wealthy as a result of his salvages. One of his better - known recoveries was on the Dutch Slot ter Hooge, which had sunk off Madeira with over three tons of silver on board. At the same time, Andrew Becker created a leather - covered diving suit with a helmet featuring a window. Becker used a system of tubes for inhaling and exhaling, and demonstrated his suit in the River Thames, London, remaining submerged for an hour. German - born British engineer Augustus Siebe developed the standard diving dress in the 1830s. Expanding on improvements already made by another engineer, George Edwards, Siebe produced his own design - a helmet fitted to a full - length watertight canvas diving suit. Later suits were made from waterproofed canvas invented by Charles Mackintosh. From the late 1800s and throughout most of the 20th century, most standard dress was made from a sheet of solid rubber laminated between layers of tan twill. Dry suits made of latex rubber were used in World War II by Italian frogmen, who found them indispensable. They were made by Pirelli and patented in 1951. Ambient pressure suits are a form of exposure protection, protecting the wearer from the cold. They also provide some defence from abrasive and sharp objects as well as potentially harmful underwater life. They do not protect divers from the pressure of the surrounding water or resulting barotrauma and decompression sickness. There are five main types of ambient pressure diving suits: dive skins, wetsuits and their derivatives the semi-dry suit and the hot - water suit, and dry suits. Apart from hot water suits, these types of suit are not exclusively used by divers but are often used for thermal protection by people engaged in other water sports activities such as surfing, sailing, powerboating, windsurfing, kite surfing, waterskiing, caving and swimming. Added buoyancy due to the volume of the suit is a side effect of most diving suits.
A diving weighting system can be worn to counteract this buoyancy. Dive skins are used when diving in water temperatures above 25 ° C (77 ° F). They are made from spandex or Lycra and provide little thermal protection, but do protect the skin from jellyfish stings, abrasion and sunburn. This kind of suit is also known as a ' Stinger Suit '. Some divers wear a dive skin under a wetsuit, which allows easier donning and (for those who experience skin problems from neoprene) provides additional comfort. The "Dive Skin '' was originally invented to protect scuba divers in Queensland, Australia, against the box jellyfish (Chironex fleckeri). In 1978, Tony Farmer was a swimsuit designer and manufacturer who owned a business called "Daring Designs ''. Besides swimwear, he also made underwear and aerobic wear, which included a full suit in Lycra / Spandex. He became a scuba diver, and that was the catalyst for the invention of the "dive skin '' as we know it today. Wetsuits are relatively inexpensive, simple, expanded neoprene suits that are typically used where the water temperature is between 10 and 25 ° C (50 and 77 ° F). The foamed neoprene of the suit thermally insulates the wearer. Although water can enter the suit, a close - fitting suit prevents excessive heat loss, because little of the water warmed inside the suit escapes to be replaced by cold water, a process referred to as "flushing ''. Proper fit is critical for warmth. A suit that is too loose will allow a large amount of water to circulate over the diver 's skin, taking up body heat. A suit that is too tight is very uncomfortable and can impair circulation at the neck, a very dangerous condition which can cause blackouts. For this reason, many divers choose to have wetsuits custom - tailored instead of buying them "off the rack ''. Many companies offer this service and the cost is often comparable to an off - the - rack suit. Wetsuits are limited in their ability to preserve warmth by three factors: the wearer is still exposed to some water, the suit is compressed by the ambient pressure, reducing effectiveness at depth, and the insulating neoprene can only be made to a certain thickness before it becomes impractical to don and wear (the compression effect is illustrated in the sketch at the end of this article). The thickest commercially available wetsuits are usually 10 mm thick. Other common thicknesses are 7 mm, 5 mm, 3 mm, and 1 mm. A 1 mm suit provides very little warmth and is usually considered a dive skin, rather than a wetsuit. Wetsuits can be made using more than one thickness of neoprene, to put the most thickness where it will be most effective in keeping the diver warm. A similar effect can be achieved by layering wetsuits of different coverage. Some makes of neoprene are softer, lighter and more compressible than others for the same thickness; these are more suitable for non-diving purposes, as they will compress and lose their insulating value more quickly under pressure, though they are more comfortable for surface sports because they are more flexible and allow more freedom of movement. Semi-dry suits are effectively a thick wetsuit with nearly watertight seals at the wrists, neck, ankles and zip. They are typically used where the water temperature is between 10 and 20 ° C (50 and 68 ° F). The seals limit the volume of water entering and leaving the suit, and a close fit minimises the pumping action caused by limb motion. The wearer gets wet in a semi-dry suit, but the water that enters is soon warmed up and does not readily leave the suit, so the wearer remains warm.
The trapped layer of water does not add to the suit 's insulating ability, and any water circulation past the seals still causes heat loss, but semi-dry suits are cheap and simple compared to dry suits, and do not fail catastrophically. They are made from thick neoprene, which provides good thermal protection, but they lose buoyancy and thermal protection as the trapped gas bubbles in the neoprene foam compress at depth. Semi-dry suits are usually made as a one piece full suit with slick inside surface neoprene wrist, cuff and neck seals. Two - piece sets tend to be a one piece full length suit, sometimes described as "long johns '', plus accessories to be worn over, under or with the one - piece suit, such as a shortie tunic, which may be worn separately in warm water, but has no flush - limiting seals at the openings. Semi-dry suits do not usually include hoods, boots or gloves, so separate insulating hoods, boots and gloves are worn. Hot water suits are used in cold water commercial surface supplied diving. A hose in the umbilical line, which links the diver to the surface support, carries the hot water from a heater on the surface down to the suit. The diver controls the flow rate of the water from a valve near his waist, allowing him to vary the warmth of the suit in response to changes in environmental conditions and workload. Tubes inside the suit distribute the water to the limbs, chest, and back. Special boots, gloves, and hood are worn. These suits are normally made of foamed neoprene and are similar to wetsuits in construction and appearance, but they do not fit as closely by design. The wrists and ankles of the suit are open, allowing water to flush out of the suit as it is replenished with fresh hot water from the surface. Hot water suits are often employed for extremely deep dives when breathing mixes containing helium are used. Helium conducts heat much more efficiently than air, which means that the diver will lose large quantities of body heat through the lungs when breathing it. This fact compounds the risk of hypothermia already present in the cold temperatures found at these depths. Under these conditions a hot water suit is a matter of survival, not comfort. Just as an emergency backup source of breathing gas is required, a backup water heater is also an essential precaution whenever dive conditions warrant a hot water suit. If the heater fails and a backup unit can not be immediately brought online, a diver in the coldest conditions can die within minutes; depending on decompression obligations, bringing the diver directly to the surface could prove equally deadly. Heated water in the suit forms an active insulation barrier to heat loss, but the temperature must be regulated within fairly close limits. If the temperature falls below about 32 ° C, hypothermia can result, and temperatures above 45 ° C can cause burn injury to the diver. The diver may not notice a gradual change in inlet temperature, and in the early stages of hypo - or hyperthermia, may not notice the deteriorating condition. The suit is loose fitting to allow unimpeded water flow. This causes a large volume of water (13 to 22 litres) to be held in the suit, which can impede swimming due to the added inertia. When controlled correctly, the hot water suit is safe, comfortable and effective, and allows the diver adequate control of thermal protection; however, hot water supply failure can be life - threatening.
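The 32 -- 45 ° C band above maps naturally onto a simple monitoring rule. A minimal sketch in Python (the function and message wording are hypothetical; only the two temperature limits come from the text above):

# Classify hot-water-suit inlet temperature against the safe band described
# above: below about 32 C risks hypothermia, above about 45 C risks burns.
# Function name and messages are hypothetical; only the limits come from the text.
HYPOTHERMIA_LIMIT_C = 32.0
BURN_LIMIT_C = 45.0

def classify_inlet_temperature(temp_c):
    if temp_c < HYPOTHERMIA_LIMIT_C:
        return "TOO COLD - hypothermia risk, check heater and backup"
    if temp_c > BURN_LIMIT_C:
        return "TOO HOT - burn risk, reduce heater output"
    return "OK"

for reading in (30.5, 38.0, 46.2):
    print(reading, classify_inlet_temperature(reading))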
Dry suits are typically used where the water temperature is between − 2 and 15 ° C (28 and 59 ° F). Water is prevented from entering the suit by seals at the neck and wrists; also, the means of getting the suit on and off (typically a zipper) is waterproof. The suit insulates the wearer in one of two main ways: by maintaining an insulating layer of air in the undersuit between the body and the suit shell (in exactly the way that thermal insulation garments work above water), or by using a watertight expanded neoprene suit shell, which is inherently insulating in the same way as a wet suit, and which can usually be worn with additional insulating undergarments. Both fabric and neoprene drysuits have advantages and disadvantages: a fabric drysuit is more adaptable to varying water temperatures because different garments can be layered underneath. However, they are quite bulky and this causes increased drag and swimming effort. Additionally, if a fabric drysuit malfunctions and floods, it loses nearly all of its insulating properties. Neoprene drysuits are comparatively streamlined like wetsuits, but in some cases do not allow garments to be layered underneath and are thus less adaptable to varying temperatures. An advantage of this construction is that even if it floods completely, it essentially becomes a wetsuit and will still provide a degree of insulation. Special dry suits (typically made of strong rubberised fabric) are worn by commercial divers who work in contaminated environments such as sewage or hazardous chemicals. The hazmat dry suit has integral boots and is sealed to a diving helmet and dry gloves to prevent any contact with the hazardous material. Constant volume dry suits have a system allowing the suit to be inflated to prevent "suit squeeze '' caused by increasing pressure and to prevent excessive compression of the insulating undergarments. They also have vents allowing the excess air to escape from the suit during ascent. For additional warmth, some dry suit users inflate their suits with argon, an inert gas which has superior thermal insulating properties compared to air. The argon is carried in a small cylinder, separate from the diver 's breathing gas. This arrangement is frequently used when the breathing gas contains helium, which is a very poor insulator in comparison with other breathing gases. A "shortie '' wetsuit may be worn over a full wetsuit for added warmth. Some vendors sell a very similar item and refer to it as a ' core warmer ' when worn over another wetsuit. A "skin '' may also be worn under a wetsuit. This practice started with divers (of both sexes) wearing body tights under a wetsuit for extra warmth and to make donning and removing the wetsuit easier. A "skin '' may also be worn as an undersuit beneath a drysuit in temperatures where a full undersuit is not necessary. An atmospheric diving suit is a small one - man articulated submersible of anthropomorphic form which resembles a suit of armour, with elaborate pressure joints to allow articulation while maintaining an internal pressure of one atmosphere. These can be used for very deep dives for long periods without the need for decompression, and eliminate the majority of physiological dangers associated with deep diving. Divers do not even need to be skilled swimmers. Mobility and dexterity are usually restricted by mechanical constraints, and the ergonomics of movement are problematic.
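Why argon rather than air for suit inflation, and why is helium so poor? Conductive heat loss through a static gas layer scales with the gas's thermal conductivity (Fourier's law, q = k · ΔT / d). A small Python sketch using rounded room - temperature handbook conductivities (the layer thickness and temperature difference are assumed for illustration):

# Relative conductive heat loss through a static 5 mm gas layer
# (Fourier's law: q = k * dT / d). Conductivities in W/(m*K) are rounded
# room-temperature handbook values, quoted for illustration only.
CONDUCTIVITY = {"argon": 0.018, "air": 0.026, "helium": 0.15}

def heat_flux_w_per_m2(gas, delta_t_c=20.0, thickness_m=0.005):
    return CONDUCTIVITY[gas] * delta_t_c / thickness_m

air_flux = heat_flux_w_per_m2("air")
for gas in ("argon", "air", "helium"):
    q = heat_flux_w_per_m2(gas)
    print("%-6s %5.0f W/m^2 (%.1fx air)" % (gas, q, q / air_flux))

Under these assumptions argon conducts roughly 30 % less heat than air, while helium conducts several times more, which is why helium-rich breathing gas is never used to inflate the suit.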
where does the sea get its colour from
Ocean color - Wikipedia The "color '' of the ocean is determined by the interactions of incident light with substances or particles present in the water. White light from the sun is made up of a combination of colors that are broken apart by water droplets in a "rainbow '' spectrum. Large quantities of water, even in a swimming pool, would appear blue as well. When light hits the water surface, the different colors are absorbed, transmitted, scattered, or reflected in differing intensities by water molecules and other so - called optically - active constituents in suspension in the upper layer of the ocean. The reason that open ocean waters often appear blue is due to the absorption and scattering of light. The blue wavelengths of light are scattered, similar to the scattering of blue light in the sky, but absorption is a much larger factor than scattering for clear ocean water. In water, absorption is strong in the red and weak in the blue, so red light is absorbed quickly in the ocean, leaving blue. Almost all sunlight that enters the ocean is absorbed, except very close to the coast. The red, yellow, and green wavelengths of sunlight are absorbed by water molecules in the ocean. When sunlight hits the ocean, some of the light is reflected back directly, but most of it penetrates the ocean surface and interacts with the water molecules that it encounters. The red, orange, yellow, and green wavelengths of light are absorbed, and so the remaining light we see is composed of the shorter wavelength blues and violets. If there are any particles suspended in the water, they will increase the scattering of light. In coastal areas, runoff from rivers, resuspension of sand and silt from the bottom by tides, waves, and storms, and a number of other substances can change the color of the near - shore waters. Some types of particles can also contain substances that absorb certain wavelengths of light, which alters its characteristics. For example, microscopic marine algae, called phytoplankton, have the capacity to absorb light in the blue and red regions of the spectrum owing to specific pigments like chlorophyll. Accordingly, as the concentration of phytoplankton increases in the water, the color of the water shifts toward the green part of the spectrum. Fine mineral particles like sediment absorb light in the blue part of the spectrum, causing the water to turn brownish if there is a massive sediment load. The most important light - absorbing substance in the oceans is chlorophyll, which phytoplankton use to produce carbon by photosynthesis. Chlorophyll, a green pigment, makes phytoplankton preferentially absorb the red and blue portions of the light spectrum and reflect green light. Ocean regions with high concentrations of phytoplankton have shades of blue - green depending upon the type and density of the phytoplankton population there. The basic principle behind the remote sensing of ocean color from space is that the more phytoplankton is in the water, the greener it is. There are other substances that may be found dissolved in the water that can also absorb light. Since the substances are usually composed of organic carbon, researchers generally refer to them as colored dissolved organic matter.
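The wavelength - selective absorption described above follows the Beer -- Lambert law: the fraction of light surviving to depth z is exp(− a · z), where a is the absorption coefficient at that wavelength. A small Python sketch with rounded, illustrative absorption coefficients for pure water (approximate published values, quoted only to show the effect):

import math

# Fraction of surface light remaining at depth z, I(z)/I0 = exp(-a*z).
# Absorption coefficients a (per metre) are rounded, illustrative values
# for pure water, not measurements from this article.
ABSORPTION_PER_M = {"blue (450 nm)": 0.01, "green (550 nm)": 0.06, "red (650 nm)": 0.34}

def fraction_remaining(a_per_m, depth_m):
    return math.exp(-a_per_m * depth_m)

for color, a in ABSORPTION_PER_M.items():
    print("%s: %.1f%% of surface light left at 10 m" % (color, 100 * fraction_remaining(a, 10.0)))

With these numbers roughly 90 % of blue light but only about 3 % of red light survives to 10 m, which is exactly why deep, clear water looks blue.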
Ocean color radiometry is a technology, and a discipline of research, concerning the study of the interaction between the visible electromagnetic radiation coming from the sun and aquatic environments. In general, the term is used in the context of remote - sensing observations, often made from Earth - orbiting satellites. Using sensitive radiometers, such as those on - board satellite platforms, one can carefully measure the wide array of colors emerging out of the ocean. These measurements can be used to infer important information such as phytoplankton biomass or concentrations of other living and non-living material that modify the characteristics of the incoming radiation. Monitoring the spatial and temporal variability of algal blooms from satellite, over large marine regions up to the scale of the global ocean, has been instrumental in characterizing variability of marine ecosystems and is a key tool for research into how marine ecosystems respond to climate change and anthropogenic perturbations. Remote sensing of ocean colour from space began in 1978 with the successful launch of NASA 's Coastal Zone Color Scanner (CZCS). Despite the fact that CZCS was an experimental mission intended to last only one year, the sensor continued to generate a valuable time - series of data over selected test sites until early 1986. Ten years passed before other sources of ocean - colour data became available with the launch of other sensors, in particular the Sea - viewing Wide Field - of - view Sensor (SeaWiFS) in 1997 on board the NASA SeaStar satellite. Subsequent sensors have included NASA 's Moderate - resolution Imaging Spectroradiometer (MODIS) on board the Aqua and Terra satellites, and ESA 's MEdium Resolution Imaging Spectrometer (MERIS) on board its environmental satellite Envisat. Several new ocean - colour sensors have recently been launched, including the Indian Ocean Colour Monitor (OCM - 2) on - board ISRO 's Oceansat - 2 satellite and the Korean Geostationary Ocean Color Imager (GOCI), which is the first ocean colour sensor to be launched on a geostationary satellite, and the Visible Infrared Imager Radiometer Suite (VIIRS) aboard NASA 's Suomi NPP. More ocean colour sensors are planned over the next decade by various space agencies. Ocean Colour Radiometry and its derived products are also seen as fundamental Essential Climate Variables as defined by the Global Climate Observing System. Ocean colour datasets provide the only global synoptic perspective of primary production in the oceans, giving insight into the role of the world 's oceans in the global carbon cycle. Ocean colour data is a vital resource for a wide variety of operational forecasting and oceanographic research, earth sciences, and related applications, as well as in many of the Societal Benefit Areas identified by the Group on Earth Observations. One of the most common uses of such data is the retrieval of chlorophyll (and hence phytoplankton) concentration from the relative intensities of different wavebands, as sketched below.
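Empirical "band ratio '' algorithms of this kind model log10 (chlorophyll) as a polynomial in the log10 of a blue / green reflectance ratio. The Python sketch below shows only the general form; the coefficients are invented placeholders, not those of any operational product:

import math

# Generic band-ratio chlorophyll retrieval: log10(chl) modeled as a polynomial
# in x = log10(blue reflectance / green reflectance). The coefficients below
# are invented placeholders that only illustrate the form of such algorithms.
COEFFS = [0.3, -2.5, 1.0]  # hypothetical a0, a1, a2

def chlorophyll_mg_m3(blue_reflectance, green_reflectance):
    x = math.log10(blue_reflectance / green_reflectance)
    log_chl = sum(c * x ** i for i, c in enumerate(COEFFS))
    return 10.0 ** log_chl

# Greener water (lower blue/green ratio) yields a higher chlorophyll estimate:
print(chlorophyll_mg_m3(0.010, 0.005))  # blue-dominated, clear water
print(chlorophyll_mg_m3(0.004, 0.008))  # green-dominated, productive water

This captures the qualitative rule stated earlier: the greener the water relative to blue, the more phytoplankton the retrieval reports.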
who pitched joe dimaggio's first home run
1951 World Series - Wikipedia The 1951 World Series matched the two - time defending champion New York Yankees against the New York Giants, who had won the National League pennant in a thrilling three - game playoff with the Brooklyn Dodgers on the legendary home run by Bobby Thomson (the Shot Heard ' Round the World). In the Series, the Yankees showed some power of their own, including Gil McDougald 's grand slam home run in Game 5, at the Polo Grounds. The Yankees won the Series in six games, for their third straight title and 14th overall. This would be the last World Series for Joe DiMaggio, who retired afterward, and the first for rookies Willie Mays and Mickey Mantle. This was the last Subway Series the Giants played in. Both teams would meet again eleven years later, after the Giants relocated to San Francisco. They have not played a World Series together since. This was the first World Series announced by Bob Sheppard, who was in his first year as Yankee Stadium 's public address announcer. It was also the first World Series to be televised nationwide, as coaxial cable had recently linked both coasts. This World Series also matched up two of baseball 's most colorful managers, Casey Stengel of the Yankees and Leo Durocher of the Giants. This was the 13th appearance by the Giants in Series play, their ninth loss, and their first appearance since the 1937 World Series. "The Commerce Comet arrives on the final voyage of the Yankee Clipper. '' The 1951 World Series was the first for Mickey Mantle and the final for Joe DiMaggio. Mantle 's bad luck with injuries in the Major Leagues began here. In the fifth inning of Game 2 at Yankee Stadium, Mays flied to deep right center. DiMaggio and Mantle converged on the ball, DiMaggio called Mantle off, and Mantle stutter - stepped, catching a cleat in a drain cover, and fell to the ground in a heap with a wrenched knee as DiMaggio made the catch. Mantle was done for this Series, but would come back to play many more. New York City became the first city to host an NBA Finals and a World Series in the same calendar year. AL New York Yankees (4) vs. NL New York Giants (2) Monte Irvin 's daring baserunning got the Giants off to a fast start in this New York -- New York series. He singled in the first inning, sped to third on Whitey Lockman 's RBI single, then stole home off Yankee starter Allie Reynolds. The Yankees cut the Giants ' lead to 2 -- 1 in the second when Gil McDougald doubled with one out off of Dave Koslo and scored on Jerry Coleman 's single. The score remained that way until the sixth, when Alvin Dark 's three - run home run gave the Giants a commanding 5 -- 1 lead. Koslo pitched a complete game to give the Giants a 1 -- 0 series lead. The first three batters Larry Jansen faced were Mickey Mantle, Phil Rizzuto and Gil McDougald, all of whom singled for a quick 1 -- 0 Yankee lead. It could have been worse, but next batter Joe DiMaggio bounced into a 6 - 4 - 3 double play and Yogi Berra struck out. Next inning, Joe Collins 's home run extended the Yankees ' lead to 2 -- 0. Monte Irvin scored in the seventh, tagging and coming home on pinch - hitter Bill Rigney 's bases - loaded sacrifice fly, as the Giants got within 2 -- 1. But winning pitcher Eddie Lopat, who pitched a complete game, helped himself to an insurance run with an RBI single in the eighth after Bobby Brown hit a leadoff single and moved to second on a groundout off of George Spencer. The Yankees ' 3 -- 1 win tied the series as it shifted to the Polo Grounds.
The Giants struck first in Game 3 when Bobby Thomson hit a leadoff double and scored on Willie Mays 's single in the second, then a five - run fifth inning was the undoing of Yankee starter Vic Raschi. Eddie Stanky walked with one out, moved to third on an error, and scored on Al Dark 's single. After a Hank Thompson single, another error on Monte Irvin 's fielder 's choice allowed another run to score and put two on, then a Whitey Lockman three - run home run gave Giants starter Jim Hearn a comfortable 6 -- 0 lead. The Yankees scored a run in the eighth on a bases - loaded walk to Joe Collins, then in the ninth on Gene Woodling 's home run off of Sheldon Jones, who retired the next two batters to end the game and give the Giants a 2 -- 1 series lead. The Giants struck first in Game 4 when Al Dark doubled with one out in the first off of Allie Reynolds and scored on Monte Irvin 's single, but the Yankees tied the game in the second on Joe Collins 's RBI single with two on off of Sal Maglie. After a single and walk, Reynolds 's RBI single in the fourth put the Yankees up 2 -- 1. Joe DiMaggio 's first home run of the Series, following a Yogi Berra single in the fifth, extended their lead to 4 -- 1. In the seventh, reliever Sheldon Jones allowed a single and walk, then an error on a pickoff attempt allowed one run to score before Gil McDougald 's RBI single made it 6 -- 1 Yankees. Reynolds allowed a one - out RBI single to Bobby Thomson in the ninth before getting Willie Mays to hit into the game - ending double play as the Yankees tied the series with a 6 -- 2 win. The Giants struck first in Game 5 when Al Dark singled with one out in the first and scored on Monte Irvin 's single, aided by left fielder Gene Woodling 's error, but starter Eddie Lopat kept them scoreless for the rest of the game while the Yankees hammered Larry Jansen, Monty Kennedy and George Spencer. After two one - out walks in the third, Joe DiMaggio 's RBI single tied the game, then, after an intentional walk loaded the bases, Gil McDougald 's grand slam off of Jansen put the Yankees up 5 -- 1. Next inning, Phil Rizzuto 's home run off of Kennedy after a walk extended their lead to 7 -- 1. In the sixth, Rizzuto singled off of Spencer before Yogi Berra 's single and Johnny Mize 's double scored a run each, making it 9 -- 1 Yankees. In the seventh, a bases - loaded walk to Rizzuto forced in a run, then Al Corwin threw a wild pitch that let another run score before DiMaggio 's two - run double capped the game 's scoring at 13 -- 1 Yankees, who were a win away from the World Series championship as the series returned to Yankee Stadium. The Yankees struck first in Game 6 on Gil McDougald 's bases - loaded sacrifice fly in the first off of Dave Koslo. The Giants tied the game in the fifth off of Vic Raschi when Willie Mays hit a leadoff single, moved two bases on a wild pitch and a sacrifice, and scored on Eddie Stanky 's sacrifice fly. Playing right field in place of Mickey Mantle, Hank Bauer benefited from a tricky Yankee Stadium wind -- as well as the umpire 's generous call of a ball on Dave Koslo 's two - strike pitch -- to belt a bases - loaded triple in the sixth inning that would be the difference. Bauer also ensured that the lead held up. Trailing 4 -- 1 in the ninth, the Giants loaded the bases with no outs on three singles off of Johnny Sain. Enter reliever Bob Kuzava, acquired in June from the Washington Senators. After two sacrifice flies and the score now 4 -- 3, pinch hitter Sal Yvars hit a sinking liner to right.
The stadium crowd gasped as Bauer momentarily lost the ball in the crowd 's white shirts and the shadows. But he relocated it and charged forward. Bauer, who played in nine World Series and always came through when it mattered most, slid on his knees to catch the ball inches off the ground to end the game and the 1951 World Series. Game 6 was the last baseball game ever played by Joe DiMaggio. 1951 World Series (4 -- 2): New York Yankees (A.L.) over New York Giants (N.L.)
where was roman catholicism concentrated in europe in 1590
Catholic Church in Europe - Wikipedia The Catholic Church in Europe, also known as the Roman Catholic Church in Europe, is part of the worldwide Catholic Church in full communion with the Holy See in Rome, including represented Eastern Catholic missions. Demographically, Catholics are the largest religious group in Europe. About 35 % of the population of Europe today is Catholic, but only about a quarter of all Catholics worldwide reside in Europe. This is due in part to the movement and immigration at various times of largely Catholic European ethnic groups (such as the Irish, Italians, Poles, Portuguese, and Spaniards) to continents such as the Americas and Australia. Furthermore, Catholicism has been spread outside Europe through both historical Catholic missionary activity, especially in Latin America, and the past colonization and conversion of native people by Catholic European countries, specifically the Spanish, Portuguese, French and Belgian colonial empires, in regions such as South America, the Caribbean, Central Africa and West Africa, and Southeast Asia. As the Vatican State is a theocracy, it cannot become a member of the European Union. However, the Holy See traditionally has very strong ties with Italy, the country surrounding Vatican City, and also with the European Union. Since 1970 the European Union has accredited an official representative of the Holy See (an Apostolic Nuncio) to the EU. Even though the Vatican City is not an official member of the European Union, it has adopted the Euro as its currency and has open borders with the Schengen Area. In 2016 Pope Francis was awarded the Charlemagne Prize. During his speech of thanks Pope Francis criticized a "crisis of solidarity '' in Europe and condemned "national self - interest, renationalization and particularism ''. The Council of the Bishops ' Conferences of Europe (Latin: Consilium Conferentiarum Episcoporum Europae) (CCEE) is a conference of the presidents of the 33 Roman Catholic episcopal conferences of Europe, the Archbishop of Luxembourg, the Archbishop of Monaco, the Maronite Catholic Archeparch of Cyprus, the Roman Catholic Bishop of Chişinău, the Ruthenian Catholic Eparch of Mukacheve, and the Apostolic Administrator of Estonia. The CCEE Secretariat is located in St. Gallen, Switzerland. The Commission of the Bishops ' Conferences of the European Community (Latin: Commissio Episcopatuum Communitatis Europaeae; COMECE) is the association of Catholic Church episcopal conferences in member states of the European Union (EU) which officially represents those episcopal conferences at EU institutions. COMECE bishops are delegated by Catholic episcopal conferences in EU member states, and it has a permanent Secretariat in Brussels, Belgium. It was established in 1980 and replaced the European Catholic Pastoral Information Service (SIPECA, 1976 -- 1980). Discussions during the 1970s about creating an episcopal conferences ' liaison organization to the European Community led to the decision, on the eve of the 1979 European Parliament election, to establish COMECE. Fimcap Europe (International Federation of Catholic Parochial Youth Movements): Fimcap is an umbrella organization for Catholic youth organizations, especially for youth organizations which are based at parish level. (See also: Fimcap Europe) MIJARC Europe (International Movement of Catholic Agricultural and Rural Youth): MIJARC Europe is a platform representing the Catholic, agricultural and rural youth movements in Europe.
CIDSE (International Cooperation for Development and Solidarity): CIDSE is an umbrella organization for Catholic development agencies from Europe and North America. The World Movement of Christian Workers consists of Catholic working men and women. According to Catholic tradition, Saint Peter, one of the Twelve Apostles of Jesus Christ and leader of the early church, was crucified and buried in Rome under Emperor Nero. Saint Peter 's Basilica was built on the spot traditionally held to be his burial site. Rome is also the residence of the Pope, the leader of the Catholic Church, who is at the same time the Bishop of Rome. To this day the Pope rules over an ecclesiastical state, the Vatican City, which encompasses 44 hectares of the city area. Rome also hosts the papal major basilicas. Besides Saint Peter 's Basilica there are three other major basilicas: the Archbasilica of Saint John Lateran, the Basilica of Saint Paul Outside the Walls and the Basilica di Santa Maria Maggiore. One of the most important and famous pilgrimage sites for the Catholic Church is Santiago de Compostela in Galicia, Spain. The cathedral of the city hosts the shrine of Saint James, one of the Twelve Apostles of Jesus, traditionally considered the first apostle to be martyred. Santiago de Compostela is the final destination of the Way of Saint James (Galician: O Camiño de Santiago). Assisi, a town in the Umbria region of Italy, hosts two more papal basilicas: the Basilica of San Francesco d'Assisi and the Basilica of Santa Maria degli Angeli. The Basilica of San Francesco d'Assisi is the mother church of the Order of Friars Minor, commonly known as the "Franciscan Order ''. Assisi is the town in which the founder of the order, Saint Francis, was born and died.
ronnie james dio songs rainbow in the dark
Rainbow in the Dark - Wikipedia "Rainbow in the Dark '' was the second single released by the heavy metal band Dio, taken from their platinum - selling 1983 debut album, Holy Diver. Assisted by a popular MTV music video, it reached # 12 on Billboard 's Album Rock Tracks chart. It was ranked number 13 on VH1 's "Top 40 Greatest Metal Songs ''. The song was covered by Corey Taylor, with support from Roy Mayorga, Satchel, Christian Martucci and Jason Christopher, for the tribute album Ronnie James Dio: This Is Your Life. The music video was filmed close to the time of the original studio recording of the song. The video is set in central London, with scenes alternating between shots of a man following a woman through a street lined with pornographic cinemas and Ronnie James Dio singing from a rooftop. The implied malicious intent of the man is made evident during the guitar solo, when guitarist Vivian Campbell (and later bassist Jimmy Bain) appear and effectively "scare '' him away after he follows the woman into a moviehouse, with the woman kissing Campbell on the cheek in apparent gratitude. The video includes shots of tourist attractions such as Nelson 's Column, Westminster Bridge, Piccadilly Circus, and Soho.
it s the end of the world as we know it lyrics
It 's the End of the World as We Know It (and I Feel Fine) - Wikipedia "It 's the End of the World as We Know It (And I Feel Fine) '' is a song by American rock band R.E.M., which first appeared on their 1987 album Document. It was released as a single in November 1987, reaching No. 69 on the US Billboard Hot 100 and later reaching No. 39 on the UK Singles Chart on its re-release in December 1991. The song originated from a previously unreleased song called "PSA '' ("Public Service Announcement ''); the two are very similar in melody and tempo. "PSA '' was itself later reworked and released as a single in 2003, under the title "Bad Day ''. In an interview with Guitar World magazine published in November 1996, R.E.M. guitarist Peter Buck agreed that "End of the World '' was in the tradition of Bob Dylan 's "Subterranean Homesick Blues ''. The track is known for its quick - flying, seemingly stream - of - consciousness rant with a number of diverse references, such as a quartet of individuals with the initials "L.B. '': Leonard Bernstein, Leonid Brezhnev, Lenny Bruce, and Lester Bangs. In a 1990s interview with Musician magazine, R.E.M. 's lead singer Michael Stipe claimed that the "L.B. '' references came from a dream he had in which he found himself at a party surrounded by famous people who all shared those initials. "The words come from everywhere, '' Stipe explained to Q Magazine in 1992. "I 'm extremely aware of everything around me, whether I am in a sleeping state, awake, dream - state or just in day to day life, so that ended up in the song along with a lot of stuff I 'd seen when I was flipping TV channels. It 's a collection of streams of consciousness. '' The song was included on the 2001 Clear Channel memorandum of songs thought to be "lyrically questionable '' after the September 11 terrorist attacks. The song was played repeatedly for a 24 - hour period (with brief promos interspersed) to introduce the new format for WENZ 107.9 FM "The End '', a radio station in Cleveland, Ohio, in 1992. When the station underwent another format change in 1996, they again played the song in a 24 - hour loop. There was a documentary film made about the station, entitled The End of the World As We Knew It, released in 2009, which featured many of the former staffers and jocks. The song was featured in several satirical videos on YouTube in connection with the prediction of radio pastor Harold Camping of Family Radio that the world would end on May 21, 2011. Also, before the supposed Mayan apocalypse on December 21, 2012, sales for the song jumped from 3,000 to 19,000 copies for the week. Alternative radio station CFEX - FM in Calgary, Canada stunted by playing the song all day on December 21, 2012, interspersed with "Get to Know a Mayan '' and "Apocalypse Survival Tips '' segments. The music video was directed by James Herbert, who worked with the band on several other videos in the late 1980s. It depicts a young skateboarder, Noah Ray, in a cluttered room of an abandoned, half - collapsed farmhouse. As he rummages through the junk, which includes several band pictures and flyers, he shows off various toys and items to the camera and plays with a dog that wanders into the house. As the video ends, he goes shirtless and starts performing skateboard tricks while still inside the room. R.E.M.
top speed of an f 18 super hornet
McDonnell Douglas F / A-18 Hornet - Wikipedia The McDonnell Douglas F / A-18 Hornet is a twin - engine, supersonic, all - weather, carrier - capable multirole combat jet, designed as both a fighter and attack aircraft (hence the F / A designation). Designed by McDonnell Douglas (now Boeing) and Northrop, the F / A-18 was derived from the latter 's YF - 17 in the 1970s for use by the United States Navy and Marine Corps. The Hornet is also used by the air forces of several other nations and, since 1986, by the U.S. Navy 's Flight Demonstration Squadron, the Blue Angels. The F / A-18 has a top speed of Mach 1.8 (1,034 knots, 1,190 mph or 1,915 km / h at 40,000 ft or 12,200 m). It can carry a wide variety of bombs and missiles, including air - to - air and air - to - ground, supplemented by the 20 - mm M61 Vulcan cannon. It is powered by two General Electric F404 turbofan engines, which give the aircraft a high thrust - to - weight ratio. The F / A-18 has excellent aerodynamic characteristics, primarily attributed to its leading - edge extensions. The fighter 's primary missions are fighter escort, fleet air defense, Suppression of Enemy Air Defenses (SEAD), air interdiction, close air support, and aerial reconnaissance. Its versatility and reliability have proven it to be a valuable carrier asset, though it has been criticized for its lack of range and payload compared to its earlier contemporaries, such as the Grumman F - 14 Tomcat in the fighter and strike fighter role, and the Grumman A-6 Intruder and LTV A-7 Corsair II in the attack role. The Hornet first saw combat action during the 1986 United States bombing of Libya and subsequently participated in the 1991 Gulf War and 2003 Iraq War. The F / A-18 Hornet provided the baseline design for the Boeing F / A-18E / F Super Hornet, its larger, evolutionary redesign. The U.S. Navy started the Naval Fighter - Attack, Experimental (VFAX) program to procure a multirole aircraft to replace the Douglas A-4 Skyhawk, the A-7 Corsair II, and the remaining McDonnell Douglas F - 4 Phantom IIs, and to complement the F - 14 Tomcat. Vice Admiral Kent Lee, then head of Naval Air Systems Command (NAVAIR), was the lead advocate for the VFAX against strong opposition from many Navy officers, including Vice Admiral William D. Houser, deputy chief of naval operations for air warfare -- the highest ranking naval aviator. In August 1973, Congress mandated that the Navy pursue a lower - cost alternative to the F - 14. Grumman proposed a stripped F - 14 designated the F - 14X, while McDonnell Douglas proposed a naval variant of the F - 15, but both were nearly as expensive as the F - 14. That summer, Secretary of Defense Schlesinger ordered the Navy to evaluate the competitors in the Air Force 's Lightweight Fighter (LWF) program, the General Dynamics YF - 16 and Northrop YF - 17. The Air Force competition specified a day fighter with no strike capability. In May 1974, the House Armed Services Committee redirected $34 million from the VFAX to a new program, the Navy Air Combat Fighter (NACF), intended to make maximum use of the technology developed for the LWF program. Though the YF - 16 won the LWF competition, the Navy was skeptical that an aircraft with one engine and narrow landing gear could be easily or economically adapted to carrier service, and refused to adopt an F - 16 derivative. On 2 May 1975, the Navy announced its selection of the YF - 17.
Since the LWF did not share the design requirements of the VFAX, the Navy asked McDonnell Douglas and Northrop to develop a new aircraft from the design and principles of the YF - 17. On 1 March 1977, Secretary of the Navy W. Graham Claytor announced that the F - 18 would be named "Hornet ''. Northrop had partnered with McDonnell Douglas as a secondary contractor on the NACF to capitalize on the latter 's experience in building carrier aircraft, including the widely used F - 4 Phantom II. On the F - 18, the two companies agreed to evenly split component manufacturing, with McDonnell Douglas conducting final assembly. McDonnell Douglas would build the wings, stabilators, and forward fuselage, while Northrop would build the center and aft fuselage and vertical stabilizers. McDonnell Douglas was the prime contractor for the naval versions, and Northrop would be the prime contractor for the F - 18L land - based version, which Northrop hoped to sell on the export market. The F - 18, initially known as McDonnell Douglas Model 267, was drastically modified from the YF - 17. For carrier operations, the airframe, undercarriage, and tailhook were strengthened, folding wings and catapult attachments were added, and the landing gear was widened. To meet Navy range and reserves requirements, McDonnell increased fuel capacity by 4,460 pounds (2,020 kg), by enlarging the dorsal spine and adding a 96 - gallon fuel tank to each wing. A "snag '' was added to the leading edge of the wings and stabilators to prevent an aeroelastic flutter discovered in the F - 15 stabilator. The wings and stabilators were enlarged, the aft fuselage widened by 4 inches (102 mm), and the engines canted outward at the front. These changes added 10,000 lb (4,540 kg) to the gross weight, bringing it to 37,000 lb (16,800 kg). The YF - 17 's control system was replaced with a fully digital fly - by - wire system with quadruple - redundancy, the first to be installed in a production fighter. Originally, it was planned to acquire a total of 780 aircraft of three variants: the single seat F - 18A fighter and A-18A attack aircraft, differing only in avionics; and the dual - seat TF - 18A, which retained the full mission capability of the F - 18 with a reduced fuel load. Following improvements in avionics and multifunction displays, and a redesign of external stores stations, the A-18A and F - 18A were able to be combined into one aircraft. Starting in 1980, the aircraft began to be referred to as the F / A-18A, and the designation was officially announced on 1 April 1984. The TF - 18A was redesignated F / A-18B. Northrop developed the F - 18L as a potential export aircraft. Since it was not strengthened for carrier service, it was expected to be lighter and better performing, and a strong competitor to the F - 16 Fighting Falcon then being offered to American allies. The F - 18L 's normal gross weight was lighter than the F / A-18A 's by 7,700 pounds (3,490 kg), via lighter landing gear, lack of a wing folding mechanism, reduced part thickness in areas, and lower fuel - carrying capacity. Though the aircraft retained a lightened tailhook, the most obvious external difference was the removed "snags '' on the leading edge of the wings and stabilators. It still retained 71 % commonality with the F / A-18 by parts weight, and 90 % of the high - value systems, including the avionics, radar, and electronic countermeasure suite, though alternatives were offered. Unlike the F / A-18, the F - 18L carried no fuel in its wings and lacked weapons stations on the intakes.
It had three underwing pylons on each side instead. The F / A-18L version followed to coincide with the US Navy 's F / A-18A as a land - based export alternative. This was essentially an F / A-18A lightened by approximately 2,500 to 3,000 pounds (1,130 to 1,360 kg); weight was reduced by removing the folding wing and associated actuators, implementing a simpler landing gear (single wheel nose gear and cantilever oleo main gear), and changing to a land - based tail - hook. The revised F / A-18L included the wing fuel tanks and fuselage stations of the F / A-18A. Its weapons capacity would increase from 13,700 to 20,000 pounds (6,210 to 9,070 kg), largely due to the addition of a third underwing pylon and strengthened wingtips (11 stations in total vs 9 stations of the F / A-18A). Compared to the F - 18L, the outboard weapons pylons are moved closer to the wingtip missile rails. Because of the strengthened non-folding wing, the wingtip missile rails were designed to carry either the AIM - 7 Sparrow or Skyflash medium - range air - to - air missiles, in addition to the AIM - 9 Sidewinder as found on the F / A-18A. The F / A-18L was strengthened for a 9 g design load factor compared to the F / A-18A 's 7.5 g factor. The partnership between McDonnell Douglas and Northrop soured over competition for foreign sales of the two models. Northrop felt that McDonnell Douglas would put the F / A-18 in direct competition with the F - 18L. In October 1979, Northrop filed a series of lawsuits charging that McDonnell was using Northrop technology developed for the F - 18L for foreign sales of the F / A-18 in violation of their agreement, and asked for a moratorium on foreign sales of the Hornet. McDonnell Douglas counter-sued, alleging Northrop illegally used F / A-18 technology in its F - 20 Tigershark. A settlement was announced on 8 April 1985 for all of the lawsuits. McDonnell Douglas paid Northrop $50 million for "rights to sell the F / A-18 wherever it could ''. Additionally, the companies agreed on McDonnell Douglas as the prime contractor with Northrop as the principal sub-contractor. As principal sub-contractor, Northrop would produce the rear section for the F / A-18 (A / B / C / D / E / F) while McDonnell Douglas would produce the rest and perform final assembly. At the time of the settlement Northrop had ceased work on the F - 18L. Most export orders for the F - 18L were captured by the F - 16 or the F / A-18. The Northrop F - 20 Tigershark did not enter production, and although the program was not officially terminated until 17 November 1986, it was dead by mid-1985. During flight testing, the snag on the leading edge of the stabilators was filled in, and the gap between the leading - edge extensions (LEX) and the fuselage was mostly filled in. The gaps, called the boundary layer air discharge (BLAD) slots, controlled the vortices generated by the LEX and presented clean air to the vertical stabilizers at high angles of attack, but they also generated a great deal of parasitic drag, worsening the problem of the F / A-18 's inadequate range. McDonnell filled in 80 % of the gap, leaving a small slot to bleed air from the engine intake. This may have contributed to early problems with fatigue cracks appearing on the vertical stabilizers due to extreme structural loads, resulting in a short grounding in 1984 until the stabilizers were strengthened.
Starting in May 1988, a small vertical fence was added to the top of each LEX to broaden the vortices and direct them away from the vertical stabilizers. This also provided a minor increase in controllability as a side effect. Early F / A-18s had a problem with insufficient roll rate, exacerbated by inadequate wing stiffness, especially with heavy underwing ordnance loads. The first production F / A-18A flew on 12 April 1980. After a production run of 380 F / A-18As (including the nine assigned to flight systems development), manufacture shifted to the F / A-18C in September 1987. In the 1990s the U.S. Navy faced the need to replace its aging A-6 Intruders and A-7 Corsair IIs with no replacement in development. To answer this deficiency, the Navy commissioned development of the F / A-18E / F Super Hornet. Despite its designation, it is not just an upgrade of the F / A-18 Hornet, but rather a new, larger airframe using the design concepts of the Hornet. Hornets and Super Hornets will serve complementary roles in the U.S. Navy carrier fleet until the Hornet A-D models are completely replaced by the F - 35C Lightning II. The Marines have chosen to extend the use of certain F / A-18s up to 10,000 flight hours, due to delays in the F - 35B variant. The F / A-18 is a twin engine, mid wing, multi-mission tactical aircraft. It is highly maneuverable, owing to its good thrust to weight ratio, digital fly - by - wire control system, and leading - edge extensions (LEX). The LEX allow the Hornet to remain controllable at high angles of attack. The trapezoidal wing has a 20 - degree sweepback on the leading edge and a straight trailing edge. The wing has full - span leading - edge flaps and the trailing edge has single - slotted flaps and ailerons over the entire span. Canted vertical stabilizers are another distinguishing design element, one among several other such elements that enable the Hornet 's excellent high angle of attack ability, including oversized horizontal stabilators, oversized trailing edge flaps that operate as flaperons, large full - length leading - edge slats and flight control computer programming that multiplies the movement of each control surface at low speeds and moves the vertical rudders inboard instead of simply left and right. The Hornet 's normally high angle of attack performance envelope was put to rigorous testing and enhanced in the NASA F - 18 High Alpha Research Vehicle (HARV). NASA used the F - 18 HARV to demonstrate flight handling characteristics at high angles - of - attack (alpha) of 65 -- 70 degrees using thrust vectoring vanes. F / A-18 stabilators were also used as canards on NASA 's F - 15S / MTD. The Hornet was among the first aircraft to heavily use multi-function displays, which at the switch of a button allow a pilot to perform either fighter or attack roles or both. This "force multiplier '' ability gives the operational commander more flexibility to employ tactical aircraft in a fast - changing battle scenario. It was the first Navy aircraft to incorporate a digital multiplexing avionics bus, enabling easy upgrades. The Hornet is also notable for having been designed to reduce maintenance, and as a result has required far less downtime than its heavier counterparts, the F - 14 Tomcat and the A-6 Intruder. Its mean time between failures is three times greater than that of any other Navy strike aircraft, and it requires half the maintenance time.
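As a back - of - the - envelope check on the thrust - to - weight claim and the Mach 1.8 top speed quoted at the start of this article: the sketch below assumes a round figure of roughly 16,000 lbf per F404 in afterburner (an assumption; the article does not give engine thrust) against the 37,000 lb gross weight quoted earlier, and the standard - atmosphere speed of sound near 40,000 ft (about 295 m / s):

KNOTS_PER_MPS = 1.9438

# Assumed round number: ~16,000 lbf per F404 in afterburner (two engines),
# set against the ~37,000 lb gross weight quoted earlier in the article.
def thrust_to_weight(total_thrust_lbf, weight_lb):
    return total_thrust_lbf / weight_lb

# Standard-atmosphere speed of sound near 40,000 ft is roughly 295 m/s.
def mach_to_knots(mach, speed_of_sound_mps=295.0):
    return mach * speed_of_sound_mps * KNOTS_PER_MPS

print("T/W at gross weight: %.2f" % thrust_to_weight(2 * 16000, 37000))
print("Mach 1.8 at 40,000 ft: %.0f knots" % mach_to_knots(1.8))

Under these assumptions the conversion gives about 1,032 knots, consistent with the 1,034 - knot figure quoted earlier.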
Its General Electric F404 engines were also innovative in that they were designed with operability, reliability and maintainability first. The engine, while unexceptional in rated performance, demonstrates exceptional robustness under various conditions and is resistant to stall and flameout. The F404 engine connects to the airframe at only 10 points and can be replaced without special equipment; a four - person team can remove the engine within 20 minutes. The engine air inlets of the Hornet, like those of the F - 16, are of a simpler "fixed '' design, while those of the F - 4, F - 14, and F - 15 have variable geometry or variable intake ramp air inlets. This is a speed limiting factor in the Hornet design. Instead, the Hornet uses bleed air vents on the inboard surface of the engine air intake ducts to slow and reduce the amount of air reaching the engine. While not as effective as variable geometry, the bleed air technique functions well enough to achieve speeds near Mach 2, which is within the designed mission requirements. A 1989 USMC study found that single - seat fighters were well suited to air - to - air combat missions while dual - seat fighters were favored for complex strike missions against heavy air and ground defenses in adverse weather -- the question being not so much whether a second pair of eyes would be useful, but whether the second crewman should sit in the same fighter or in a second fighter. Single - seat fighters that lacked wingmen were shown to be especially vulnerable. The F / A-18 provides automatic alerts via audio messages to the pilot. McDonnell Douglas rolled out the first F / A-18A on 13 September 1978, in blue - on - white colors marked with "Navy '' on the left and "Marines '' on the right. Its first flight was on 18 November. In a break with tradition, the Navy pioneered the "principal site concept '' with the F / A-18, where almost all testing was done at Naval Air Station Patuxent River, instead of near the site of manufacture, and using Navy and Marine Corps test pilots instead of civilians early in development. In March 1979, Lt. Cdr. John Padgett became the first Navy pilot to fly the F / A-18. Following trials and operational testing by VX - 4 and VX - 5, Hornets began to fill the Fleet Replacement Squadrons (FRS) VFA - 125, VFA - 106, and VMFAT - 101, where pilots are introduced to the F / A-18. The Hornet entered operational service with Marine Corps squadron VMFA - 314 at MCAS El Toro on 7 January 1983, and with Navy squadron VFA - 25 in March 1984, replacing F - 4s and A-7Es, respectively. Navy strike - fighter squadrons VFA - 25 and VFA - 113 (assigned to CVW - 14) deployed aboard USS Constellation from February to August 1985, marking the first deployment for the F / A-18. The initial fleet reports were complimentary, indicating that the Hornet was extraordinarily reliable, a major change from its predecessor, the F - 4J. Other squadrons that switched to the F / A-18 were VFA - 146 "Blue Diamonds '' and VFA - 147 "Argonauts ''. In January 1985, the VFA - 131 "Wildcats '' and the VFA - 132 "Privateers '' moved from Naval Air Station Lemoore, California to Naval Air Station Cecil Field, Florida, and became the Atlantic Fleet 's first F / A-18 squadrons. The US Navy 's Blue Angels Flight Demonstration Squadron switched to the F / A-18 Hornet in 1986, when it replaced the A-4 Skyhawk. The Blue Angels perform in F / A-18A, B, C, and D models at air shows and other special events across the US and worldwide.
Blue Angels pilots must have 1,400 hours and an aircraft carrier certification. The two - seat B and D models are typically used to give rides to VIPs, but can also fill in for other aircraft in the squadron in a normal show, if the need arises. NASA operates several F / A-18 aircraft for research purposes and also as chase aircraft; these F / A-18s are based at the Armstrong Flight Research Center (formerly the Dryden Flight Research Center) in California. On 21 September 2012, two NASA F / A-18s escorted a NASA Boeing 747 Shuttle Carrier Aircraft (SCA) carrying the Space Shuttle Endeavour over portions of California to Los Angeles International Airport before it was delivered to the California Science Center museum in Los Angeles. The F / A-18 first saw combat action in April 1986, when VFA - 131, VFA - 132, VMFA - 314, and VMFA - 323 Hornets from USS Coral Sea flew SEAD missions against Libyan air defenses during Operation Prairie Fire and an attack on Benghazi as part of Operation El Dorado Canyon. During the Gulf War of 1991, the Navy deployed 106 F / A-18A / C Hornets and the Marine Corps deployed 84 F / A-18A / C / D Hornets. F / A-18 pilots were credited with two kills during the Gulf War, both MiG - 21s. On 17 January, the first day of the war, U.S. Navy pilots Lieutenant Commander Mark I. Fox and his wingman, Lieutenant Nick Mongilio, were sent from USS Saratoga in the Red Sea to bomb an airfield in southwestern Iraq. While en route, they were warned by an E-2C of approaching MiG - 21 aircraft. The Hornets shot down the two MiGs with AIM - 7 and AIM - 9 missiles in a brief dogfight. The F / A-18s, each carrying four 2,000 lb (910 kg) bombs, then resumed their bombing run before returning to Saratoga. The Hornet 's survivability was demonstrated when a Hornet took hits in both engines and flew 125 mi (201 km) back to base. It was repaired and flying within a few days. F / A-18s flew 4,551 sorties with 10 Hornets damaged, including three losses, one confirmed lost to enemy fire. All three losses were U.S. Navy F / A-18s, with two of their pilots lost. On 17 January 1991, Lieutenant Commander Scott Speicher of VFA - 81 was shot down and killed in the crash of his aircraft. An unclassified summary of a 2001 CIA report suggests that Speicher 's aircraft was shot down by a missile fired from an Iraqi Air Force aircraft, most likely a MiG - 25. On 24 January 1991, F / A-18A serial number 163121, from USS Theodore Roosevelt, piloted by Lt H.E. Overs, was lost due to an engine failure or loss of control over the Persian Gulf. The pilot ejected and was recovered by USS Wisconsin. On 5 February 1991, F / A-18A serial number 163096, piloted by Lieutenant Robert Dwyer, was lost over the North Persian Gulf after a successful mission to Iraq; he was officially listed as killed in action, body not recovered. As the A-6 Intruder was retired in the 1990s, its role was filled by the F / A-18. The F / A-18 demonstrated its versatility and reliability during Operation Desert Storm, shooting down enemy fighters and subsequently bombing enemy targets with the same aircraft on the same mission. It broke records for tactical aircraft in availability, reliability, and maintainability. Both U.S. Navy F / A-18A / C models and Marine F / A-18A / C / D models were used continuously in Operation Southern Watch and over Bosnia and Kosovo in the 1990s. U.S. Navy Hornets flew during Operation Enduring Freedom in 2001 from carriers operating in the North Arabian Sea.
Both the F / A-18A / C and newer F / A-18E / F variants were used during Operation Iraqi Freedom in 2003, operating from aircraft carriers as well as from an air base in Kuwait. Later in the conflict, USMC A+, C, and primarily D models operated from bases within Iraq. An F / A-18C was accidentally downed by a Patriot missile in a friendly fire incident; the aircraft crashed as its pilot tried to evade two missiles fired at it. Two others collided over Iraq in May 2005. In January 2007, two Navy F / A-18E / F Super Hornets collided in midair and crashed in the Persian Gulf. The USMC plans to use the F / A-18 until the early 2030s. The F / A-18 has been purchased and is in operation with several foreign air services. Export Hornets are typically similar to U.S. models of a similar manufacture date. Since none of the customers operate aircraft carriers, all export models have been sold without the automatic carrier landing system, and the Royal Australian Air Force further removed the catapult attachment on the nose gear. Except for Canada, all export customers purchased their Hornets through the U.S. Navy, via the U.S. Foreign Military Sales (FMS) Program, where the Navy acts as the purchasing manager but incurs no financial gain or loss. Canada is the largest Hornet operator outside of the U.S. The Royal Australian Air Force purchased 57 F / A-18A fighters and 18 F / A-18B two - seat trainers to replace its Dassault Mirage IIIOs. Numerous options were considered for the replacement, notably the F - 15A Eagle, the F - 16 Falcon, and the then new F / A-18 Hornet. The F - 15 was discounted because the version offered had no ground - attack capability. The F - 16 was considered unsuitable largely due to having only one engine. Australia selected the F / A-18 in October 1981. Original differences between the Australian aircraft and the US Navy 's standard F / A-18 were the removal of the nose wheel tie bar for catapult launch (later re-fitted with a dummy version to eliminate nose wheel shimmy), the addition of a high frequency radio, an Australian fatigue data analysis system, an improved video and voice recorder, and the use of ILS / VOR (Instrument Landing System / Very High Frequency Omnidirectional Range) instead of the carrier landing system. The first two aircraft were produced in the US, with the remainder assembled in Australia at Government Aircraft Factories. F / A-18 deliveries to the RAAF began on 29 October 1984, and continued until May 1990. In 2001, Australia deployed four aircraft to Diego Garcia, in an air defense role, during coalition operations against the Taliban in Afghanistan. In 2003, 75 Squadron deployed 14 F / A-18s to Qatar as part of Operation Falconer, and these aircraft saw action during the invasion of Iraq. Australia had 71 Hornets in service in 2006, after four were lost to crashes. The fleet was upgraded beginning in the late 1990s to extend their service lives to 2015. They were expected to be retired then and replaced by the F - 35 Lightning II. Several of the Australian Hornets have had refits applied to extend their service lives until the planned retirement date of 2020. In addition to the F / A-18A and F / A-18B Hornets, Australia has purchased 24 F / A-18F Super Hornets, with deliveries beginning in 2009. In March 2015 six F / A-18As from No. 75 Squadron were deployed to the Middle East as part of Operation Okra, replacing a detachment of Super Hornets.
Canada was the first export customer for the Hornet, replacing the CF - 104 Starfighter (air reconnaissance and strike), the McDonnell CF - 101 Voodoo (air interception) and the CF - 116 Freedom Fighter (ground attack). The Canadian Forces Air Command ordered 98 A models (Canadian designation CF - 188A / CF - 18A) and 40 B models (designation CF - 188B / CF - 18B). The original CF - 18 as delivered is largely identical to the F / A-18A and B models. In 1991, Canada committed 26 CF - 18s to the Gulf War, based in Qatar. These aircraft primarily provided Combat Air Patrol duties, although, late in the air war, they began to perform air strikes on Iraqi ground targets. On 30 January 1991, two CF - 18s on CAP detected and attacked an Iraqi TNC - 45 patrol boat. The vessel was repeatedly strafed and damaged by 20 mm cannon fire, but an attempt to sink the ship with an air - to - air missile failed. The ship was subsequently sunk by American aircraft, but the Canadian CF - 18s received partial credit for its destruction. In June 1999, 18 CF - 18s were deployed to Aviano AB, Italy, where they participated in both the air - to - ground and air - to - air roles in the former Yugoslavia. 62 CF - 18A and 18 CF - 18B aircraft took part in the Incremental Modernization Project, which was completed in two phases. The program was launched in 2001 and the last updated aircraft was delivered in March 2010. The aims were to improve air - to - air and air - to - ground combat abilities, upgrade sensors and the defensive suite, and replace the datalinks and communications systems on board the CF - 18, bringing the aircraft from the F / A-18A and F / A-18B standard to the current F / A-18C and F / A-18D standard. In July 2010 the Canadian government announced plans to replace the remaining CF - 18 fleet with 65 F - 35 Lightning IIs, with deliveries scheduled to start in 2016. In November 2016, Canada announced plans to buy 18 Super Hornets as an interim solution while reviewing its F - 35 order. The plan for Super Hornets was later, in October 2017, put on hold due to a trade conflict with the United States over the Bombardier C - Series. Instead, Canada is seeking to purchase surplus Hornets from Australia or Kuwait. The Finnish Air Force (Ilmavoimat) ordered 64 F - 18C / Ds (57 C models, seven D models). All F - 18Ds were built at St. Louis, and all F - 18Cs were assembled in Finland. Deliveries of the aircraft ran from November 1995 until August 2000. The Hornet replaced the MiG - 21bis and Saab 35 Draken in Finnish service. The Finnish Hornets were initially to be used only for air defense, hence the F - 18 designation. The F - 18C includes the ASPJ (Airborne Self - Protection Jammer) jamming pod ALQ - 165. The US Navy later included the ALQ - 165 on their F / A-18E / F Super Hornet procurement. One fighter was destroyed in a mid-air collision in 2001. A damaged F - 18C, nicknamed "Frankenhornet '', was rebuilt into an F - 18D using the forward section of a Canadian CF - 18B that was purchased. The modified fighter crashed during a test flight in January 2010, due to a faulty tailplane servo cylinder. Finland is upgrading its fleet of F - 18s with new avionics, including helmet mounted sights (HMS), new cockpit displays, sensors and a standard NATO data link. Several of the remaining Hornets are going to be fitted to carry air - to - ground ordnance such as the AGM - 158 JASSM, in effect returning to the original F / A-18 multi-role configuration.
The upgrade also includes the procurement and integration of new AIM-9X Sidewinder and AIM-120C-7 AMRAAM air-to-air missiles. This Mid-Life Upgrade (MLU) is estimated to cost between €1 billion and €1.6 billion, with work scheduled to be finished by 2016. After the upgrades, the aircraft are to remain in active service until 2020–2025. In October 2014, the Finnish broadcaster Yle announced that consideration was being given to replacing the Hornet. Over half of the fleet had been upgraded by 1 June 2015. During that week, the Finnish Air Force was to drop its first live bombs (JDAM) in 70 years, since World War II. The Kuwait Air Force (Al Quwwat Aj Jawwaiya Al Kuwaitiya) ordered 32 F/A-18C and eight F/A-18D Hornets in 1988. Deliveries ran from October 1991 until August 1993. The F/A-18C/Ds replaced the A-4KU Skyhawk. Kuwait Air Force Hornets flew missions over Iraq during Operation Southern Watch in the 1990s. They have also participated in military exercises with the air forces of other Gulf nations. Kuwait had 39 F/A-18C/D Hornets in service in 2008. Kuwait has also participated in the Yemeni Civil War (2015–present). In February 2017, the Commander of the Kuwait Air Force revealed that the F/A-18s based at King Khalid Air Base had performed approximately 3,000 sorties over Yemen. The Royal Malaysian Air Force (Tentera Udara Diraja Malaysia) has eight F/A-18Ds. Deliveries ran from March until August 1997. The air force split its order between the F/A-18 and the Mikoyan MiG-29N. On 5 March 2013, three Hornets, together with five UK-made BAE Hawk 208s, carried out an airstrike on the hideout of the Royal Security Forces of the Sultanate of Sulu and North Borneo, which was occupying part of Borneo, just before the joint forces of the Malaysian Army and Royal Malaysia Police launched an assault in Operation Daulat. Malaysian Hornets were also deployed for close air support over the no-fly zone in eastern Sabah. The Spanish Air Force (Ejército del Aire) ordered 60 EF-18A and 12 EF-18B Hornets (the "E" standing for "España", Spain), designated C.15 and CE.15 respectively by the Spanish Air Force. Deliveries of the Spanish version ran from 22 November 1985 until July 1990. These fighters were upgraded to F-18A+/B+ standard, close to the F/A-18C/D (the plus standard includes later mission and armament computers, databuses, a data-storage set, new wiring, pylon modifications and software, and new capabilities such as the AN/AAS-38B NITE Hawk targeting FLIR pod). In 1995, Spain obtained 24 ex-USN F/A-18A Hornets, with six more on option. These were delivered from December 1995 until December 1998 and were modified to EF-18A+ standard before delivery. This was the first sale of surplus USN Hornets. Spanish Hornets operate as all-weather interceptors 60% of the time and as all-weather day/night attack aircraft for the remainder. In case of war, each of the front-line squadrons would take a primary role: 121 is tasked with tactical air support and maritime operations; 151 and 122 are assigned to all-weather interception and air combat roles; and 152 is assigned the SEAD mission. Air refueling is provided by KC-130Hs and Boeing 707TTs. Pilot conversion to the EF-18 is centralized in 153 Squadron (Ala 15). Squadron 462's role is the air defense of the Canary Islands, being responsible for fighter and attack missions from Gando AB.
Spanish Air Force EF-18 Hornets have flown ground attack, SEAD, and combat air patrol (CAP) combat missions in Bosnia and Kosovo, under NATO command, from the Aviano detachment in Italy. They shared the base with Canadian and USMC F/A-18s. Six Spanish Hornets had been lost in accidents by 2003. Over Yugoslavia, eight EF-18s based at Aviano AB participated in bombing raids during Operation Allied Force in 1999. Over Bosnia, they also performed air-to-air combat air patrol, close air support, photo reconnaissance, forward air controller (airborne), and tactical air controller (airborne) missions. Over Libya, four Spanish Hornets participated in enforcing a no-fly zone. The Swiss Air Force purchased 26 C models and eight D models. Deliveries ran from January 1996 until December 1999. Three D models and one C model had been lost in crashes as of 2016. On 14 October 2015, an F/A-18D crashed in France during training with two Swiss Air Force Northrop F-5s in the Swiss/French training area EURAC25; the pilot ejected safely. In late 2007, Switzerland requested to be included in the F/A-18C/D Upgrade 25 Program to extend the useful life of its F/A-18C/Ds. The program includes significant upgrades to the avionics and mission computer, 20 ATFLIR surveillance and targeting pods, and 44 sets of AN/ALR-67v3 ECM equipment. In October 2008, the Swiss Hornet fleet reached the 50,000 flight hour milestone. The Swiss Air Force has also taken delivery of two F/A-18C full-scale mock-ups for use as ground crew interactive training simulators. Locally built by Hugo Wolf AG, they are externally accurate copies and have been registered as Boeing F/A-18C (Hugo Wolf) aircraft with tail numbers X-5098 and X-5099. They include a complex equipment fit, including many original cockpit components and instruments, allowing the simulation of fires, fuel leaks, nosewheel collapse and other emergency scenarios. X-5098 is permanently stationed at Payerne Air Base, while X-5099, the first one built, is moved between air bases according to training demands. The F/A-18C and F/A-18D were considered by the French Navy (Marine Nationale) during the 1980s for deployment on their aircraft carriers Clemenceau and Foch, and again in the 1990s for the later nuclear-powered Charles de Gaulle, in the event that the Dassault Rafale M was not brought into service when originally planned. Austria, Chile, the Czech Republic, Hungary, the Philippines, Poland, and Singapore evaluated the Hornet but did not purchase it. Thailand ordered four C and four D model Hornets, but the Asian financial crisis of the late 1990s resulted in the order being canceled; the Hornets were completed as F/A-18Ds for the U.S. Marine Corps. The F/A-18A and the land-based F-18L competed for a fighter contract from Greece in the 1980s; the Greek government chose the F-16 and Mirage 2000 instead. The F/A-18A is the single-seat variant and the F/A-18B is the two-seat variant. The space for the two-seat cockpit is provided by a relocation of avionics equipment and a 6% reduction in internal fuel; two-seat Hornets are otherwise fully combat-capable. The B model is used primarily for training. In 1992, the original Hughes AN/APG-65 radar was replaced with the Hughes (now Raytheon) AN/APG-73, a faster and more capable radar. A-model Hornets that have been upgraded to the AN/APG-73 are designated F/A-18A+.
The F/A-18C is the single-seat variant and the F/A-18D is the two-seat variant. The D model can be configured for training or as an all-weather strike craft. The "missionized" D model's rear seat is configured for a Marine Corps Naval Flight Officer who functions as a Weapons and Sensors Officer to assist in operating the weapons systems. The F/A-18D is primarily operated by the U.S. Marine Corps in the night attack and Forward Air Controller (Airborne) (FAC(A)) roles. The F/A-18C and D models are the result of a block upgrade in 1987 incorporating upgraded radar, avionics, and the capacity to carry new missiles such as the AIM-120 AMRAAM air-to-air missile and the AGM-65 Maverick and AGM-84 Harpoon air-to-surface missiles. Other upgrades include the Martin-Baker NACES (Navy Aircrew Common Ejection Seat) and a self-protection jammer. A synthetic aperture ground mapping radar enables the pilot to locate targets in poor visibility conditions. C and D models delivered since 1989 also have improved night attack abilities, consisting of the Hughes AN/AAR-50 thermal navigation pod, the Loral AN/AAS-38 NITE Hawk FLIR (forward looking infrared array) targeting pod, night vision goggles, two full-color (formerly monochrome) multi-function displays (MFDs) and a color moving map. In addition, 60 D-model Hornets are configured as the night attack F/A-18D(RC) with reconnaissance capability. These can be outfitted with the ATARS electro-optical sensor package, which includes a sensor pod and equipment mounted in place of the M61 cannon. Beginning in 1992, the F404-GE-402 enhanced performance engine, providing approximately 10% more maximum static thrust, became the standard Hornet engine. From 1993, the AAS-38A NITE Hawk added a laser designator/ranger, allowing it to mark its own targets; the later AAS-38B added the ability to strike targets designated by lasers from other aircraft. Production of the C and D models ended in 2000. The last F/A-18C was assembled in Finland and delivered to the Finnish Air Force in August 2000; the last F/A-18D was delivered to the U.S. Marine Corps the same month. The single-seat F/A-18E and two-seat F/A-18F, both officially named Super Hornet, carry over the name and design concept of the original F/A-18 but have been extensively redesigned by McDonnell Douglas. The Super Hornet has a new, 25% larger airframe, larger rectangular air intakes, more powerful GE F414 engines based on the F/A-18's F404, and an upgraded avionics suite. Like the Marine Corps' F/A-18D, the Navy's F/A-18F carries a naval flight officer as a second crew member in a weapon systems officer (WSO) role. The Super Hornet is unofficially known as the "Rhino" in operational use, a name chosen to distinguish the newer variants from the legacy F-18A/B/C/D Hornet and avoid confusion during carrier deck operations. The Super Hornet is also operated by Australia. The EA-18G Growler is an electronic warfare version of the two-seat F/A-18F, which entered production in 2007. The Growler is replacing the Navy's EA-6B Prowler and carries a Naval Flight Officer as a second crewman in an Electronic Countermeasures Officer (ECMO) role. These designations are not part of the 1962 United States Tri-Service aircraft designation system.
Hornets make frequent appearances in action movies and military novels. The Hornet was featured in the film Independence Day and in 1998's Godzilla. The Hornet also has a major role in the computer simulators Jane's US Navy Fighters (1994), Jane's Fighters Anthology (1997) and Jane's F/A-18 Simulator.
who was the first woman nominated member of the rajya shabha
List of nominated members of Rajya Sabha - wikipedia As per the Fourth Schedule (Articles 4(1) and 80(2)) of the Constitution of India, 12 members are nominated by the President for a term of six years for their contributions to art, literature, science, and social services. This article provides both a current list and a complete list of members of the Rajya Sabha who have been nominated by the President.
how many anne of green gables books are there
Anne of Green Gables - wikipedia Anne of Green Gables is a 1908 novel by Canadian author Lucy Maud Montgomery (published as L.M. Montgomery). Written for all ages, it has been considered a children's novel since the mid-twentieth century. It recounts the adventures of Anne Shirley, an 11-year-old orphan girl who is mistakenly sent to Matthew and Marilla Cuthbert, a middle-aged brother and sister who had intended to adopt a boy to help them on their farm in the fictional town of Avonlea on Prince Edward Island. The novel recounts how Anne makes her way with the Cuthberts, in school, and within the town. Since its publication, Anne of Green Gables has sold more than 50 million copies and has been translated into at least 36 languages. Montgomery wrote numerous sequels, and since her death another sequel has been published, as well as an authorized prequel. The original book is taught to students around the world. The book has been adapted as films, made-for-television movies, and animated and live-action television series. Musicals and plays have also been created, with productions annually in Canada since 1964 of the first musical production, which has toured in Canada, the United States, Europe, and Japan. In writing the novel, Montgomery was inspired by notes she had made as a young girl about a couple who were mistakenly sent an orphan girl instead of the boy they had requested, yet decided to keep her. She drew upon her own childhood experiences in rural Prince Edward Island, Canada. Montgomery used a photograph of Evelyn Nesbit, which she had clipped from New York's Metropolitan Magazine and put on the wall of her bedroom, as the model for the face of Anne Shirley and a reminder of her "youthful idealism and spirituality." Montgomery was also inspired by the "formula Ann" orphan stories (so called because they followed a predictable formula), which were popular at the time, and distinguished her character by spelling her name with an extra "e". She based other characters, such as Gilbert Blythe, in part on people she knew. She said she wrote the novel in the twilight of the day, while sitting at her window and overlooking the fields of Cavendish. Anne Shirley, a young orphan from the fictional community of Bolingbroke, Nova Scotia (based upon the real community of New London, Prince Edward Island), is sent to live with Marilla and Matthew Cuthbert, siblings in their fifties and sixties, after a childhood spent in strangers' homes and orphanages. Marilla and Matthew had originally decided to adopt a boy from the orphanage to help Matthew run their farm at Green Gables, which is set in the fictional town of Avonlea. Through a misunderstanding, the orphanage sends Anne instead. Anne is highly imaginative, eager to please and quite dramatic at times. However, she is defensive about her appearance, despising her red hair and pale, thin frame. She is often quite talkative, especially when it comes to describing her fantasies and dreams. At first, the stern and sharp Marilla says Anne must return to the orphanage, but after much observation and consideration, along with Matthew's strong liking for Anne, she decides to let her stay. As a child of imagination, Anne takes much joy in life and adapts quickly, thriving in the close-knit farming village. Her imagination and talkativeness soon brighten up Green Gables.
The book recounts Anne's adventures in making a home: the country school where she quickly excels in her studies; her friendship with Diana Barry, the girl living next door (her best or "bosom friend", as Anne fondly calls her); her budding literary ambitions; and her rivalry with her classmate Gilbert Blythe, who teases her about her red hair. For that, he earns her instant hatred, although he apologizes many times. As time passes, Anne realizes she no longer hates Gilbert but cannot bring herself to speak to him. The book also follows Anne's adventures with her new-found friends. Episodes include her play-time with her friends Diana, a calm girl named Jane Andrews, and a good-natured but often hysterical girl called Ruby Gillis; her run-ins with the unpleasant Pye sisters, Gertie and Josie; and domestic mishaps such as dyeing her hair green while intending to dye it black, and accidentally getting Diana drunk by giving her what she thought was raspberry cordial but turned out to be currant wine. At sixteen, Anne goes to Queen's Academy to earn a teaching license, along with Gilbert, Ruby, Josie, Jane, and several other students, excluding Diana, much to Anne's dismay. She obtains her license in one year instead of the usual two and wins the Avery Scholarship for the top student in English. This scholarship would allow her to pursue a Bachelor of Arts (B.A.) degree at the fictional Redmond College (based on the real Dalhousie University) on the mainland in Nova Scotia. Near the end of the book, however, tragedy strikes when Matthew dies of a heart attack after learning that all of his and Marilla's money has been lost in a bank failure. Out of devotion to Marilla and Green Gables, Anne gives up the scholarship to stay at home and help Marilla, whose eyesight is failing. She plans to teach at the Carmody school, the nearest school available, and return to Green Gables on weekends. In an act of friendship, Gilbert Blythe gives up his teaching position at the Avonlea School to work at the White Sands School instead, knowing that Anne wants to stay close to Marilla after Matthew's death. After this kind act, Anne and Gilbert's friendship is cemented, and Anne looks forward to what life will bring next. Based on the popularity of her first book, Montgomery wrote a series of sequels to continue the story of her heroine Anne Shirley; they proceed chronologically through Anne's life. The prequel, Before Green Gables (2008), was written by Budge Wilson with the authorization of the heirs of L.M. Montgomery. The province and tourist facilities have highlighted the local connections to the internationally popular novels. Anne of Green Gables has been translated into 36 languages. "Tourism by Anne fans is an important part of the Island economy". Merchants offer items based on the novels. The Green Gables farmhouse is located in Cavendish, Prince Edward Island. Many tourist attractions on Prince Edward Island have been developed based on the fictional Anne, and provincial licence plates once bore her image. Balsam Hollow, the forest that inspired the Haunted Woods, and Campbell Pond, the body of water which inspired the Lake of Shining Waters, both described in the book, are located in the vicinity. In addition, the Confederation Centre of the Arts has featured the wildly successful Anne of Green Gables musical on its mainstage every summer for over five decades.
The Anne of Green Gables Museum is located in Park Corner, PEI, in a home that inspired L.M. Montgomery. The novel has been very popular in Japan, where it is known as Red-haired Anne, and where it has been included in the national school curriculum since 1952. Anne is revered as "an icon" in Japan, especially since 1979, when the story was broadcast as the anime Anne of Green Gables. Japanese couples travel to Prince Edward Island to have civil wedding ceremonies on the grounds of the Green Gables farm. Some Japanese girls arrive as tourists with red-dyed hair styled in pigtails, to look like Anne. In 2014, the asadora Hanako to Anne (about Hanako Muraoka, the novel's first Japanese translator) was broadcast, and Anne became popular among old and young alike. A replica of the Green Gables house in Cavendish is located in the theme park Canadian World in Ashibetsu, Hokkaido, Japan. The park was a less expensive alternative for Japanese tourists than traveling to P.E.I. The park hosted performances featuring actresses playing Anne and Diana. The theme park is open during the summer season with free admission, though there are no longer staff or interpreters. The Avonlea theme park near Cavendish and the Cavendish Figurines shop have trappings so that tourists may dress like the book's characters for photos. Souvenir shops throughout Prince Edward Island offer numerous foods and products based on details of the Anne Shirley novels. Straw hats for girls with sewn-in red braids are common, as are bottles of raspberry cordial soda. In the first book, Lucy Maud Montgomery established the cordial soda as the favorite beverage of Anne, who declares: "I just love bright red drinks!" As one of the most familiar characters in Canadian literature, Anne of Green Gables has been parodied by several Canadian comedy troupes, including CODCO (Anne of Green Gut) and The Frantics (Fran of the Fundy).
the winds of war i - the winds rise
The Winds of War (miniseries) - wikipedia The Winds of War is a 1983 miniseries, directed and produced by Dan Curtis, that follows the book of the same name written by Herman Wouk. Just as in the book, much time in the miniseries is devoted not only to the lives of the Henry and Jastrow families but also to the major global events of this period. Adolf Hitler and the German military staff, with the fictitious general Armin von Roon as a major character, are the subject of a prominent subplot. The Winds of War also includes segments of documentary footage, narrated by William Woodson, to explain major events and introduce important characters. It was followed by a sequel, War and Remembrance, in 1988, also based on a novel by Wouk and also directed and produced by Curtis. The film follows the plot of Wouk's novel closely, depicting events from March 1939 until the entry of the United States into World War II in December 1941. It tells the story of Victor "Pug" Henry, his family, and their relationships with a mixture of real people and fictional characters. Henry is a naval officer and friend of President Franklin Delano Roosevelt. Author Herman Wouk was initially negative and skeptical about a motion picture adaptation of his beloved, scrupulously researched novel, having been displeased with several earlier adaptations of his work. But in 1983, The Winds of War became a successful miniseries, co-produced by Paramount Pictures and the ABC television network, on which it aired, and directed by Dan Curtis. The almost 15-hour-long series was shown by ABC in seven parts over seven evenings, between February 6 and February 13, 1983, and attracted an average of 80 million viewers per night. The show was a success throughout the United States and received many accolades, including Golden Globe nominations and various Emmy wins and nominations.
never forget you by zara larsson and mnek
Never Forget You (Zara Larsson and MNEK song) - wikipedia "Never Forget You" is a song by Swedish singer Zara Larsson and British singer MNEK. The song was released in the United Kingdom on 22 July 2015 as a digital download, as the second single from her second and international debut studio album, So Good. The song reached the top 10 in several countries, including the United Kingdom, Sweden, Denmark and Finland, and became both Larsson's and MNEK's first US chart entry, peaking at number 13 in May 2016, both artists' best showing on that chart to date. The single is certified 5× Platinum in Sweden and 3× Platinum in the US. The song was remixed by producer Kayzo. A music video to accompany the release of "Never Forget You" was first released onto Larsson's official Vevo account on 17 September 2015. It features a girl who meets a creature, or perhaps invents an imaginary friend. It shows the girl growing up and always thinking about and spending time with the creature (played by Chris Fleming) or imaginary friend. The video was filmed in Iceland and was directed by Richard Paris Wilson. As of August 2017, it has received over 340 million views.
airtel digital tv star sports 1 channel number
Star Sports - Wikipedia Star Sports is a network of Indian sports channels owned by Star India, a subsidiary of 21st Century Fox. Channel numbers for the network include: Channel 301 (Star Sports 1); Channel 302 (Star Sports 2); Channel 310 (Star Sports 1 Hindi); Channel 311 (Star Sports 1 Tamil); Channel 809 (Star Sports HD1); Channel 810 (Star Sports HD2); Channel 811 (Star Sports 1 Hindi HD); Channel 303 (Star Sports Select 1); Channel 304 (Star Sports Select 2); Channel 813 (Star Sports Select 1 HD); Channel 814 (Star Sports Select 2 HD). The network broadcasts a number of notable events, including Ultimate Table Tennis. Hotstar is the digital platform owned by Star India and is available in India. The website follows a freemium service model wherein it offers free as well as premium content. Other than what Star Sports broadcasts, it also webcasts some other tournaments played in India, domestic as well as international.
how many surface layers does the skin have
Human skin - wikipedia The human skin is the outer covering of the body. In humans, it is the largest organ of the integumentary system. The skin has up to seven layers of ectodermal tissue and guards the underlying muscles, bones, ligaments and internal organs. Human skin is similar to that of most other mammals. Though nearly all human skin is covered with hair follicles, it can appear hairless. There are two general types of skin: hairy and glabrous (hairless) skin. The adjective cutaneous literally means "of the skin" (from Latin cutis, skin). Because it interfaces with the environment, skin plays an important immunity role in protecting the body against pathogens and excessive water loss. Its other functions are insulation, temperature regulation, sensation, synthesis of vitamin D, and the protection of vitamin B folates. Severely damaged skin will try to heal by forming scar tissue, which is often discolored and depigmented. In humans, skin pigmentation varies among populations, and skin type can range from dry to oily. Such skin variety provides a rich and diverse habitat for the roughly 1,000 species of bacteria, from 19 phyla, present on human skin. Skin has mesodermal cells and pigmentation, such as the melanin provided by melanocytes, which absorbs some of the potentially dangerous ultraviolet radiation (UV) in sunlight. It also contains DNA repair enzymes that help reverse UV damage, such that people lacking the genes for these enzymes suffer high rates of skin cancer. One form predominantly produced by UV light, malignant melanoma, is particularly invasive, spreads quickly, and can often be deadly. Human skin pigmentation varies among populations in a striking manner, which has led to the classification of people on the basis of skin color. The skin is the largest organ in the human body. For the average adult human, the skin has a surface area of between 1.5 and 2.0 square meters (16.1–21.5 sq ft). The thickness of the skin varies considerably over all parts of the body, and between men and women and the young and the old. An example is the skin on the forearm, which is on average 1.3 mm thick in the male and 1.26 mm in the female. The average square inch (6.5 cm2) of skin holds 650 sweat glands, 20 blood vessels, 60,000 melanocytes, and more than 1,000 nerve endings. The average human skin cell is about 30 micrometers in diameter, but there are variants; a skin cell usually ranges from 25 to 40 micrometers, depending on a variety of factors. Skin is composed of three primary layers: the epidermis, the dermis and the hypodermis. Epidermis, "epi" coming from the Greek meaning "over" or "upon", is the outermost layer of the skin. It forms the waterproof, protective wrap over the body's surface which also serves as a barrier to infection, and is made up of stratified squamous epithelium with an underlying basal lamina. The epidermis contains no blood vessels, and cells in the deepest layers are nourished almost exclusively by diffused oxygen from the surrounding air and, to a far lesser degree, by blood capillaries extending to the outer layers of the dermis. The main types of cells which make up the epidermis are Merkel cells and keratinocytes, with melanocytes and Langerhans cells also present. The epidermis can be further subdivided into the following strata (beginning with the outermost layer): corneum, lucidum (only in palms of hands and bottoms of feet), granulosum, spinosum, basale. Cells are formed through mitosis at the basale layer.
The daughter cells (see cell division) move up the strata, changing shape and composition as they die due to isolation from their blood source. The cytoplasm is released and the protein keratin is inserted. They eventually reach the corneum and slough off (desquamation). This process is called "keratinization". This keratinized layer of skin is responsible for keeping water in the body and keeping other harmful chemicals and pathogens out, making skin a natural barrier to infection. The epidermis contains no blood vessels, and is nourished by diffusion from the dermis. The main types of cells which make up the epidermis are keratinocytes, melanocytes, Langerhans cells and Merkel cells. The epidermis helps the skin to regulate body temperature. The epidermis is divided into several layers, with cells formed through mitosis at the innermost layers. They move up the strata, changing shape and composition as they differentiate and become filled with keratin. They eventually reach the top layer, called the stratum corneum, and are sloughed off, or desquamated. This process is called keratinization and takes place within weeks. The outermost layer of the epidermis consists of 25 to 30 layers of dead cells. The epidermis is divided into five sublayers, or strata, as listed above. Blood capillaries are found beneath the epidermis, and are linked to an arteriole and a venule. Arterial shunt vessels may bypass the network in the ears, the nose and the fingertips. About 70% of all human protein-coding genes are expressed in the skin. Almost 500 genes have an elevated pattern of expression in the skin. There are fewer than 100 genes that are specific for the skin, and these are expressed in the epidermis. An analysis of the corresponding proteins shows that these are mainly expressed in keratinocytes and have functions related to squamous differentiation and cornification. The dermis is the layer of skin beneath the epidermis that consists of connective tissue and cushions the body from stress and strain. The dermis is tightly connected to the epidermis by a basement membrane. It also harbors many nerve endings that provide the sense of touch and heat. It contains the hair follicles, sweat glands, sebaceous glands, apocrine glands, lymphatic vessels and blood vessels. The blood vessels in the dermis provide nourishment and waste removal for its own cells as well as for the stratum basale of the epidermis. The dermis is structurally divided into two areas: a superficial area adjacent to the epidermis, called the papillary region, and a deep thicker area known as the reticular region. The papillary region is composed of loose areolar connective tissue. It is named for its fingerlike projections, called papillae, that extend toward the epidermis. The papillae provide the dermis with a "bumpy" surface that interdigitates with the epidermis, strengthening the connection between the two layers of skin. In the palms, fingers, soles, and toes, the influence of the papillae projecting into the epidermis forms contours in the skin's surface. These epidermal ridges occur in patterns (see: fingerprint) that are genetically and epigenetically determined and are therefore unique to the individual, making it possible to use fingerprints or footprints as a means of identification. The reticular region lies deep in the papillary region and is usually much thicker. It is composed of dense irregular connective tissue, and receives its name from the dense concentration of collagenous, elastic, and reticular fibers that weave throughout it.
These protein fibers give the dermis its properties of strength, extensibility, and elasticity. Also located within the reticular region are the roots of the hairs, sebaceous glands, sweat glands, receptors, nails, and blood vessels. Tattoo ink is held in the dermis. Stretch marks, often from pregnancy and obesity, are also located in the dermis. The subcutaneous tissue (also hypodermis and subcutis) is not part of the skin, and lies below the dermis of the cutis. Its purpose is to attach the skin to underlying bone and muscle as well as to supply it with blood vessels and nerves. It consists of loose connective tissue, adipose tissue and elastin. The main cell types are fibroblasts, macrophages and adipocytes (the subcutaneous tissue contains 50% of body fat). Fat serves as padding and insulation for the body. Human skin shows a wide variety of color, from the darkest brown to the lightest pinkish-white hues. Human skin shows higher variation in color than any other single mammalian species; this is the result of natural selection. Skin pigmentation in humans evolved primarily to regulate the amount of ultraviolet radiation (UVR) penetrating the skin, controlling its biochemical effects. The actual skin color of different humans is affected by many substances, although the single most important substance determining human skin color is the pigment melanin. Melanin is produced within the skin in cells called melanocytes, and it is the main determinant of the skin color of darker-skinned humans. The skin color of people with light skin is determined mainly by the bluish-white connective tissue under the dermis and by the hemoglobin circulating in the veins of the dermis. The red color underlying the skin becomes more visible, especially in the face, when, as a consequence of physical exercise or the stimulation of the nervous system (anger, fear), arterioles dilate. There are at least five different pigments that determine the color of the skin. These pigments are present at different levels and places. There is a correlation between the geographic distribution of UV radiation (UVR) and the distribution of indigenous skin pigmentation around the world. Areas that receive higher amounts of UVR generally have darker-skinned populations, located nearer the equator. Areas that are far from the tropics and closer to the poles have a lower concentration of UVR, which is reflected in lighter-skinned populations. In the same population, it has been observed that adult human females are considerably lighter in skin pigmentation than males. Females need more calcium during pregnancy and lactation, and vitamin D, which is synthesized from sunlight, helps in absorbing calcium. For this reason it is thought that females may have evolved to have lighter skin in order to help their bodies absorb more calcium. The Fitzpatrick scale is a numerical classification schema for human skin color, developed in 1975 as a way to classify the typical response of different types of skin to ultraviolet (UV) light. As skin ages, it becomes thinner and more easily damaged. Intensifying this effect is the decreasing ability of skin to heal itself as a person ages. Among other things, skin aging is noted by a decrease in volume and elasticity. There are many internal and external causes of skin aging. For example, aging skin receives less blood flow and lower glandular activity.
A validated comprehensive grading scale has categorized the clinical findings of skin aging as laxity (sagging), rhytids (wrinkles), and the various facets of photoaging, including erythema (redness), telangiectasia, dyspigmentation (brown discoloration), solar elastosis (yellowing), keratoses (abnormal growths) and poor texture. Cortisol causes degradation of collagen, accelerating skin aging. Anti-aging supplements are used to treat skin aging. Photoaging has two main concerns: an increased risk for skin cancer and the appearance of damaged skin. In younger skin, sun damage will heal faster, since the cells in the epidermis have a faster turnover rate, while in the older population the skin becomes thinner and the epidermal turnover rate for cell repair is lower, which may result in the dermis layer being damaged. Skin performs many functions, several of which (protection, insulation, temperature regulation, sensation) were outlined above. The human skin is a rich environment for microbes. Around 1,000 species of bacteria from 19 bacterial phyla have been found. Most come from only four phyla: Actinobacteria (51.8%), Firmicutes (24.4%), Proteobacteria (16.5%), and Bacteroidetes (6.3%). Propionibacteria and Staphylococci species were the main species in sebaceous areas. There are three main ecological areas: moist, dry and sebaceous. In moist places on the body, Corynebacteria together with Staphylococci dominate. In dry areas there is a mixture of species, dominated by β-Proteobacteria and Flavobacteriales. Ecologically, sebaceous areas had greater species richness than moist and dry ones. The areas with the least similarity between people in species were the spaces between fingers, the spaces between toes, the axillae, and the umbilical cord stump. The most similar sites were beside the nostril, the nares (inside the nostril), and on the back. Reflecting upon the diversity of the human skin, researchers of the human skin microbiome have observed: "hairy, moist underarms lie a short distance from smooth dry forearms, but these two niches are likely as ecologically dissimilar as rainforests are to deserts." The NIH has launched the Human Microbiome Project to characterize the human microbiota, which includes that on the skin, and the role of this microbiome in health and disease. Microorganisms like Staphylococcus epidermidis colonize the skin surface. The density of skin flora depends on the region of the skin. The disinfected skin surface gets recolonized from bacteria residing in the deeper areas of the hair follicle, gut and urogenital openings. Diseases of the skin include skin infections and skin neoplasms (including skin cancer). Dermatology is the branch of medicine that deals with conditions of the skin. The skin supports its own ecosystems of microorganisms, including yeasts and bacteria, which cannot be removed by any amount of cleaning. Estimates place the number of individual bacteria on the surface of one square inch (6.5 square cm) of human skin at 50 million, though this figure varies greatly over the average 20 square feet (1.9 m²) of human skin. Oily surfaces, such as the face, may contain over 500 million bacteria per square inch (6.5 cm2). Despite these vast quantities, all of the bacteria found on the skin's surface would fit into a volume the size of a pea. In general, the microorganisms keep one another in check and are part of a healthy skin. When the balance is disturbed, there may be an overgrowth and infection, such as when antibiotics kill microbes, resulting in an overgrowth of yeast.
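Two of the per-area figures quoted in this article invite quick back-of-the-envelope totals: the 650 sweat glands per square inch given in the anatomy overview, and the 50 million bacteria per square inch given here. The sketch below is an order-of-magnitude estimate only; the 1.75 m² body surface area is an assumed midpoint of the 1.5–2.0 m² range quoted above, not a figure from the source.
\[
A \approx 1.75\,\mathrm{m^2}\times 1550\,\mathrm{in^2/m^2}\approx 2.7\times 10^{3}\,\mathrm{in^2},\qquad
N_{\text{glands}}\approx 650\,\mathrm{in^{-2}}\times 2.7\times 10^{3}\,\mathrm{in^2}\approx 1.8\times 10^{6};
\]
\[
20\,\mathrm{ft^2}\times 144\,\mathrm{in^2/ft^2}=2880\,\mathrm{in^2},\qquad
N_{\text{bacteria}}\approx 5\times 10^{7}\,\mathrm{in^{-2}}\times 2880\,\mathrm{in^2}\approx 1.4\times 10^{11}.
\]
A hundred billion or so bacteria, each on the order of a cubic micrometer, amount to only about 0.1–0.2 cm³ in total, which is consistent with the pea-sized volume mentioned above.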
The skin is continuous with the inner epithelial lining of the body at the orifices, each of which supports its own complement of microbes. Cosmetics should be used carefully on the skin because they may cause allergic reactions. Each season requires suitable clothing in order to facilitate the evaporation of sweat. Sunlight, water and air play an important role in keeping the skin healthy. Oily skin is caused by over-active sebaceous glands that produce a substance called sebum, a naturally healthy skin lubricant. When the skin produces excessive sebum, it becomes heavy and thick in texture. Oily skin is typified by shininess, blemishes and pimples. The oily-skin type is not necessarily bad, since such skin is less prone to wrinkling or other signs of aging, because the oil helps to keep needed moisture locked into the epidermis (the outermost layer of skin). The negative aspect of the oily-skin type is that oily complexions are especially susceptible to clogged pores, blackheads, and buildup of dead skin cells on the surface of the skin. Oily skin can be sallow and rough in texture and tends to have large, clearly visible pores everywhere except around the eyes and neck. Human skin has a low permeability; that is, most foreign substances are unable to penetrate and diffuse through the skin. The skin's outermost layer, the stratum corneum, is an effective barrier to most inorganic nanosized particles. This protects the body from external particles such as toxins by not allowing them to come into contact with internal tissues. However, in some cases it is desirable to allow particles entry to the body through the skin. Potential medical applications of such particle transfer have prompted developments in nanomedicine and biology to increase skin permeability. One application of transcutaneous particle delivery could be to locate and treat cancer. Nanomedical researchers seek to target the epidermis and other layers of active cell division, where nanoparticles can interact directly with cells that have lost their growth-control mechanisms (cancer cells). Such direct interaction could be used to diagnose properties of specific tumors more accurately, or to treat them by delivering drugs with cellular specificity. Nanoparticles 40 nm in diameter and smaller have been successful in penetrating the skin. Research confirms that nanoparticles larger than 40 nm do not penetrate the skin past the stratum corneum. Most particles that do penetrate will diffuse through skin cells, but some will travel down hair follicles and reach the dermis layer. The permeability of skin relative to different shapes of nanoparticles has also been studied. Research has shown that spherical particles have a better ability to penetrate the skin than oblong (ellipsoidal) particles, because spheres are symmetric in all three spatial dimensions. One study compared the two shapes and recorded data showing spherical particles located deep in the epidermis and dermis, whereas ellipsoidal particles were mainly found in the stratum corneum and epidermal layers. Nanorods are used in experiments because of their unique fluorescent properties, but have shown mediocre penetration. Nanoparticles of different materials have revealed the skin's permeability limitations. In many experiments, gold nanoparticles 40 nm in diameter or smaller are used and have been shown to penetrate to the epidermis. Titanium oxide (TiO2), zinc oxide (ZnO), and silver nanoparticles are ineffective in penetrating the skin past the stratum corneum.
Cadmium selenide (CdSe) quantum dots have proven to penetrate very effectively when they have certain properties. Because CdSe is toxic to living organisms, the particle must be covered in a surface group. An experiment comparing the permeability of quantum dots coated in polyethylene glycol (PEG), PEG-amine, and carboxylic acid concluded that the PEG and PEG-amine surface groups allowed for the greatest penetration of particles. The carboxylic acid coated particles did not penetrate past the stratum corneum. Scientists previously believed that the skin was an effective barrier to inorganic particles, and that damage from mechanical stressors was the only way to increase its permeability. Recently, however, simpler and more effective methods for increasing skin permeability have been developed. For example, ultraviolet radiation (UVR) has been used to slightly damage the surface of skin, causing a time-dependent defect allowing easier penetration of nanoparticles. The UVR's high energy causes a restructuring of cells, weakening the boundary between the stratum corneum and the epidermal layer. The damage to the skin is typically measured by the transepidermal water loss (TEWL), though it may take 3–5 days for the TEWL to reach its peak value. When the TEWL reaches its highest value, the maximum density of nanoparticles is able to permeate the skin. Studies confirm that UVR-damaged skin shows significantly increased permeability. The effects of increased permeability after UVR exposure can lead to an increase in the number of particles that permeate the skin. However, the specific permeability of skin after UVR exposure relative to particles of different sizes and materials has not been determined. Other skin-damaging methods used to increase nanoparticle penetration include tape stripping, skin abrasion, and chemical enhancement. Tape stripping is the process in which tape is applied to skin and then lifted to remove the top layer of skin. Skin abrasion is done by shaving the top 5–10 micrometers off the surface of the skin. Chemical enhancement is the process in which chemicals such as polyvinylpyrrolidone (PVP), dimethyl sulfoxide (DMSO), and oleic acid are applied to the surface of the skin to increase permeability. Electroporation is the application of short pulses of electric fields to the skin and has proven to increase skin permeability. The pulses are high voltage and on the order of milliseconds when applied. Charged molecules penetrate the skin more frequently than neutral molecules after the skin has been exposed to electric field pulses. Results have shown molecules on the order of 100 micrometers to easily permeate electroporated skin. A large area of interest in nanomedicine is the transdermal patch, because of the possibility of a painless application of therapeutic agents with very few side effects. Transdermal patches have been limited to administering a small number of drugs, such as nicotine, because of the limitations in permeability of the skin. Development of techniques that increase skin permeability has led to more drugs that can be applied via transdermal patches and more options for patients. Increasing the permeability of skin allows nanoparticles to penetrate and target cancer cells. Nanoparticles, along with multi-modal imaging techniques, have been used as a way to diagnose cancer non-invasively.
Skin with high permeability allowed quantum dots, with an antibody attached to the surface for active targeting, to successfully penetrate and identify cancerous tumors in mice. Tumor targeting is beneficial because the particles can be excited using fluorescence microscopy and emit light energy and heat that will destroy cancer cells. Sunblock and sunscreen are different but important skin-care products, though both offer protection from the sun. Sunblock -- Sunblock is opaque and stronger than sunscreen, since it is able to block most of the UVA/UVB rays and radiation from the sun, and does not need to be reapplied several times in a day. Titanium dioxide and zinc oxide are two of the important ingredients in sunblock. Sunscreen -- Sunscreen is more transparent once applied to the skin and also has the ability to protect against UVA/UVB rays, although the sunscreen's ingredients have the ability to break down at a faster rate once exposed to sunlight, and some of the radiation is able to penetrate to the skin. In order for sunscreen to be more effective it is necessary to reapply it consistently and to use one with a higher sun protection factor. Vitamin A, also known as retinoids, benefits the skin by normalizing keratinization, downregulating sebum production which contributes to acne, and reversing and treating photodamage, striae, and cellulite. Vitamin D and analogs are used to downregulate the cutaneous immune system and epithelial proliferation while promoting differentiation. Vitamin C is an antioxidant that regulates collagen synthesis, forms barrier lipids, regenerates vitamin E, and provides photoprotection. Vitamin E is a membrane antioxidant that protects against oxidative damage and also provides protection against harmful UV rays. Several scientific studies have confirmed that changes in baseline nutritional status affect skin condition. The Mayo Clinic lists foods they state help the skin: yellow, green, and orange fruits and vegetables; fat-free dairy products; whole-grain foods; fatty fish; and nuts.
where does the eustachian tube connect to the throat
Eustachian tube - wikipedia The Eustachian tube, also known as the auditory tube or pharyngotympanic tube, is a tube that links the nasopharynx to the middle ear, of which it is a part. In adult humans the Eustachian tube is approximately 35 mm (1.4 in) long and 3 mm (0.12 in) in diameter. It is named after the sixteenth-century anatomist Bartolomeo Eustachi. In humans and other land animals the middle ear (like the ear canal) is normally filled with air. Unlike the open ear canal, however, the air of the middle ear is not in direct contact with the atmosphere outside the body. The Eustachian tube connects the chamber of the middle ear to the back of the nasopharynx. Normally, the Eustachian tube is collapsed, but it gapes open both with swallowing and with positive pressure. When taking off in an airplane, the surrounding air pressure goes from higher (on the ground) to lower (in the sky). The air in the middle ear expands as the plane gains altitude, and pushes its way into the back of the nose and mouth. On the way down, the volume of air in the middle ear shrinks, and a slight vacuum is produced. Active opening of the Eustachian tube is required to equalize the pressure between the middle ear and the surrounding atmosphere as the plane descends. A diver also experiences this change in pressure, but at a greater rate; active opening of the Eustachian tube is required more frequently as the diver goes deeper, into higher pressure. The Eustachian tube extends from the anterior wall of the middle ear to the lateral wall of the nasopharynx, approximately at the level of the inferior nasal concha. It consists of a bony part and a cartilaginous part. The bony part, the third of the tube nearest the middle ear, is about 12 mm in length. It begins in the anterior wall of the tympanic cavity, below the septum canalis musculotubarii, and, gradually narrowing, ends at the angle of junction of the squamous and the petrous parts of the temporal bone, its extremity presenting a jagged margin which serves for the attachment of the cartilaginous part. The cartilaginous part of the Eustachian tube is about 24 mm in length and is formed of a triangular plate of elastic fibrocartilage, the apex of which is attached to the margin of the medial end of the bony part of the tube, while its base lies directly under the mucous membrane of the nasal part of the pharynx, where it forms an elevation, the torus tubarius or cushion, behind the pharyngeal opening of the auditory tube. The upper edge of the cartilage is curled upon itself, being bent laterally so as to present, on transverse section, the appearance of a hook; a groove or furrow is thus produced, which is open below and laterally, and this part of the canal is completed by fibrous membrane. The cartilage lies in a groove between the petrous part of the temporal bone and the great wing of the sphenoid; this groove ends opposite the middle of the medial pterygoid plate. The cartilaginous and bony portions of the tube are not in the same plane, the former inclining downward a little more than the latter. The diameter of the tube is not uniform throughout, being greatest at the pharyngeal opening, least at the junction of the bony and cartilaginous portions, and again increased toward the tympanic cavity; the narrowest part of the tube is termed the isthmus. The position and relations of the pharyngeal opening are described with the nasal part of the pharynx.
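The airplane and diving behavior described above follows directly from Boyle's law. As a hedged, order-of-magnitude sketch (the 75 kPa cabin pressure is a typical assumed value for a pressurized airliner at cruise, not a figure from this article), the tendency of middle-ear air to expand during a climb can be estimated:
\[
P_1V_1=P_2V_2\quad\Longrightarrow\quad \frac{V_2}{V_1}=\frac{P_1}{P_2}\approx\frac{101\,\mathrm{kPa}}{75\,\mathrm{kPa}}\approx 1.35,
\]
so air sealed in the middle ear at ground pressure tends to expand by roughly a third during the climb unless vented through the tube. For a diver the rates are steeper: each 10 m of seawater adds about one atmosphere, so at 10 m depth the ambient pressure has already doubled, which is why equalization is needed much more frequently underwater.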
The mucous membrane of the tube is continuous in front with that of the nasal part of the pharynx, and behind with that of the tympanic cavity; it is covered with ciliated pseudostratified columnar epithelium and is thin in the osseous portion, while in the cartilaginous portion it contains many mucous glands and, near the pharyngeal orifice, a considerable amount of adenoid tissue, which has been named by Gerlach the tube tonsil. There are four muscles associated with the function of the Eustachian tube: the tensor veli palatini, the levator veli palatini, the salpingopharyngeus, and the tensor tympani. The tube is opened during swallowing by contraction of the tensor veli palatini and levator veli palatini, muscles of the soft palate. The Eustachian tube is derived from the ventral part of the first pharyngeal pouch and the second endodermal pouch, which during embryogenesis form the tubotympanic recess. The distal part of the tubotympanic sulcus gives rise to the tympanic cavity, while the proximal tubular structure becomes the Eustachian tube. By keeping middle-ear pressure equalized, the tube also supports the normal transmission of sound waves. Under normal circumstances, the human Eustachian tube is closed, but it can open to let a small amount of air through to prevent damage by equalizing pressure between the middle ear and the atmosphere. Pressure differences cause temporary conductive hearing loss by decreasing the motion of the tympanic membrane and ossicles of the ear. Various methods of ear clearing, such as yawning, swallowing, or chewing gum, may be used to intentionally open the tube and equalize pressures. When this happens, humans hear a small popping sound, an event familiar to aircraft passengers, scuba divers, or drivers in mountainous regions. Devices assisting in pressure equalization include an ad hoc balloon applied to the nose, creating inflation by positive air pressure. Some people learn to voluntarily 'click' their ears, together or separately, performing a pressure-equalizing routine by opening their Eustachian tubes when pressure changes are experienced, as in ascending or descending in aircraft, mountain driving, elevator lifts and drops, etc. Some are even able to deliberately keep their Eustachian tubes open for a brief period, and even increase or decrease air pressure in the middle ear. The 'clicking' can actually be heard by putting one's ear to another's while performing the clicking sound. This voluntary control may be first discovered when yawning or swallowing, or by other means (above). Those who develop this ability may discover that it can be done deliberately without force even when there are no pressure issues involved. The Eustachian tube also drains mucus from the middle ear. Upper respiratory tract infections or allergies can cause the Eustachian tube, or the membranes surrounding its opening, to become swollen, trapping fluid, which serves as a growth medium for bacteria, causing ear infections. This swelling can be reduced through the use of decongestants such as pseudoephedrine, oxymetazoline, and phenylephrine. Ear infections are more common in children because the tube is horizontal and shorter, making bacterial entry easier, and because its smaller diameter makes the movement of fluid more difficult. In addition, children's developing immune systems and poor hygiene habits make them more prone to upper respiratory infections. Otitis media, or inflammation of the middle ear, commonly affects the Eustachian tube. Children under 7 are more susceptible to this condition, one theory being that the Eustachian tube is shorter and at more of a horizontal angle than in the adult ear.
Others argue that susceptibility in this age group is related to immunological factors and not Eustachian tube anatomy. Barotitis, a form of barotrauma, may occur when there is a substantial difference in air or water pressure between the outer and the inner ear -- for example, during a rapid ascent while scuba diving, or during sudden decompression of an aircraft at high altitude. Some people are born with a dysfunctional Eustachian tube that is much slimmer than usual. The cause may be genetic, but it has also been posited as a condition in which the patient did not fully recover from the effects of pressure on the middle ear during birth (retained birth compression). It is suggested that Eustachian tube dysfunction can result in a large amount of mucus accumulating in the middle ear, often impairing hearing to a degree. This condition is known as otitis media with effusion, and may result in the mucus becoming very thick and glue-like, a condition known as glue ear. A patulous Eustachian tube is a rare condition in which the Eustachian tube remains intermittently open, causing an echoing sound of the person's own heartbeat, breathing, and speech. This may be temporarily relieved by holding the head upside down. Smoking can also cause damage to the cilia that protect the Eustachian tube from mucus, which can result in the clogging of the tube and a buildup of bacteria in the ear, leading to a middle ear infection. Recurring and chronic cases of sinus infection can result in Eustachian tube dysfunction caused by excessive mucus production, which, in turn, obstructs the openings of the Eustachian tubes. In severe cases of childhood middle ear infection and Eustachian tube blockage, ventilation can be provided by a surgical puncturing of the eardrum to permit air equalization, known as myringotomy. The eardrum would normally heal naturally and close the hole, so a tiny plastic-rimmed grommet is inserted into the hole to hold it open. This is known as a tympanostomy tube. As a child grows, the tube is eventually naturally expelled by the body. Longer-lasting vent grommets with larger flanges have been researched, but these can lead to permanent perforation of the eardrum. Surgical implantation of permanent bypasses around the eardrum has also been studied, though these suffer from blockage issues, requiring frequent manual cleaning, and can also lead to inner ear infection due to the direct path from the external to the internal ear. Another approach to permitting middle ear drainage is the use of a prosthesis inserted into the Eustachian tube to assist in holding the tube open. The prosthesis does not necessarily hold the tube completely open all the time, but may instead lightly brace the tissue walls of a narrow tube to assist in venting. One such example, invented in 1978, is a silicone tube with a flange at the top, which keeps it from dislodging and sliding out into the throat. However, in a 1978 study it was not found to be effective for long-term use, as it would eventually become internally blocked with mucus or otitis media, or dislodged from its position. Debridement has also been studied as a solution to an inability to open the Eustachian tube. Hypertrophy (excessive growth) of the cells that produce mucus can make the tube hard to open, and the procedure to reduce the growth is known as "microdebrider Eustachian tuboplasty".
In the equids (horses) and some rodent - like species such as the desert hyrax, an evagination of the Eustachian tube is known as the guttural pouch and is divided into medial and lateral compartments by the stylohyoid bone of the hyoid apparatus. This is of great importance in equine medicine, as the pouches are prone to infection and, owing to their intimate relationship with the cranial nerves (VII, IX, X, XI) and the internal and external carotid arteries, various syndromes may arise depending on which structure is damaged. Epistaxis (nosebleed) is a very common presentation to veterinary surgeons, and it may be fatal unless a balloon catheter can be placed in time to suppress the bleeding. (Image captions: frontal section through left ear, upper half of section; view of the inner wall of the tympanum, enlarged; the right membrana tympani with the hammer and the chorda tympani, viewed from within, from behind, and from above.)
who won the year daughtry was on idol
American Idol (season 5) - Wikipedia The fifth season of the reality television singing competition American Idol began on January 17, 2006, and concluded on May 24, 2006. Randy Jackson, Paula Abdul and Simon Cowell returned to judge, and Ryan Seacrest returned to host. It is the most successful season to date ratings-wise, and resulted in 18 contestants (including all of the top 10 and a few semifinalists) getting record deals -- nine of them with major labels. It was the first season with a male winner (Taylor Hicks) and a female runner - up (Katharine McPhee), which happened again in seasons 9, 10, 11, 13 and 15. It was also the first season of the series to be aired in high definition. Auditions were held in seven cities in the summer and fall of 2005. An audition was originally planned for Memphis, Tennessee, but was canceled due to the Hurricane Katrina relief effort taking place in the city, and was replaced by Las Vegas, Nevada and Greensboro, North Carolina. Unlike season four, no guest judges were involved during the auditions. The season used the same rules as season four. One notable auditioner this season was Paula Goodspeed, a fervent fan of Paula Abdul, who auditioned in Austin. In 2008, Goodspeed made headlines when she committed suicide outside Abdul 's home. Abdul later claimed that she had objected beforehand to Goodspeed being at the audition because she knew Goodspeed and had been frightened by her past behavior, but the producers overrode her objection. The producers Ken Warwick and Nigel Lythgoe, however, denied being aware of her fears or that they would put her in danger. In Las Vegas, an auditioner, Tora Woloshin, gained a golden ticket to Hollywood but was disqualified, for unspecified reasons, just before she was due to go to Hollywood. She later appeared on the first season of X-Factor. The Hollywood semi-final rounds were held at the Orpheum Theatre in Los Angeles, California, with 175 contestants. The first round of the semi-finals consisted of a solo a cappella performance, with each contestant choosing one song out of twelve given to them two weeks in advance. Those who did not impress the judges were sent home the following day. After the singles round, the contestants were separated into four groups, with three groups going through (with 44 contestants chosen). At the Pasadena Civic Center, each contestant was individually taken via elevator and walked the infamous "mile '' to the judges ' station, where the verdict on whether they would be chosen was announced. Twenty were cut, and the final twenty - four (12 men and 12 women) were selected. The live show portion of the semifinals began on February 21, 2006, with the names announced on February 15, 2006. There were three live shows each week for the three weeks of the semifinals. There were no format changes from season four, which featured 12 male singers and 12 female singers, with two of each being eliminated each week. Taylor Hicks is from Birmingham, Alabama. He is gray - haired and performed "A Change Is Gonna Come '' by Sam Cooke at his original audition in Las Vegas. At the first audition, the judges were surprised by his appearance. He performed Bill Withers ' "Ai n't No Sunshine '' in the Hollywood round. He is one of the eight winners in American Idol history never to be in the bottom three. He won the competition on May 24.
At 29 years and 229 days at the time of the finale, he is the oldest winner in American Idol history. Katharine McPhee is from Los Angeles, California. Her mother is a vocal coach. She auditioned in San Francisco with Billie Holiday 's "God Bless the Child '', and Randy Jackson said her audition was the best he 'd heard yet that season. In the Hollywood round she performed Dionne Warwick 's "I 'll Never Love This Way Again '', "I Ca n't Help Myself '' for the group round, and "My Funny Valentine '' for the last solo. At the end of the first semifinal round, Simon Cowell said that he had heard four very good singers that evening and that McPhee was the best among them. McPhee was the runner - up on American Idol, as announced on the May 24 finale. Elliott Yamin was born in Los Angeles, California, and grew up in Richmond, Virginia. He started singing at the age of five and did not have any formal training. He auditioned for American Idol in Boston. In Hollywood week, he performed Rascal Flatts ' "Bless the Broken Road '' for the first solo and "The Shoop Shoop Song '' for the group round. After his first semifinal performance, Simon Cowell said that he was potentially the best male vocalist in American Idol history, reprising his praise in Top 6 week after Yamin 's "A Song for You '', calling it a vocal masterclass; also, in the second round of the semifinals, Randy Jackson gave Yamin a standing ovation after his rendition of "Moody 's Mood for Love ''. Yamin finished in third place in one of the closest outcomes in Idol history, in which less than 1 % separated the votes of all top three contestants. Chris Daughtry is a former car service worker from McLeansville, North Carolina. During the audition round, he was profiled as a "Rocker Dad. '' He originally auditioned in Denver with Joe Cocker 's version of "The Letter '' (originally by The Box Tops). During Hollywood week, he performed Samantha Sang 's "Emotion '' in the group round. He was eliminated at Top 4 in a surprise result. Bucky Covington is from Rockingham, North Carolina. He auditioned in Greensboro. He has an identical twin brother named Rocky. Lisa Tucker was 16 years old at the time of the show and was the youngest finalist of this season. She is from Anaheim, California, and she auditioned in Denver with Whitney Houston 's "One Moment in Time ''. Simon Cowell called her the "best 16 - year - old '' ever to audition on the show at the time of her original Denver audition. She was also a runner - up on Star Search, losing to Tiffany Evans. Kevin Covais was 16 years old at the time of the show and is from Levittown, New York. For his audition in Boston, he sang "You Raise Me Up ''. In the Hollywood round he performed Shai 's "If I Ever Fall in Love '' as his solo. Viewers gave him the nickname "Chicken Little ''. Melissa McGhee is from Tampa, Florida. She auditioned in Denver, Colorado, singing "Ca n't Fight the Moonlight '' by LeAnn Rimes. She had not sung on camera until her first week in the top 24. In a new feature this season, the show used a special song as a tribute to an eliminated contestant 's journey on the show, as opposed to the various melodic music compositions played in previous seasons. The song used for an eliminated contestant 's flashback tribute this year was "Bad Day '' by Daniel Powter. On the finale, Carrie Underwood sang "Do n't Forget to Remember Me '' solo and joined the 12 finalists for "Through the Rain ''.
Also, the finalists performed two medleys: one for the female finalists and one for the male finalists. Several special guests performed with one of the top five Idols: Al Jarreau (Paris Bennett), Live (Chris Daughtry), Meat Loaf (Katharine McPhee), Mary J. Blige (Elliott Yamin) and Toni Braxton (Taylor Hicks). Clay Aiken performed with lookalike auditioner Michael Sandecki, who resembled Aiken as he appeared at his original audition. Also, Prince performed without an Idol. Towards the end of the program, the finalists performed "That 's What Friends Are For '' with Dionne Warwick, as well as other songs in the Burt Bacharach canon, with Burt Bacharach playing the piano. Several auditioners from the first round returned to accept "Golden Idol '' awards and to sing. A parody of Brokeback Mountain (though there was no mention of homosexuality) called "Brokenote Mountain '', featuring a group of three failed auditioners (Layne Johnson, Michael Evans, and Matthew Buckstein), was replayed from the Hollywood round. The trio, "The Brokenote Cowboys '', then performed the Waylon Jennings and Willie Nelson song "Mammas Do n't Let Your Babies Grow Up to Be Cowboys ''. In a pre-taped segment, finalist Kellie Pickler ate lunch with Wolfgang Puck at his brasserie as a way of making fun of Kellie 's admitted lack of culinary savvy. Finally, just before the results were announced, Hicks and McPhee performed "(I 've Had) The Time of My Life ''. The chairman of TeleScope Inc., the company which manages the American Idol results, came out at the end of the show with the results card. 578 million votes were cast over the season, with 63.5 million votes in the finale, and Taylor Hicks was named the winner, the second American Idol winner from the city of Birmingham, Alabama (the first being Ruben Studdard), and the fourth finalist with close ties to the city. DialIdol is both the name of a computer program for Microsoft Windows and its associated website that began tracking contestants during season four and sprang to prominence at the start of season five. The program allows users to automatically vote for the American Idol contestants of their choice using their PC 's phone modem. The program then reports back to the main website, which keeps track of the results based on the percentage of calls for each contestant that result in a busy signal. Based on the data received, the website then predicts which contestants may be eliminated or may be in danger of being eliminated; a short sketch of this busy - signal heuristic appears after this article. As of May 25, 2006, its predictions for season five were 87 % accurate. This was the first season in which the free US public service website Zabasearch.com openly presented voting results (from the top 12 onward) that it claims are from Cingular and American Idol. The site has attracted controversy over the fact that its results change throughout the day until (and often through) the results show. American Idol was the top - rated show for the 2005 -- 06 TV season, with its Tuesday and Wednesday editions occupying the top two positions. The number of viewers for its Tuesday episodes averaged 31.17 million, and for the Wednesday episodes 30.16 million. It is still the most - watched of all seasons, with an overall average of 30.6 million viewers per episode. This was the first season in which a majority of the finalists received major label recording contracts after Idol.
Of them, Taylor Hicks, Katharine McPhee, Elliott Yamin, Chris Daughtry, and Kellie Pickler are distributed by Sony BMG Music Entertainment; Bucky Covington by Universal Music Group; Ace Young and Mandisa by EMI. One other contestant who did not even make the top 24 (Brooke Barrettsmith) was also picked up by Sony BMG, and Universal picked up Brianna Taylor, who likewise did not make the top 24. Two finalists have deals with independent labels -- Paris Bennett and Lisa Tucker. The remaining two finalists are unsigned -- Kevin Covais and Melissa McGhee. (Covais, however, has begun an acting career, and McGhee has taken part in charity events for Idol Gives Back.) Also, seven semi-finalists have deals and albums with independent labels -- Ayla Brown, Gedeon McKinney, Heather Cox, Patrick Hall, Will Makar, Stevie Scott and David Radford. In addition, at least one contestant who was cut before the semifinals, Bobby Bullard, has also been signed and recorded with a small label. Taylor Hicks ' first post-Idol single, "Do I Make You Proud '', debuted at number one and was certified gold. Hicks ' album, Taylor Hicks, has sold 703,000 copies. He later parted with Arista Records. His follow - up album, The Distance, was released March 10, 2009 on his own record label, Modern Whomp Records. The fifth - season contestant with the most commercial success is fourth - place finisher Chris Daughtry, now lead singer of the band Daughtry. Their eponymous debut album has sold over 5 million copies to date -- surpassing former winners Studdard 's and Fantasia 's respective two - album totals -- and produced two top - ten singles. The album, which spent two weeks at number one in the US, is also the fastest - selling debut rock album in Soundscan history. As of November 2008, runner - up Katharine McPhee 's debut album had sold 374,000 copies; she has two Top 40 Billboard hits. Also notable is sixth - place finisher Kellie Pickler, whose Small Town Girl reached number one on the Billboard Top Country Albums chart and was certified gold; to date it has sold over 815,000 copies. Third - place finisher Elliott Yamin 's eponymous debut album was certified gold and produced a platinum - selling single. Eighth - place finisher Bucky Covington 's self - titled debut album has sold over 400,000 copies and generated a top 20 and two top 10 hits on the Billboard Hot Country Singles chart. Ninth - place finisher Mandisa 's True Beauty album earned a Grammy Award nomination for Best Pop / Contemporary Gospel Album in 2007. Of the first ten seasons of American Idol, season five has the most finalists who have made it onto the Billboard charts. The compilation album for this season was performed by the top twelve finalists. In 2006, American Idol also became the most nominated unscripted show ever, receiving several nominations at the 2006 Emmy Awards for season five.
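The busy - signal approach described in the DialIdol paragraph above can be illustrated with a short sketch. The following Python fragment is purely illustrative -- the function names, sample data, and ranking rule are invented here and are not DialIdol 's actual code -- but it captures the basic idea: a contestant whose phone line is busy most often is presumed to be receiving the most votes, so the contestant with the lowest busy - signal percentage is flagged as most at risk of elimination.

def busy_percentage(outcomes):
    # Fraction of automated dial attempts that ended in a busy signal.
    if not outcomes:
        return 0.0
    return sum(1 for o in outcomes if o == "busy") / len(outcomes)

def predict_most_at_risk(call_logs):
    # Rank contestants by busy - signal rate; the lowest rate suggests the
    # fewest incoming votes, and therefore the likeliest elimination.
    rates = {name: busy_percentage(log) for name, log in call_logs.items()}
    return min(rates, key=rates.get), rates

# Invented sample data: outcomes of repeated dial attempts per contestant.
logs = {
    "Contestant A": ["busy"] * 80 + ["connected"] * 20,
    "Contestant B": ["busy"] * 55 + ["connected"] * 45,
    "Contestant C": ["busy"] * 70 + ["connected"] * 30,
}
at_risk, rates = predict_most_at_risk(logs)
print(at_risk, rates)  # Contestant B has the lowest busy rate

This mirrors the description only at a sketch level; a real predictor would also have to account for call volume, redial behavior, and regional phone - line differences, none of which are modeled here.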
when is the new episode of life is strange before the storm coming out
Life Is Strange: Before the Storm - Wikipedia Life Is Strange: Before the Storm is an episodic graphic adventure video game developed by Deck Nine and published by Square Enix. The three episodes were released for Microsoft Windows, PlayStation 4, and Xbox One in late 2017. It is the second entry in the Life Is Strange series, set as a prequel to the first game, focusing on sixteen - year - old Chloe Price and her relationship with schoolmate Rachel Amber. Gameplay mostly concerns the use of branching dialogues. Deck Nine began developing the game in 2016, using the Unity game engine. Ashly Burch from the original game did not voice Chloe Price in Before the Storm because of the SAG - AFTRA strike, but reprised her role in a bonus episode once the strike was resolved. British rock band Daughter wrote and performed the score. Life Is Strange: Before the Storm received mixed to generally favourable reviews; critics praised the characters, themes, and story, while criticising aspects like plot holes, the main relationship, and player agency near the end of the game. Life Is Strange: Before the Storm is a graphic adventure played from a third - person view. The player assumes control of sixteen - year - old Chloe Price, three years before Life Is Strange. Unlike its predecessor, the game does not include time travel. Instead, Before the Storm features the ability "Backtalk '', which the player can use to call upon Chloe to get out of certain precarious situations; "Backtalk '' may also make a situation worse. A dialogue tree is used when a conversation or commentary is prompted, and occasional decisions will temporarily or permanently change outcomes (a sketch of such a tree appears after this article). The environment can be interacted with and altered, including marking walls with graffiti. In her Oregon hometown of Arcadia Bay, sixteen - year - old Chloe Price sneaks into a house concert at an old mill. Conflict arises with two men inside, but she evades them when schoolmate Rachel Amber causes a distraction. The next day, Chloe and Rachel reunite at Blackwell Academy and decide to ditch class, stowing away on a cargo train and ending up at a lookout point. They people - watch through a viewfinder and see a man and woman kiss in the park, which upsets Rachel. They steal wine from local campers and take a walk to a scrapyard. Chloe confronts Rachel about her change in mood, but she refuses to answer. When Chloe meets Rachel later, Rachel discloses that the man she saw through the viewfinder was her father, James, cheating on her mother. Rachel destroys a family photo in a burning trash bin and, in a fit of rage, kicks the bin over, igniting a wildfire. The next day, Chloe and Rachel are reprimanded by Principal Wells for ditching school. Chloe hides out at the scrapyard, where she finds an old truck in need of repair. She then receives a call from local drug dealer Frank Bowers, who arranges a meeting to discuss settling her debt with him. Chloe agrees to repay him by helping him steal money from her classmate Drew, who owes Frank a large sum. However, Chloe learns that Drew is being violently extorted by another drug dealer, Damon Merrick, and she must decide whether to keep the stolen money or use it to pay off Damon and protect Drew. Later, when a student is unable to participate in the school 's theater production of The Tempest due to road closures caused by the wildfire, Chloe reluctantly takes on the role opposite Rachel. After the play, they decide to leave Arcadia Bay with the truck from the scrapyard, and return to Rachel 's house to pack.
There, following a confrontation, James reveals that the woman they saw him kissing was Rachel 's biological mother, Sera. Rachel is told that Sera is a drug addict, and that on the day of the kiss James had rejected Sera 's plea to reunite with Rachel, whom she had given up years before. Chloe vows to find Sera, against James ' wishes. Chloe contacts Frank, who agrees to meet her at the scrapyard. She repairs the truck there before Rachel arrives. They are ambushed by Frank and Damon, who stabs Rachel after he realises she is the district attorney 's daughter. Surviving the wound, Rachel recovers at the hospital. Chloe continues the search by investigating James ' office for clues about Sera, instead discovering that James has been in contact with Damon; Chloe uses James ' phone to convince Damon to disclose where Sera is located, and finds out that Damon has kidnapped her for ransom. She races to Damon to pay him off, but learns, when she reaches him, that James wanted him to kill Sera. Frank appears and fights Damon, after which Sera entreats Chloe never to tell Rachel about James ' actions. Back at the hospital, Chloe is faced with a choice: tell Rachel everything or protect her from the truth. Publisher Square Enix chose Deck Nine to develop the prequel to Life Is Strange after the developer 's proprietary StoryForge tools, made up of screenwriting software and a cinematic engine, had made an impression. Development began in 2016 with assistance from Square Enix London Studios, employing the Unity game engine. Rhianna DeVries, who initially did the game 's motion capture for Chloe Price, voiced the character, while the original voice actress Ashly Burch served as writing consultant. Burch did not reprise her role due to the SAG - AFTRA strike. With the strike over, Burch and Hannah Telle, who voiced Max Caulfield in Life Is Strange, returned for a bonus episode. The game went by different working titles during casting, according to Kylie Brown, who was cast as Rachel Amber (then codenamed Rebecca) in February 2017. The music was written and performed by the British indie folk band Daughter, and released as an official soundtrack by Glassnote Records on 1 September 2017. Instrumentation was employed to represent different sides of the lead character: piano for isolation, electric guitar for rebelliousness, and layered vocals for friendship. Daughter took the script and concept artwork as inspiration. The writers researched memoirs and psychology to understand Chloe 's grieving process. Prior to its official announcement, images had leaked online indicating that a prequel to Life Is Strange was in development. Square Enix revealed Life Is Strange: Before the Storm on 11 June during E3 2017, saying it would be released over three chapters starting on 31 August for Microsoft Windows, PlayStation 4, and Xbox One. The Deluxe Edition includes the bonus chapter "Farewell '' -- featuring Max Caulfield of the original game as a playable character -- three additional outfits, and Mixtape Mode, allowing players to customise playlists using the game 's soundtrack. The bonus episode will launch on 6 March 2018, the same day as the physical releases of the Limited Edition and Vinyl Edition. The Limited Edition contains an art book and the soundtrack on CD, while the Vinyl Edition includes the soundtrack on phonograph record and, if pre-ordered, figures of Chloe and Rachel; both editions have the content of the Deluxe Edition, but add episode 1 of Life Is Strange.
Following E3 2017, Life Is Strange: Before the Storm received one of GamesRadar 's Best of E3 awards and was nominated for Hardcore Gamer 's Adventure Game award. It was also nominated for "Best Simulation Game '' and "Best Family Game '' at the Gamescom 2017 Awards. The game was met with generally favourable reviews for the PC and "mixed or average '' reviews for PlayStation 4, according to Metacritic. Critics praised the characters, themes, and story, but criticised plot holes, the main relationship, and player agency near the end of the game. It was nominated in the Games for Impact category at The Game Awards 2017. At the 2017 Golden Joystick Awards, it was nominated for Best Audio and Best Performer for Kylie Brown. It took first place for "Best Adventure '' at the Global Game Awards 2017, and won the award for Best Soundtrack and Most Touching Moment (The Tempest school play with Chloe and Rachel) in Game Informer 's 2017 Adventure Game of the Year Awards, while it took the lead for Best Adventure Game in their Reader 's Choice Best of 2017 Awards. Eurogamer ranked the game 16th on their list of the Top 50 Games of 2017. It was also nominated for the "Matthew Crump Cultural Innovation Award '' at the upcoming 2018 SXSW Gaming Awards, and for "Game, Franchise Adventure '', "Song Collection '', and "Writing in a Drama '' at the upcoming 2018 NAVGTR Awards. Critics mostly praised Episode 1: Awake for the character development of Chloe Price and Rachel Amber. Jeremy Peeples of Hardcore Gamer found Chloe 's behaviour "endearing '' and noted that her personality was portrayed with multiple layers. Sam Loveridge at GamesRadar wrote that Rachel was the more authentic character because of her more "grounded '' dialogue. Despite disparaging Chloe for being "her same tiresomely combative self '' early on, Metro saw her fleshed out by enduring the loss of her father. Game Informer 's Kimberley Wallace thought the younger version of Chloe brought a "naivety and vulnerability '' worthy of sympathy. Conversely, the relationship between the leads was said to have formed "at unnatural speed. '' Peeples and Loveridge favoured the "Backtalk '' gameplay feature, while Metro and Wallace cared little for it. Reviewers found Episode 2: Brave New World either better than the first instalment or one of the series ' greatest. Metro praised how choices made a difference, with a "profound effect '' on character arc, while Ozzie Mejia of Shacknews relished Chloe 's "genuine growth '' contrasting with her "fiery spirit ''. Similarly, Game Informer 's Joe Juba appreciated the continued comprehension of the character, declaring this "its biggest strength ''. However, Juba 's biggest complaint echoed that of Wallace, in reference to the "forced '' manner in which Chloe and Rachel become friends. Brett Makedonski, writing for Destructoid, thought that the character exposition was done "to great effect. '' Most noted was The Tempest school play sequence, which Metro said was their favourite, Mejia called "one of the funniest '' in Before the Storm, and Juba observed as the culmination of all past choices. Metro criticised the script for being "uneven '' and disapproved of the voice acting. Mejia felt the ending could have been done away with, saying it was "a bit drawn out, '' though he was impressed with "Backtalk ''. Metro and Peeples agreed that Episode 3: Hell Is Empty was the most emotional of the season.
The writing was lauded as authentic and genuine, with Wallace noting that it showed new sides to minor characters; conversely, Makedonski criticised that an ancillary character with little development became a primary antagonist. Although Metro observed some inconsistent dialogue and voice acting, Peeples said the cast showed "incredible chemistry ''. Metro saw the relationship between Rachel and Chloe as "the least compelling '' aspect, Wallace thought their "tender moments '' were the best parts of the episode, and Makedonski said "their struggles, their mutual escapism, and their sacrifices '' provided more than enough investment. Reviewers generally approved of how the game concluded. Wallace declared it "gripping and satisfying '' and Metro called it "every bit as earth - shattering as the first game. ''
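The branching - dialogue system described in the gameplay section above lends itself to a small illustration. The Python sketch below is hypothetical -- the node structure, lines, and outcomes are invented, and this is not Deck Nine 's implementation -- but it shows how a "Backtalk '' exchange can be modelled as a dialogue tree in which each player reply leads to a new node and leaf nodes carry outcomes.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Node:
    # What the other character says at this point in the exchange.
    line: str
    # Player replies mapped to the node each reply leads to.
    choices: Dict[str, "Node"] = field(default_factory=dict)
    # Set only on leaf nodes: how the exchange resolves.
    outcome: Optional[str] = None

def run_exchange(node: Node, picks):
    # Walk the tree using a fixed list of player picks; a real game
    # would read input and often impose a time limit on each choice.
    for pick in picks:
        print(node.line)
        print("> " + pick)
        node = node.choices[pick]
    print(node.line)
    return node.outcome

# Invented exchange: a good retort defuses the situation, a bad one escalates it.
worse = Node("That 's it, you 're coming with me.", outcome="situation made worse")
better = Node("...Fine. Just go.", outcome="Chloe talks her way out")
root = Node("You 're not supposed to be back here.",
            {"Neither are you.": better, "Make me leave.": worse})
print(run_exchange(root, ["Neither are you."]))

The temporary versus permanent consequences the article mentions would, in a model like this, amount to persisting some outcomes into the state of later scenes rather than keeping them local to one exchange.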
the kingdom of god is within you and without you
The Kingdom of God Is Within You - Wikipedia The Kingdom of God Is Within You (Russian: Царство Божие внутри вас (Tsarstvo Bozhiye vnutri vas)) is a non-fiction book written by Leo Tolstoy. A philosophical treatise, the book was first published in Germany in 1894 after being banned in his home country of Russia. It is the culmination of thirty years of Tolstoy 's thinking, and lays out a new organization for society based on a literal Christian interpretation. The Kingdom of God Is Within You is a key text for Tolstoyan proponents of nonviolence, of nonviolent resistance, and of the Christian anarchist movement. The title of the book is taken from Luke 17: 21. In the book Tolstoy speaks of the principle of nonviolent resistance when confronted by violence, as taught by Jesus Christ. When Christ says to turn the other cheek, Tolstoy asserts that Christ means to abolish violence, even the defensive kind, and to give up revenge. Tolstoy rejects the interpretation of Roman and medieval scholars who attempted to limit its scope. "How can you kill people, when it is written in God 's commandment: ' Thou shalt not murder '? '' Tolstoy took the viewpoint that all governments who wage war are an affront to Christian principles. As the Russian Orthodox Church was -- at the time -- an organization merged with the Russian state and fully supporting the state 's policy, Tolstoy sought to separate its teachings from what he believed to be the true gospel of Christ, specifically the Sermon on the Mount. Tolstoy advocated nonviolence as a solution to nationalist woes and as a means for seeing the hypocrisy of the church. In reading Jesus ' words in the Gospels, Tolstoy notes that the modern church is a heretical creation: "Nowhere nor in anything, except in the assertion of the Church, can we find that God or Christ founded anything like what churchmen understand by the Church. '' Tolstoy presented excerpts from magazines and newspapers relating various personal experiences, and gave keen insight into the history of non-resistance from the very foundation of Christianity, as professed by a minority of believers. In particular, he confronts those who seek to maintain the status quo: "That this social order with its pauperism, famines, prisons, gallows, armies, and wars is necessary to society; that still greater disaster would ensue if this organization were destroyed; all this is said only by those who profit by this organization, while those who suffer from it -- and they are ten times as numerous -- think and say quite the contrary. '' Mohandas Gandhi wrote in his autobiography The Story of My Experiments with Truth (Part II, Chapter 15) that this book "overwhelmed '' him and "left an abiding impression. '' Gandhi listed Tolstoy 's book, as well as John Ruskin 's Unto This Last and the poet Shrimad Rajchandra (Raychandbhai), as the three most important modern influences in his life. Reading this book introduced Gandhi, then a young protester living in South Africa, to the thinking of the world - famous Tolstoy. In 1908 Tolstoy wrote, and Gandhi read, A Letter to a Hindu, which outlines the notion that only by using love as a weapon through passive resistance could the native Indian people overthrow the colonial British Empire. This idea ultimately came to fruition through Gandhi 's organization of nationwide nonviolent strikes and protests during the years 1918 -- 1947.
In 1909, Gandhi wrote to Tolstoy seeking advice and permission to republish A Letter to a Hindu in his native language, Gujarati. Tolstoy responded and the two continued a correspondence until Tolstoy 's death a year later in 1910. The letters concern practical and theological applications of nonviolence, as well as Gandhi 's wishes for Tolstoy 's health. Tolstoy 's last letter to Gandhi "was one of the last, if not the last, writings from his pen. ''
what is the werewolf's name in wizards of waverly place
List of Wizards of Waverly Place characters - Wikipedia The following is the list of characters of the Disney Channel original series Wizards of Waverly Place. The series centers on the fictional characters of the Russo family, which includes Alex (Selena Gomez), her older brother Justin (David Henrie) and their younger brother Max (Jake T. Austin). The three Russo siblings are wizards in training and live with their Italian - American father Jerry (David DeLuise), a former wizard, and their Mexican - American mother, Theresa (Maria Canals Barrera), who is a mortal. Alex 's best friend Harper (Jennifer Stone) also found out about the Russos ' wizard powers in season 2. The siblings have to keep their secret safe while living in the mortal world. When they become adults, the three siblings will have a wizard competition to decide who will become the family wizard of their generation and keep his or her wizard powers. Harper used to have a crush on Justin, but now is in love with Justin 's best friend, Zeke, who finds out about the Russos ' wizard powers in season 4. Alex Russo (Selena Gomez) and Justin Russo (David Henrie) are the only characters to appear in every episode of the series. Alexandra Margarita Russo (Selena Gomez) is the only female out of the three Russo siblings, and the middle child. She is sly, rebellious, outgoing, and usually underachieves when it comes to school. She often gets into trouble because of her constant schemes (usually involving magic). She is part Latina and part Italian. Even though she is infamously known for doing the worst in wizard studies and school, and relentlessly torments her older brother Justin, Alex becomes much more mature throughout the series. Although she 's somewhat lazy, Alex is smart, loyal, and caring. She goes through a long - term relationship with Mason Greyback, a purebred werewolf. She becomes the Russo family wizard at the end of the series. Usually, her mom calls her "m'ija '', which is Spanish for "my daughter ''. Justin Vincenzo Pepé Russo (David Henrie) is Alex 's and Max 's brother, and the oldest of the three Russo siblings. He is very smart, and is often considered a nerd. He is in numerous clubs and has learned over 5000 spells. He continues his wizard studies in a Monster Hunting course. For his knowledge of wizardry, he takes after his father Jerry, who originally won the Wizard Family Competition against Kelbo and Megan. Unlike Alex, he is an overachiever when it comes to school. He is fairly athletic, and very respectful to his parents and other adults. Justin is responsible, sensible, kindhearted and hardworking, but can be a little sarcastic to Alex and Max. He revealed to Alex that he is jealous of her because he feels she is blessed with magic skills that he ca n't live up to, a feeling reinforced when he refers to Alex as "daddy 's little princess '' (a unique, and often favoring, bond between a father and daughter). This is what drives him to be a better wizard. Alex consoles him by stating that he is her rock and foundation, and that she is actually jealous of his academic achievements, even though she openly admits to not caring about school. He has a long - term relationship with Juliet Van Heusen, a teenage vampire whose family runs the "Late Night Bite '' (local competition for the Waverly Sub Station), which prompted a feud between the Russos and the Van Heusens.
At first, their love is a forbidden romance, which Alex points out by referencing William Shakespeare 's Romeo and Juliet, but they eventually start dating openly. Justin reveals to Alex that Juliet is his true love, after he felt heartbroken about being unable to see her due to the feuding families. Justin is heartbroken again when Juliet is forced to break up with him after getting scratched by Mason the werewolf; as she explains, when that happens, a vampire 's appearance will revert to their true age. Later in the series, Alex tells the world that they are wizards, and her and Justin 's wizard levels move down; he then teaches a class of students who were kicked out of WizTech, and later falls in love with an angel named Rosie. At first, it seems that Justin has won the Wizard Competition after being the first to cross the finish line, and he is about to be granted his full powers when he says that he ca n't accept them because he did n't really win; Alex would have won if she had n't come back to help him, and he says that she deserves to be the Russo family wizard. Because of his honesty, Professor Crumbs allows both Justin and Alex to keep their powers, and Justin takes over as Headmaster of WizTech. Later in the series he reconciles with Juliet after she regains her youthful appearance. Maximilian Alonzo Ernesto "Max '' Russo (Jake T. Austin) is the youngest of the three Russo siblings. Alex 's and Justin 's magic is more advanced than Max 's (as shown by the wands). Although he is an underachiever in the beginning, he advances more during the series. At first, he does not seem to be very bright, but this is later revealed to be due to a lack of focus and a short attention span, as he becomes an A+ model student when their school introduces uniforms, claiming it helps him focus. Alex and Justin are embarrassed and annoyed by his lack of intelligence, but they do occasionally take advantage of it; for example, when they tried to lure a Genie back to her lamp, Alex suggested that after the Genie outsmarted them, they might be able to out - dumb her, and immediately asked Max what he 'd do to get her back to grant him a wish. The Genie eventually allows him a wish, to which he responded by describing his appearance, which prompted the Genie to unintentionally reveal that there is a reset button. He once revealed to Juliet that he uses "randomness '' to allow others to inadvertently reveal their true thoughts and feelings to him, though he later admitted that he still has no idea what is going on around him. He is very fond of skateboards. In the episode "Three Maxes and a Little Lady '', Justin and Alex fall behind in the Wizard Competition, and Max gets a wish, which he uses to wish for the Wizard Competition to begin that day. However, both Alex and Justin use magic to turn into Max and reverse the wish, to no avail. Max eventually decides to bump back the Wizard Competition; however, Alex and Justin cross spells and turn Max into Maxine, a girl version of him. In "Who Will Be the Family Wizard '', Max 's powers are taken away, but Jerry and Theresa decide to give him the Sub Station, much to his happiness. Personality - wise, Max takes after his Uncle Kelbo. Max is played by Jake T. Austin (and Bailee Madison as Maxine) in season four. Harper Rosanne Evans Finkle (Jennifer Stone) is best friends with Alex Russo, and used to have a crush on her brother, Justin Russo, but now is in a relationship with Zeke Beakerman, Justin 's nerdy best friend.
Harper is a typical teenager, being self - expressive, but a little insecure at times. Unlike Alex, she is a hard - working student and a fairly positive person, but still connects well with Alex. Jennifer Stone does not appear as Harper in "You Ca n't Always Get What You Carpet '' (which was actually the first episode produced, but the sixth episode broadcast -- the episode was aired out of production order). Harper is known for her outrageous fashion ensembles that she designs herself, most of which are "food - themed '', such as one outfit seen in the season one episode "Credit Check '' which resembled a cupcake. No one ever tells her that her fashion designs are (for the most part) ridiculous, but it is implied that everyone else thinks it. Not only does she create her own clothes, but she also makes her own jewelry, as seen in the episode "Art Museum Piece '', when she starts a booth selling her necklaces made of macaroni, glitter, and knobs from Alex 's room. Despite her unseemly fashion choices, she believes it is better to stand out as an individual, and is confident that she will excel at whatever she tries, no matter what people think; she also has a habit of laughing out loud when the atmosphere gets tense. She was oblivious to the fact that Alex, Max and Justin are wizards until the episode "Harper Knows '', when Alex finally told her because she felt guilty for not telling the truth to her best friend. This was also the first time Harper experienced the use of magic, when Alex gave her a charmed costume with super powers. She is often paranoid about Alex 's schemes, especially after she finds out about magic, but reluctantly gets involved anyway. Despite Alex 's controlling and manipulative behavior, Harper always sees the good side in Alex and is willing to be accepting of her due to their friendship, to the point where she was the only person who believed that Alex would n't participate in a plot that, if successful, would prevent young wizards from losing their powers in the Competition. She has known Alex since kindergarten and is always there when Alex needs support. In season three it is revealed that Harper is a cheerleader, which is the only secret Harper has ever kept from Alex, largely due to Alex 's typical disdain and mocking attitude towards cheerleaders. Harper 's parents are featured in "Wizards vs Finkles '' as traveling showpeople from Romania. They ask Harper to rejoin them for shows, but Harper refuses, happy with her life with the Russos, while Alex, fascinated with their lifestyle, accepts in Harper 's place. In the end the commitment and hard work drive Alex away, and Harper 's parents leave for Youngstown, Ohio, for a new gig, on good terms with Harper and the rest of the Russos. In "Meet the Werewolves '' it is revealed that Harper was born in Nebraska, backstage in a night club, most likely at one of her parents ' shows. In "Wizards vs. Vampires: Tasty Bites '', when Alex is fed up with her family 's new healthy living, she says, "I 'm so desperate, I 'm thinking about going to Harper 's house '', implying that Harper 's house is wild. In "Alex Does Good '', Harper blurts out, "It 's about time I got some appreciation, Mom! '', claiming that her mom does n't appreciate her. Harper moves in with Alex and the Russos in season three, most likely because of her home life (although the move is also motivated by her father 's recent transfer to Pittsburgh and her unwillingness to leave Alex).
In the episode "Future Harper '', it 's revealed that Harper becomes a famous author, who writes books based on the Russos ' wizard adventures, and writes under the name "H.J. Darling '' (parody of J.K. Rowling). Alex, Justin, and Max find this out when they go to confront the author of the books (seven books in the series Charmed and Dangerous) that mysteriously mimic their lives. As the older Harper (Rachel Dratch) tells them, she writes her books in the present day because in the future, wizards and magic in general have been outed, and books about magic are no longer so interesting. She does n't tell the Russo kids who it was that revealed the existence of wizards, other than it happens to be one of the three; the Alex and Justin suspect it to be Max, to which he is first appalled but then agrees (though in a later episode the Russos are questioned by the government and they trick Justin into revealing that wizards exist, though this turned out to be a false exposure); and that she came back in time with the aid of one of the most powerful wizards of all time. It is unknown if she ever marries someone with the last name Darling, or if she just uses the name H.J. Darling as a way to keep her identity safe in the present day. In the episode "Third Wheel '', after it is discovered Stevie is a wizard, Alex starts hanging out with her more, neglecting Harper. Later in the episode, Harper confronts Alex on how she has been feeling left out. This leads to a heated argument, where Harper reveals her insecurities about how Alex is always taking her for granted and yet she never holds it against her. During this argument, they end up destroying the Homecoming float, which Justin tricked Harper into helping him build by disguising himself as Alex. Mr. Laritate witnesses this and threatens to have Alex expelled. For the sake of their friendship, Harper takes all the blame. Mr. Laritate believes her, considering that Alex never tells the truth, and punishes Harper with one week of detention. Later, Alex somehow sneaks into the detention room and apologizes for what happened. Forgiving her, Harper explains that she only did it because even if Stevie is Alex 's new best friend, it 's not going to keep her from being a friend to Alex. Alex compassionately says that her relationship with Harper is much stronger than it is with Stevie, by saying that Stevie is her friend, but Harper is her sister. When Stevie later appears to recruit Alex to aid her in a plot to destroy the power transference technology and thus prevent any younger wizards from losing their powers in the competition, Harper continues to believe that Alex will come through and do the right thing even when Justin is convinced that Alex has gone evil. Alex warmly thanks Harper for her faith in her after her true agenda is revealed when she freezes Stevie and allows Stevie 's brother Warren to inherit his full powers. At times, Jerry and Theresa like to take credit for Harper being their daughter, as in the episode "Marathoner Helper ''. Justin then says, "C'mon, she 's not your daughter. '' And as Alex walks in eating a hot dog with cereal he says, "That 's your daughter. '' She usually shouts "See ya in PE '' and screams when she is scared or nervous. In "Helping Hand '', she screams while laughing stating she is scared and happy. 
"Crazy 10 - Minute Sale '' Theresa Magdalena Margarita Russo (née Larkin), played by Maria Canals Barrera, is the mother of Alex, Justin and Max Russo and co-owner of the Waverly Sub Station, with her husband Jerry; and is also the slightly more serious one of the couple. Theresa is Mexican American (as a Latina, Canals Barrera has Spanish ancestry through her Cuban parents). Theresa Russo is a typical mother. She 's fussy, caring and can be pretty embarrassing. Unlike her children, she is a mortal. She dislikes magic, probably because she was once possessed by a wizard 's love spell for twelve years. Theresa lives on Waverly Place in Manhattan with her husband (and former wizard) Jerry Russo (David DeLuise) and her children: middle daughter Alex Russo (Selena Gomez), eldest son Justin Russo (David Henrie), and youngest child Max Russo (Jake T. Austin). She owns a sandwich shop called the Waverly Sub Station with her husband in which her children work. She is overprotective of Justin and Max, but Max especially. She tries to relate with her kids but it does n't always work. Theresa stated that she is a "Proud Latina '' and tries to get her children to learn about her and their Mexican heritage. In season one 's "You Ca n't Always Get What You Carpet '', Theresa states that she once played guitar in an all - girl mariachi band -- though when Justin says "I did n't know you played the guitar '', Jerry says "she does n't '' -- implying that Theresa is not a very good guitar player. This is shown in season 3. At times, Theresa can be controlling of her kids and their lives; for example, in "Quinceañera '', she completely takes control of the plans for Alex 's Quinceañera fifteenth birthday party and turns it into the party that she wants, attempting to manipulate Alex into letting her. She refuses to listen to Alex 's objections all because she herself never got to have a Quinceañera of her own. This eventually gets so irritating to Alex that she casts a spell to switch bodies with her mother, partially so Theresa would be able to have the Quinceañera that she never got to have. Theresa is the reason why Jerry could not keep his wizard powers. Since wizards are not allowed to marry non-wizards, Jerry chose to give up his powers to his younger brother Kelbo in order to marry Theresa, as was revealed in season one 's "Alex in the Middle '' -- previously unbeknownst to Justin, Alex and Max. However, Jerry says he does not regret marrying her. It has been mentioned several times that Theresa does n't like magic, and that she feels left out due to the rest of her family all being wizards at some point. She also finds it difficult to live in a house full of wizards, especially when her children (and sometimes, Jerry) misuse magic in the house, most notably when the boys of the family want to play indoors and either directly or indirectly end up destroying a prized lamp she owns. An example of this is in season one 's "Art Museum Piece '', when a spell which causes Theresa to go through objects wears off -- and she accidentally walks into the front door and she exclaims "I hate living with wizards! ''. She has also been shown to have very little concern for major issues in the magical world, which do n't concern her. 
She can also be annoyed by the futility of magic, as is visible in "All About You - niverse '': when Alex breaks a magical mirror (thus trapping herself in a parallel universe), Theresa finds out that fixing a magical mirror is no different from fixing a normal mirror, and remarks "Magic is lame ''. Despite this, when Alex and Justin 's antics cause Max to fail the wand quiz in "My Tutor, Tutor '', Theresa lends a sympathetic ear to his problem (even supporting his plans to get revenge), and she even told Alex to use her magic to make their haunted house scarier in "Halloween ''. In "Positive Alex '', Theresa and Jerry realize that Alex had put a spell on herself to become positive, and they agree to do nothing against it, because they believe that there 's nothing wrong with being positive, though they quickly change their minds when the newly optimistic Alex turns out to be unbearably annoying. Though Theresa is normally a kind and caring mother, she has a prevalent selfish side. For example, in "Retest '', when Alex, Max, and Justin were upset over the possibility of losing their powers, Theresa relished the thought of having a normal family, going so far as to suggest turning the wizard lair into a den; although she has no intention of hurting them, Theresa often hints at this in a very unkind way. Similarly, in "Wizards vs. Vampires on Waverly Place '', she openly refused to let Justin date Juliet at first and freely stated she cared more about her business than her children 's happiness, which even Alex was appalled and disgusted by. Alex then cut into the argument, as Justin was losing it badly against their parents, and fixed everything up, ending with Justin able to date Juliet and their parents happy. In "Marathoner Helper '', Theresa was so ashamed of and upset with her family 's lack of physical fitness that she said none of her own children were members of her family, openly adding that Harper is her only daughter, since Harper had taken it on herself to be involved in healthy activities; later, she goes further, hurting Justin 's feelings by saying she was not proud of his academic success. Both in "Wizards vs. Vampires on Waverly Place 2 '' and in "Wizards of Waverly Place: The Movie '', she openly remarks that since she is the Russos ' mother, she does n't need to care about their opinions, and on more than one occasion she has arrogantly expressed the belief that she is immune to the Wizard Rules, even though she is n't (the Wizard Rules apply to all mortal and supernatural beings who are aware of the Wizard World). In the episode "Alex 's Logo '', she was referred to as popular in school, though it turns out she was the first girl admitted to a formerly all - boys school. In "Uncle Ernesto '', Alex encourages her to invite her brother to her birthday party; however, it goes wrong and she ends up with a crashed birthday party. Geraldo Pepé "Jerry '' Russo is the father of Alex, Justin and Max Russo and co-owner of the Waverly Sub Station with his wife Theresa (whom he accidentally insults on occasion). He is also a former wizard, who chose to give up his powers to marry Theresa, a mortal, due to a rule forbidding wizards to marry mortals. Jerry is Italian American (DeLuise also has Italian ancestry). Jerry Russo is the average father. He is stern, protective, and is annoyed by Alex and Max constantly, and even by Justin occasionally.
Unlike his children, he does not have powers any more; nonetheless, his children inherited their powers from him, with Max having a fascination with the "magic '' normal humans can do, such as microwave popcorn, Alex being a natural at magic yet using it recklessly, and Justin continually trying to improve his magic through knowledge. Jerry lives on Waverly Place, a street in downtown Manhattan, with his mortal wife Theresa Russo (Maria Canals Barrera) and his children: middle daughter Alex Russo (Selena Gomez), eldest son Justin Russo (David Henrie), and youngest child Max Russo (Jake T. Austin). He owns a sandwich shop called the Waverly Sub Station with Theresa, where his children work and where, in the season one episode "New Employee '', so did Alex 's best friend Harper Finkle (Jennifer Stone), although the girls only did it so they could be together more often. It is mentioned in the season two episode "Make It Happen '' that Jerry 's original career plan was to be a bull rider. His most popular catchphrase is "But - he - you - she - ALEX! '' Jerry is the typical over-protective father towards his daughter. This was first shown in season one 's "First Kiss '', when Alex claimed to have had her first kiss. He said "No! She 's my little girl! What 's his name? I 'll make him cry! '' (It is almost immediately revealed that Alex lied about having had her first kiss. When Justin teased her for it, she kissed the first random boy she deemed good enough at school the next day, a boy named Matt, catching him off guard in her blunt, surprising way -- she slammed his locker shut while he was using it, then pulled him by the shirt to direct his face towards hers -- and when he finally recovered, he ran after Alex, calling out her name, presumably to ask her on a date.) His over-protectiveness continued in "You Ca n't Always Get What You Carpet '' from that same season, when he did n't want to teach her how to ride the carpet because he was afraid she would grow up and leave home. He also was seen putting babyish wallpaper on her walls. When Max is accidentally transformed into a little girl in "Daddy 's Little Girl '', Jerry calls the new Max "Maxine '' and dotes on her, which does not go unnoticed by a jealous Alex, who tries to get her father 's attention to no avail. However, when he becomes aware of Alex 's jealousy and sadness, Jerry apologizes to her, explaining that he only did so because Maxine reminds him of Alex when she was little, and reassuring her that she will always be his little girl. He normally is n't okay with her dating: in the first - season episode "Potion Commotion '', when Alex said "That 's my new boyfriend! '' about her crush Brad Sherwood (Shane Lyons), he said "I 'll pick your new boyfriend! '', though later, when Brad complimented Jerry, he said "I like your new boyfriend! '' When Alex began dating Dean (Daniel Samonas), he liked him at first, until Alex revealed in season two 's "Alex 's Brother, Maximan '' that Dean wanted to kiss her. Jerry said "I never cared for that boy! '', when he had just said the other day that he liked Dean. When Alex pointed that out, he said "That 's before I knew he wanted to kiss you! He was using me to get to you! And I fell for it! '' In "Alex Charms a Boy '', Jerry is no longer overprotective: when he saw Alex and Mason kiss, he smiled instead of freaking out, because he knew it was an honest and true relationship.
He can be somewhat of a cheapskate; for example, in "Taxi Dance '', he is unable to remember the number of the hospital room where Justin was born even though he remembers his own lottery ticket number from that same day, and he is so cheap that his antics resulted in Theresa giving birth to Alex in a taxicab instead of in a hospital. Notably, before this, Theresa told the kids that Jerry had tried to make her walk to the hospital while she was in labor, to which Jerry replied, "I offered to push you in a shopping cart! ''. More notably, in "Western Show '', when Max 's increased intelligence and maturity lead him to apply to various colleges, Jerry, instead of showing happiness at his son 's success, is upset at the possibility of having to pay for his education. Additionally, in "Quinceañera '', he is reduced to tears when Alex offers to tell him how much her quinceañera cost. Jerry is proud of his magical ancestry and teaches his children about the proper uses of magic in "Wizard Training Class '' on Tuesdays and Thursdays, in the lair magically disguised as a food locker in the restaurant. However, his children often disobey his magic rules (most often Alex), and he has to punish them. He wants his kids to be great wizards, but also wants them to be able to live a life without using their powers, since they may not have them in the future, as only one of them will get to keep his / her powers as an adult. Jerry has a younger brother named Kelbo (Jeff Garlin), seen in the season one episode "Alex in the Middle '', who is the wizard in Jerry 's family because Jerry had to give him his powers in order to marry Theresa. Jerry 's children consider Kelbo to be more fun than Jerry, and Alex even decides to have Kelbo be her magic teacher. Kelbo is revealed to be irresponsible with magic (almost as much as Alex is sometimes), as is shown when he fails to stop a package of sea chimps that spouts a huge gush of water, eventually flooding the wizard lair and turning Kelbo and Alex into sea chimps. Jerry ends up being the one who saves them both, and he reveals a secret that he has kept from Alex, Max and Justin: that he gave up his powers in order to marry Theresa, due to a rule forbidding wizards and mortals from marrying, thus relinquishing his powers to Kelbo in order to marry her. In "Retest '' it is revealed that Jerry and Kelbo have a sister, Megan. Megan holds a grudge against Jerry for giving Kelbo his powers instead of her. She lives in an art studio in Paris where she paints pictures. She is similar to Alex because, like Alex, she loves art and hates hard work, though, unlike Alex, she is unwilling or unable to apologize for her actions. It is most likely that their parents are still alive, because of references from Alex and Max, such as in "Future Harper '': when Alex is telling Harper about a wizard adventure involving her and her brothers, Harper already knows the ending and tells Alex she already told her the story, and Alex replies "Oh my gosh, I 'm starting to sound like Grandma ''. Jerry is also a fan of the New York Mets and New York Jets. He and Theresa accidentally made fools of themselves in front of Max 's date 's parents because Max got confused between the New York Mets and the New York Metropolitan Opera (The Met) while trying to find a common interest between them and his date 's parents; because they kept forgetting about the intermission, they usually believed the stories to "end abruptly in the middle ''.
In several episodes of the series (more so in season one than in other seasons), Jerry often makes a remark that accidentally insults Theresa, and quickly tries to correct his statement. One example is in season one 's "Movies '', in which Theresa tells Alex that no matter how old she gets, she is not too old to spend time with her family, and Jerry responds, "Exactly, just look at your mother ''. Alex then points out that Jerry accidentally called Theresa "old '', and he attempts and fails to change the subject. In Wizards of Waverly Place: The Movie, Jerry gives Justin the family wand and the spell book, which Alex steals, accidentally wishing that her parents had never met or fallen in love. The situation is resolved -- ironically by Alex herself, the most reckless of his three children, who also surprisingly won the premature family wizard competition against Justin, the favorite of the three to win it. Alex then gives up her powers so the siblings can compete for them again when they are old enough. Jerry has made more than one mention of past friends of his youth, including "Ponyboy '' and John Bender, both names of rebellious teen characters in classic 80s pop - culture films, The Outsiders and The Breakfast Club, respectively. Aside from the names, no other specific connections or further implications seem to have been made connecting Jerry 's youth and the events in these stories. It is revealed in "Wizard for a Day '' that he does not need his magic powers anymore: when Alex gave him Merlin 's hat for his birthday, which grants the wearer magic powers for 24 hours, all he wished for was a sandwich, which he could just as easily have gone downstairs to make himself in the Sub Station. Feeling underwhelmed, Alex used the hat to convert the Sub Station into a milkshake bar from Jerry 's youth. After seeing this, he reminisced about it for a few moments before asking Alex to change it back, though he changed his mind when he saw how many customers it attracted. He later told Alex that he does n't need big exciting gifts, as he is satisfied enough with small and simple ones (referring to Justin 's gift, a magic pencil sharpener made from a soup can). The following is a list of recurring characters in the Disney Channel sitcom Wizards of Waverly Place. Ezekiel ' Zeke ' Beakerman (Dan Benson) is Justin 's best friend. Zeke is in the same grade as Justin. He attends Tribeca Prep and is in advanced chemistry along with Justin. He is also part of an Alien Language League; he and Justin often "speak alien '' to each other. Zeke first appeared in "Movies '', where, oddly, he was known as Zack Rosenblack; the name was later changed to Zeke. He is very impulsive and easygoing. He and Justin are also on the same Quiz Bowl team in "Smarty Pants ''. Zeke and Justin have tried out for several sports teams, but have apparently failed at all of them. In "Fashion Week '', they somehow succeed at talking to models. Zeke got the role of Peter Pan in the school play in "Fairy Tale ''. In the episode "Wizard For a Day '', he is upset when the real aliens do not understand his "alien speak ''. Zeke and Harper go to "Zombie Prom '' together. He is also on the cheer squad, as revealed in "Positive Alex ''. He runs against Justin in the campaign for Student Body President in "Detention Election '', but later votes for Justin. He continues to assist Justin as student body president. Zeke continues his relationship with Harper in "Alex Russo, Matchmaker? ''
'' In "Wizards Unleashed '', Zeke and Harper take care of the sub station and share their first kiss. In "Alex Russo, Matchmaker '', Zeke is revealed to have hydrophobia and in season 4 he finds out the Russos are wizards. He assists the Russos when possible, though his assistance usually turns more troublesome than helpful. He is present during the Russo siblings ' competition and ceremony at the series finale. Mr. Herschel Laritate (Bill Chott) is the principal of Tribeca Prep High School, which is mentioned when he calls Alex Russo into the principal 's office in "Alex Does Good ''. Humorously, it is shown in the same episode that Alex has been sent to the principal 's office way too often (implying Alex is a delinquent student) to the point that the two have somewhat of regular routines down, such as serving drink and snacks to each other and complementing the office 's new mural of horses. His last name, Laritate, is a pun of Larry Tate from Bewitched. He loves anything related to the Old West, as seen in his daily life, such as wears cowboy hat and bolo tie, uses terms from that era when speaking, and even decorates his office in that theme. When he was relieved from his position as principal in "Western Show '', he choose to work as part - time employee in ' Wild Billy 's Western Roundup '. He teaches History class, Marriage and Family class, and taught the Art class once as temporary teacher. Mr. Laritate also acts as the adviser of several activities at the school including the World School Summit at the U.N., Happy Helpers Club, Quiz Bowl, sport activities, and school plays. His alma mater is Clementine College as mentioned in "Fairy Tale '', but later on he confessed that he does not have a college degree in "Franken - Girl ''. Nevertheless, he claimed to be able to provide a strong recommendation letter for Justin Russo to the said college. Mr. Laritate considers Justin as a model student. On the contrary, he has an on - and - off relationship with Justin 's sister, Alex, in that he is often disappointed in Alex 's lack of work ethic and unwillingness to do well in school. In season two 's "Do n't Rain on Justin 's Parade '', Mr. Laritate calls Alex an evil genius, much to Alex 's pleasure. However, it is evident that he cares about her through his many efforts to reach out to her. In "Alex 's Logo '', he is basically Alex 's only friend because everybody became mad at Alex after she said what she really thought of them while giving a speech under truth spell. After Alex complains about everybody still being mad at her, even though it has been a week since the speech, Mr. Laritate then tries to help Alex by doing more embarrassing acts in the hope that everybody will turn their attention to him and forget what Alex has done. Unfortunately, he also incidentally embarrasses Alex on the microphone while doing so, resulting other students calling her the principal 's BFF (Best Friend Forever). He gives all the students that then threw T - shirts at him detention. He also gave Alex, who did not throw a T - shirt, detention, in order to justify his "student to principal '' relationship. However, she smiles at Mr. Laritate 's and says "thank you '', because she knows its just his another attempt to defuse tension between her and other students. It was also revealed in this episode that Mr. Laritate has somewhat a difficult relationship with his mother. In season 4, Alex crashes Mr. 
In season 4, Alex crashes Mr. Laritate 's car during her driving lesson, and he sees how grown up she has become when she confesses the truth to him. Mr. Laritate personally hands high school diplomas to Alex and Harper Finkle during a makeshift graduation ceremony for the Tribeca Prep class of 2011 in "Wizards vs Asteroids ''. He appears for the last time in "Get Along Little Zombie '', in which he is temporarily turned into a zombie after being bitten by one during his visit to Alex and Harper 's apartment. Alex and Mason Greyback turn him back into a human using ' Zombie bites treatment ' lotion. It is shown at the end of the episode that he is, apparently, willing to keep a close relationship with the Russo family. Mason Greyback (Gregg Sulkin; seasons 3 - 4) is first mentioned in the season 2 episode "Future Harper '', when Future Harper tells Alex that she seems irritable and asks if she has broken up with Mason yet, implying that Alex will date a boy named Mason in the future. Mason is first fully introduced as a student in Alex 's art class, an English boy who has a crush on Alex. They later begin to date. Mason loves to paint. He gives Alex a necklace that lights up when the person wearing it is in love with the person who put it on them. It is revealed that he is in fact a werewolf, but he does n't turn Alex into a werewolf when they kiss, because he is a purebred and only mutts turn people into werewolves when they kiss, referencing Justin 's werewolf Wizface date from the season 2 episode "Beware Wolf ''. He helps in the search for Juliet after picking up her scent from her dental floss. It is revealed that he and Juliet used to date (which also reveals that Mason does n't age), and he admits impulsively that he never stopped loving her, which breaks Alex 's heart. She throws away the magic necklace he gave her. He later tells Max that, like dogs, werewolves are incredibly loyal and impulsive, which explains why he could n't contain himself when he saw Juliet, though his feelings for Alex are still present. He tries to prove his love for Alex by going back to the mummy 's tomb to find the necklace. Justin, Max, and Juliet come to rescue Alex, even though Alex volunteered to go and was n't in any real danger. Max likes Mason, but Justin hates him for breaking Alex 's heart. When Justin tries to make Alex forcibly leave the tomb, Mason, being fiercely protective, turns into a werewolf and starts fighting Justin and Juliet. He is attacking Justin when Juliet bites him, causing him to turn into a wolf permanently. Before his transformation, Alex puts the necklace on him and it lights up. Alex now knows that he truly loves her, but he leaves because, as a wolf, he has no control. He leaves the mummy 's tomb as he transforms into a wolf. In "Wizards Unleashed '', it is discovered that Mason has been captured by wizards and is being used in TV advertisements. Upon discovering this, Alex manages to retrieve him and uses magic to revert the transformation, but only succeeds in leaving him with pointy ears, claws, and masses of fur. Though Alex initially has trouble accepting Mason in this new state, she eventually gets over it, and Justin manages to fully restore Mason 's humanity by playing a magical jaw harp at the end of the episode. In "Wizards Exposed '', the wizards get caught, and Mason is captured while "admiring his hat ''.
'' In "Alex Gives Up, '' they are forced to break up due to a rule that says that werewolves and non-wizards can not date because the werewolf always ends up eating the human due to their anger management issues. At first, Chancellor Rootie Tootie Tootie appealed the rule, but then Alex, jealous of Lisa Cucuy, who was hitting on Mason, and makes Mason extremely angry by embarrassing him in front of her brother, her parents, and the Cucuy 's. Mason then transforms into his half - werewolf form and Chancellor Tootie Tootie, seeing he is incapable of controlling his anger, takes back his appeal, therefore forcing Alex and Mason to break up. In "Journey to the Center of Mason '', Mason eats Dean Moriarty (Alex 's ex-boyfriend; they broke up when Dean had to move to Ohio) because he is jealous that Alex is spending time with Dean and not him. Alex, Justin, and Max go inside of Mason. While in Mason, Alex, Justin, and Max go into his brain. Alex sees that all Mason does is think about her. When Alex, Justin, and Max come out of him with Dean, Dean leaves because Alex does n't want him back. After he leaves, Alex and Mason get back together because Alex decided to rejoin the Wizard Competition so she and Mason could be together. In the episode, Dean only refers to Mason as "London Bridges ''. Mason has the deepest love for Alex and has nicknamed her ' Brown Eyes ', ' Little Meatball ', and ' Love '. In "Wizard of the Year '', Alex breaks up with him for the third time because of his mistrust of her and a monster tamer named Chase. In the episodes of "Wizards of Apartment 13B '', Alex and Harper move into an apartment on the thirteenth floor where coincidentally Mason is living at. Mason stops at nothing to win Alex back; using Harper 's training wand to open Alex 's heart to him. Though this turns out for the worse as Alex begins to flirt with an ugly ogre and Felix. In "Ghost Roommate '', Mason goes on a date with Lucy, a ghost who is living with Alex and Harper. Alex is jealous that Mason and Lucy went on a date and proceed to find Lucy 's long lost true love. Alex is then stuck on the Bermuda Triangle, where magic ca n't be used. Mason comes to her rescue using Justin 's hideous Bermuda shorts. In "Get Along, Little Zombie '', Mason is still determined to ask Alex out on a date and even breaking the elevator to spend some time with Alex; exposing the magical thirteenth floor to non-magical residents. Soon, Mr. Laritate discovers the thirteenth floor and gets bitten by a zombie; subsequently gets turned into a zombie. Mason and Alex go on a wild goose chase trying to find Mr. Laritate and end up in a square dance. Alex then realizes that she wants Mason back while Mason was trying to say that maybe he and Alex should just be friends. At the end of the episode, it is shown that Mason and Alex do get back together. In "Who Will be the Family Wizard '', Mason comes to the wizard competition to cheer on Alex. He and Juliet meet once again and tell each other no hard feelings from the events that took place in "Wizards vs. Werewolves ''. Also when Alex loses the competition, he and Alex look deeply saddened due to her losing and ponder breaking up. However, Justin realizes how much Alex needs to win to be happy, and so reveals she would have won if she had n't gone back for him and he did n't really win like everyone thought he did. Alex becomes the Russo family wizard and is able to stay with Mason. In "The Wizards Return: Alex vs. 
Mason breaks Alex out of jail before she can be exiled, and he throws Dominic off the Leaning Tower of Pisa. Juliet van Heusen (Bridgit Mendler; seasons 2 - 4) is a teenage vampire who is Justin Russo 's main love interest. When she was born, her parents let her have a soul so she could socialize in the human world. This also makes her more compassionate than most vampires, who are usually heartless and cunning. She is characterized as a beautiful girl with blonde hair and rather pale skin. She wears vanilla - scented perfume to hide her true vampire scent of death and decay in an attempt to avoid detection by monster hunters. She is also health - conscious and keeps to a certain diet. Juliet 's development as a vampire is very slow, as her fangs have only just started to grow, at a late age for a vampire. She is also shown to not want to drink blood, and even tries to persuade her parents to abstain from drinking it. Highly unusual for a vampire, this trait is used for comic effect, especially when she empties a berliner with her fangs. At first, Juliet and Justin were not allowed to date by their respective parents due to their restaurants ' rivalry. However, the two fight for their love and soon are allowed to date. In "Night at the Lazerama '', Juliet accompanies Justin on his monster hunt for a mummy. Unfortunately, they become trapped in a glass display near a glass roof, which would cause Juliet to burn up in the morning. Since the mummy is the only one able to take Juliet out of the glass display, Justin decides that the only way to save Juliet 's life is to turn her into the mummy 's slave. Nevertheless, he promises to save her one day. The mummy then takes Juliet to Transylvania, where she is forcibly imprisoned in suspended animation behind hieroglyphics, though she never gives up hope that Justin will save her. In "Wizards vs. Werewolves '', Justin indeed saves her, and it is then revealed that she once dated Mason Greyback, Alex 's werewolf boyfriend, 300 years ago. This fact makes Justin rather jealous of Mason, leading to a fight between the two. Juliet intervenes when Justin and Mason fight; Mason scratches Juliet and, in turn, Juliet bites Mason, resulting in Mason turning into a wolf and Juliet being stripped of her vampire powers. This causes her to rapidly age into an old woman, since she is truly 2,193 years old. Because of this, Justin and Juliet are forced to end their relationship, both of them heartbroken. In "Moving On '', Juliet appears again, happy to witness that Justin has apparently forgotten her and moved on. However, Justin and Juliet reunite towards the end of season 4, since neither of them has actually stopped loving the other. Juliet gets her vampire powers and youth back when Gorog (the leader of the dark angels) gives them to her in an effort to tempt Justin to join the Dark Side. However, the Russos defeat Gorog, releasing Juliet from his influence. Juliet comes to support Justin at the Russo family 's wizard competition, where she and Mason meet and both acknowledge that there are no hard feelings from the events that took place in "Wizards vs. Werewolves ''. When Justin takes Professor Crumbs ' place as headmaster of WizTech, Juliet says she 's even happier about that. Dean Moriarty (Daniel Samonas; seasons 2 and 4) is Alex 's boyfriend in the second season. He makes temporary tattoos in the boys ' bathroom and is interested in cars.
In episode "Racing '' he charmed Alex 's parents with his car skills, though he revealed to Alex that he only did it to make a good first impression by being polite. It is also revealed in that episode, that he was raised by his Uncle, as he exclaimed proudly "my Uncle raised me with manners ''. Harper and Justin are n't too fond of him and Jerry seems the same way later on, though this is partly due to Jerry being an overprotective father after learning that he is dating Alex. He can never remember Harper 's name. In the episode "Alex 's Brother Maximan '' it was shown he likes roller skating and playing with the claw game and winning stuffed animals. The only thing Alex does n't like about him is that he 's horrible at showing his feelings, as shown in "Saving Wiz Tech Part 1 '', but it changed in "Saving Wiz Tech Part 2 ''. He does not appear to be academically inclined, as Alex noticed a false good grade on his wooden assignment / gift for her (it was a D changed to a B), and he also stated that he does not "test well '', meaning he does n't see himself as book - smart. He moved away sometime after that, so Alex visited him in his dreams in "Wizards vs Vampires: Dream Date ''; she then saw how they were growing apart, so they broke up. But he returned in Season 4 "Journey to the Center of Mason '' because he wanted to get back together, though she denied it because she likes Mason, and Mason eats him out of jealousy. Dean is later rescued by Alex, Justin, and Max, and learns that Alex has chosen Mason and agrees to back off. Professor Crumbs (Ian Abercrombie) is the headmaster of WizTech (a reference to Albus Dumbledore). In "Wizard School '', he is often seen holding a muffin. (The muffin 's crumbs fall into his beard, hence the name "Crumbs ''.) Professor Crumbs is never seen without his incredibly long beard, even when he was turned into a guinea pig in the episode "Report Card ''. The only episode where he has been seen without his beard is in part two of season two 's "Saving WizTech '' when Ronald Longcape, Jr. (Chad Duell) took the beard from him, revealing that it was a fake beard all this time. After he was defeated, Crumbs took his beard back. Crumbs is shown how to socialize by Max in "Saving WizTech '' and has done some childish things, such as spitting over the edge of the Tower of Evil. In season two 's "Saving WizTech '', he states that he is 850 years old. He speaks with a British accent, which Alex sometimes makes fun of. Another thing that Alex makes fun of is the fact that he has a smelly beard revealed in Wizards Exposed. In the season 3 finale, he appears to be captured by a government agency with the rest of the wizard elite. However, the season 4 premiere reveals it was all an illusion Crumbs created to "test '' Justin and Alex. As each broke it by revealing to government agents and reporters (all also illusions), Crumbs forces them both to start at level 1 of their wizard lessons; however, after seeing how remarkably unintelligent Max was, he regretted his choice. When Justin and Max change him back into a kid, in their attempt to trick him into revealing how to undo a mutant spell they cast on Max that turned him into a little girl, it 's revealed that everyone in Crumbs ' family has a beard even as a kid. In the four - part episode revolving around Floor 13B, Professor Crumbs was surprised about a wizard floor that he did not know of and ended up discovering that Gorog was behind it. 
Professor Crumbs was captured by Gorog alongside the other inhabitants and was later rescued by the Russos upon Gorog 's defeat. At the end of the series finale, Crumbs announces that Alex is the family wizard and that he is retiring, and he makes Justin Russo (David Henrie) the new headmaster of WizTech, thus also granting Justin his full magic powers. Hugh Normous (Josh Sussman; seasons 1 - 2 and 4) is Alex 's WizTech friend, who first appeared in "Wizard School ''. His name is a pun on the word "enormous '', which is ironic considering that he is n't a real giant, as pointed out by Alex when he introduced himself and she responded, "Not really ''. At first, he believed himself to be a giant -- since his parents are giants -- merely the runt of the litter, so he carries around miniature everyday objects to boost his self - esteem; humorously, he also wears underpants and glasses that are too small for him. He later appeared in the season 2 episode "Saving Wiz Tech '', where he introduced Alex to his friend Ronald Longcape, Jr. In "Harper Knows '' and "Hugh 's Not Normous '', he discovers that he is adopted and that he is not a real giant. He eventually meets his birth parents and goes to live with them, but he does n't resent his adoptive parents. Hugh is a giant only in the sense that, when he met his birth parents, he was markedly taller than both of them, with both commenting on his larger size. Presumably, after discovering this, he stopped his habit of carrying miniature objects and started wearing regular - sized clothes. He does not appear in season 3 or the movie. He returned in "Ghost Roommate '', still thinking he is a giant. Kelbo Russo (Jeff Garlin; seasons 1 - 3) is the uncle of Alex, Justin and Max, brother of Jerry and Megan, and brother - in - law of Theresa. Unlike Jerry, he is very fun and carefree. Kelbo is a full wizard (Jerry had originally won the competition, but gave up his powers to marry a mortal) who often uses his powers very childishly and often seems just as irresponsible with them as Alex is with her own. As a recurring gag, in the post - credits scene of every episode he appears in, he makes a prank call to someone. He first appears in "Alex in the Middle '', where he temporarily becomes Alex 's wizard tutor. He reappears in "Retest '', in which he, Jerry and their sister Megan are asked to re-do their wizard competition; when Megan refuses, Kelbo hires a magical fish - lawyer to help them, but after the lawyer informs them that there is nothing he can do, Kelbo decides to move to Atlantis for good. He appears once again in "Dude Looks Like Shakira '', where he reveals that he is Shakira, adding that what he has been doing is against the wizard rules and that he can no longer control his transformations. The Russos eventually cure him of his illness (though the end of Shakira 's career is n't mentioned afterwards). Despite his immaturity, naivety and clueless behavior, Kelbo proves to be a very loving and caring uncle, as is visible in his second appearance; unlike Jerry, he has no hard feelings toward Megan and, in fact, was genuinely happy to see her again. Also in "Dude Looks Like Shakira '', Kelbo, in a rare moment of maturity, attempts to confess his crime to the wizard council, even knowing the consequences, in order to free his family from legal issues. His prank calls include "Is your refrigerator running? '' (to Theresa in "Alex in the Middle ''), "Is your refrigerator running? '' (to Megan in "Retest ''), and "This is the wizards council.
We 'll be over to break your hands for breaking wizard rules '' (to Justin in "Dude Looks Like Shakira ''). It is also shown that he really hates the taste of cinnamon. He did not appear in season 4. Monotone Woman (Amanda Tepe; seasons 1 - 2) played numerous characters during the show 's first season, as well as making one guest appearance in season two. She appears as a different incidental character each time, always speaking in a monotone. Jobs she has held include department store manager, waitress, frozen yogurt store manager, hotel maître d', dog show security guard, information desk lady at Volcano Land (her only job in the magic world), art museum security guard, and hot dog vendor. Tepe 's only appearance on the show during the second season is in the episode "Wizards vs. Vampires: Dream Date '', as the hot dog vendor. Her name in "New Employee '' is Amanda, according to her name tag, but in "Art Museum Piece '' her name is Elaine, according to Blue Boy. She does not appear in the third and fourth seasons. Some characters appear only in Wizards of Waverly Place: The Movie, and others only in The Wizards Return: Alex vs. Alex.
which are the biggest states in the united states
List of U.S. states and territories by area - wikipedia This is a complete list of the states of the United States and its major territories ordered by total area, land area, and water area. The water area numbers include inland waters, coastal waters, the Great Lakes, and territorial waters. Glaciers and intermittent bodies of water are counted as land area. All divisions and regions presented below are as configured by the United States Census Bureau. The tables rank U.S. states by total area, by land area, by water area, and by water percentage. Alaska is the largest state by total area, land area, and water area. It is the 19th largest country subdivision in the world. The area of Alaska is 18 % of the area of the United States and 21 % of the area of the contiguous United States. The second largest state, Texas, is only 40 % of the total area of the largest state, Alaska. Rhode Island is the smallest state by total area and land area. San Bernardino County is the largest county in the contiguous U.S. and is larger than each of the nine smallest states; it is larger than the four smallest states combined. Michigan is second (after Alaska) in water area, and first in water percentage. Florida is mostly a peninsula, and has the third largest water area and seventh largest water percentage.
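The 18 %, 21 % and 40 % figures above are simple area ratios. Below is a minimal sketch of the arithmetic in Python, using approximate total - area figures in square miles that are assumed here purely for illustration (Alaska 665,384; Texas 268,596; United States 3,796,742; contiguous United States 3,119,885); the exact percentages depend on which Census Bureau figures are used, but these assumed values round to the quoted numbers.

    # Approximate total areas in square miles (assumed figures, for illustration only).
    areas = {
        "Alaska": 665_384,
        "Texas": 268_596,
        "United States": 3_796_742,
        "Contiguous U.S.": 3_119_885,
    }

    def pct(part, whole):
        # Express part / whole as a percentage rounded to the nearest integer.
        return round(100 * part / whole)

    print(pct(areas["Alaska"], areas["United States"]))    # 18 -> Alaska is ~18 % of the U.S.
    print(pct(areas["Alaska"], areas["Contiguous U.S."]))  # 21 -> ~21 % of the contiguous U.S.
    print(pct(areas["Texas"], areas["Alaska"]))            # 40 -> Texas is ~40 % of Alaska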
where did the saying fight fire with fire come from
Shouting fire in a crowded theater - wikipedia "Shouting fire in a crowded theater '' is a popular metaphor for speech or actions made for the principal purpose of creating unnecessary panic. The phrase is a paraphrasing of Oliver Wendell Holmes, Jr. 's opinion in the United States Supreme Court case Schenck v. United States in 1919, which held that the defendant 's speech in opposition to the draft during World War I was not protected free speech under the First Amendment of the United States Constitution. The paraphrasing does not generally include (but does usually imply) the word falsely, i.e., "falsely shouting fire in a crowded theater '', which was the original wording used in Holmes 's opinion and highlights that speech that is dangerous and false is not protected, as opposed to speech that is dangerous but also true. Holmes, writing for a unanimous Court, ruled that it was a violation of the Espionage Act of 1917 (amended by the Sedition Act of 1918) to distribute flyers opposing the draft during World War I. Holmes argued this abridgment of free speech was permissible because it presented a "clear and present danger '' to the government 's recruitment efforts for the war. Holmes wrote: The most stringent protection of free speech would not protect a man falsely shouting fire in a theater and causing a panic. (...) The question in every case is whether the words used are used in such circumstances and are of such a nature as to create a clear and present danger that they will bring about the substantive evils that Congress has a right to prevent. The First Amendment holding in Schenck was later partially overturned by Brandenburg v. Ohio in 1969, which limited the scope of banned speech to that which would be directed to and likely to incite imminent lawless action (e.g. a riot). The test in Brandenburg is the current Supreme Court jurisprudence on the ability of government to proscribe speech after the fact. Despite Schenck being limited, the phrase "shouting fire in a crowded theater '' has since come to be known as synonymous with an action that the speaker believes goes beyond the rights guaranteed by free speech, reckless or malicious speech, or an action whose outcomes are obvious. People have indeed falsely shouted "Fire! '' in crowded public venues and caused panics on numerous occasions, such as at the Royal Surrey Gardens Music Hall of London in 1856, a theater in New York 's Harlem neighborhood in 1884, and in the Italian Hall disaster of 1913, which left 73 dead. In the Shiloh Baptist Church disaster of 1902, over 100 people died when "fight '' was misheard as "fire '' in a crowded church, causing a panic and stampede. In contrast, in the Brooklyn Theatre fire, the actors initially falsely claimed that the fire was part of the performance in an attempt to avoid a panic. Instead, their actions only delayed the evacuation and made the resulting panic far more severe. Finan writes that Justice Holmes began to doubt his decision due to criticism received from free - speech activists. He also met the legal scholar Zechariah Chafee and discussed his Harvard Law Review article "Freedom of Speech in War Times ''. According to Finan, Holmes 's change of heart influenced his decision to join the minority and dissent in the Abrams v. United States case. Abrams was deported for issuing flyers saying the US should not intervene in the Russian Revolution. Holmes and Brandeis said that "a silly leaflet by an unknown man '' should not be considered illegal.
Chafee argued in Free Speech in the United States that a better analogy in Schenck might be a man who stands in a theatre and warns the audience that there are not enough fire exits. In his introductory remarks to a 2006 debate in defense of free speech, writer Christopher Hitchens parodied the Holmes judgement by opening "Fire! Fire, fire... fire. Now you 've heard it '', before condemning the famous analogy as "the fatuous verdict of the greatly over-praised Justice Oliver Wendell Holmes. '' Hitchens argued that the socialists imprisoned by the court 's decision "were the ones shouting fire when there really was a fire in a very crowded theatre indeed... (W) ho 's going to decide? ''
when did the original lost in space first air
Lost in Space - wikipedia Lost in Space is an American science fiction television series created and produced by Irwin Allen. The series follows the adventures of a pioneering family of space colonists who struggle to survive in a strange and often hostile universe after their ship is sabotaged and thrown off course. The show ran for three seasons, with 83 episodes airing between 1965 and 1968. The first season was filmed in black and white, with the second and third seasons filmed in color. Although the original concept (depicted in the pilot episode "No Place to Hide '', not aired until 1997) centered on the Robinson family, many later storylines focused primarily on Dr. Zachary Smith, played by Jonathan Harris. Smith and Robot B - 9 were both absent from the unaired pilot, as the addition of their characters was only decided upon once the series had been commissioned for production. Originally written as an utterly evil (if careless) saboteur, Smith gradually became the troublesome, self - centered, incompetent character who provided the comic relief for the show and caused much of the conflict and misadventures. In the unaired pilot, what caused the group to become lost in space was a chance encounter with a meteor storm, but in the first aired episode it was Smith 's unplanned presence on the ship that sent it off course into the meteor field, and his sabotage that caused the Robot to accelerate the ship into hyperdrive. Smith is thus the key to the story. On October 16, 1997, 32 years in the future from the perspective of viewers in 1965, the United States is about to launch one of history 's great adventures: man 's colonization of space. The Jupiter 2 (called the Gemini 12 in the unaired pilot episode), a futuristic saucer - shaped spacecraft, stands on its launch pad undergoing final preparations. Its mission is to take a single family on a five - and - a-half - year journey (altered from 98 years in the unaired pilot) to a planet orbiting the nearest star, Alpha Centauri (the pilot had referred to the planet itself as Alpha Centauri, but this error was corrected for the series), which space probes have revealed possesses ideal conditions for human life. All of this is presented as a news report of a real space expedition, with news commentators informing viewers of the mission 's backstory. The Robinson family, supposedly selected from two million volunteers for this mission, consists of Professor John Robinson, played by Guy Williams, his wife, Maureen, played by June Lockhart, and their children, Judy (Marta Kristen), Penny (Angela Cartwright), and Will (Billy Mumy). They are accompanied by their pilot, U.S. Space Corps Major Donald West (Mark Goddard), who is trained to fly the ship when the time comes for the eventual landing. The Robinsons and Major West are to spend the voyage in freezing tubes, which open as the spacecraft approaches its destination. Unless there is a problem with the ship 's navigation or guidance system during the voyage, Major West is only to take the controls during the final approach and landing on the destination planet, while the Robinsons strap themselves into contour couches on the lower deck for the landing. Other nations are racing to colonize space, and they will stop at nothing, including sabotage, to thwart the United States 's effort. It turns out that Dr.
Zachary Smith (Jonathan Harris), Alpha Control 's doctor, and later supposedly a psychiatrist and environmental control expert, is moonlighting as a secret agent for one of those competing nations. After disposing of a guard who catches him on board the spacecraft, Smith reprograms the Jupiter 2 's B - 9 environmental control robot (voiced by Dick Tufeld) to destroy critical systems on the spaceship eight hours after launch. Smith, however, unintentionally becomes trapped aboard at launch, and his extra weight throws the Jupiter 2 off course, causing it to encounter a meteor storm. This, plus the robot 's rampage, which causes the ship to prematurely engage its hyperdrive, leaves the expedition hopelessly lost in the infinite depths of outer space. Smith 's selfish actions and laziness frequently endanger the expedition. After the first half of the first season, Smith 's role assumes less sinister overtones, although he continues to display many character flaws. In "The Time Merchant '', Smith shows he actually does care about the Robinsons when he travels back in time to the day of the Jupiter 2 launch, hoping to change his own fate by not boarding the ship and thus allow the Robinsons to start the mission as originally planned. However, he learns that without his weight altering the ship 's course, the Jupiter 2 will be destroyed by an uncharted asteroid. So he sacrifices his chance to stay on his beloved Earth, electing to re-board the ship, thus saving the lives of the family and continuing his role amongst them as the reluctant stowaway. The fate of the Robinsons, Major Don West, and Dr. Smith is never resolved in the series, as the series 's unexpected cancellation left the Jupiter 2 and her crew on the junk - pile at the end of season 3. The astronaut family of Dr. John Robinson, accompanied by an Air Force / Space Corps pilot and a robot, set out in the year 1997 from an overpopulated Earth in the spaceship Jupiter 2 to travel to a planet circling the star Alpha Centauri, with hopes of colonizing it. The Jupiter 2 mission is sabotaged by Dr. Zachary Smith -- an agent for an unnamed foreign government -- who slips aboard the spaceship and reprograms the robot to destroy the ship and crew. However, when he is trapped aboard, his excess weight alters the craft 's flight path and places it directly in the path of a massive meteor storm. Smith manages to save himself by prematurely reviving the crew from suspended animation. The ship survives, but consequent damage caused by Smith 's earlier sabotage of the robot leaves them lost in space. In the third episode the Jupiter 2 crash - lands on an alien world, later identified by Will as Priplanus, where the travelers spend the rest of the season and survive a host of adventures. Smith, whom Allen had originally intended to write out, remains through all three seasons as a source of comedic cowardice and villainy, exploiting the eternally forgiving nature of Professor Robinson. Smith was liked by the trusting Will and tolerated by the women, but he was disliked by both the Robot and the suspicious Major Don West. At the start of the second season the repaired Jupiter 2 launches into space once more, to escape the destruction of Priplanus following a series of cataclysmic earthquakes, but in the fourth episode the Robinsons crash - land on a strange new world, becoming planet - bound again for another season.
This replicated the format of the first season, but the focus of the series was now more on humor than on action / adventure, as evidenced by the extreme silliness of Dr. Smith amidst a plethora of unlikely aliens who began appearing on the show, often of a whimsical, fantasy - oriented nature. One of these colorful visitors even turned out to be Smith 's own cousin, intent on swindling him out of a family inheritance with the assistance of a hostile gambling machine. A new theme tune was recorded by Warren Barker for the second season, but it was decided to retain the original. In the third season, a major format change was introduced to bring the series back to its roots as solid adventure, by allowing the Robinsons to travel to more planets. The Jupiter 2 was now allowed to freely travel space, visiting a new world each week, as the family attempt to return to Earth or to reach their original destination in the Alpha Centauri system. A newly built "Space Pod '', which mysteriously appeared as though it had always been there, provided a means of transportation between the ship and passing planets, as well as being a plot device to launch various escapades. This season had a dramatically different opening credits sequence and a new theme tune -- which, like the original, was composed by John Williams -- as part of the show 's new direction. During its three - season run, many actors made guest appearances, including familiar faces and actors who went on to become well known. Among those appearing in Lost in Space episodes were Joe E. Tata, Kevin Hagen, Alan Hewitt, Warren Oates, Don Matheson, Kurt Russell, Ford Rainey, Wally Cox, Grant Sullivan, Norman Leavitt, Tommy Farrell, Mercedes McCambridge, Lyle Waggoner, Albert Salmi, Royal Dano, Strother Martin, Michael J. Pollard, Byron Morrow, Arte Johnson, Fritz Feld, John Carradine, Al Lewis, Hans Conried, Dennis Patrick, and Michael Rennie, among many others. Future Hill Street Blues stars Daniel J. Travanti (billed as "Danny Travanty '') and Michael Conrad made guest appearances on separate episodes. While Mark Goddard was playing Major West, he had a guest appearance as well. Jonathan Harris, although a permanent cast member, was listed in the opening credits as "Special Guest Star '' in every episode of Lost in Space. In 1962, the first appearance of a space - faring Robinson family occurred in a comic book published by Gold Key Comics. The Space Family Robinson, scientists aboard Earth 's "Space Station One '', are swept away in a cosmic storm in the comic 's second issue. These Robinsons were scientist father Craig, scientist mother June, early teens Tim (son) and Tam (daughter), along with pets Clancy (dog) and Yakker (parrot). Space Station One also boasted two spacemobiles for ship - to - planet travel. The television show launched three years later, in 1965, and during its run CBS and 20th Century Fox reached an agreement with Gold Key Comics that allowed the use of the name "Robinson '' on the TV show; in return, the comic was allowed to append "-- Lost In Space '' to its title, with the potential for the TV show to propel additional sales of the comic. Following the format of Allen 's first television series, Voyage to the Bottom of the Sea, unlikely fantasy - oriented adventure stories were dreamed up that had little to do with either serious science or serious science fiction.
The stories had little realism; explosions, for instance, happened randomly, merely to cover an alien 's arrival or departure, or sometimes the arrival of some alien artifact. Props and monsters were regularly recycled from other Irwin Allen shows as a matter of budgetary convenience, and the same alien would appear on Voyage one week and Lost in Space the next. A sea monster outfit that had featured on Voyage would get a spray paint job for its Lost in Space appearance, while space monster costumes would be reused on Voyage as sea monsters. The clear round plastic pen holder used as a control surface in the episode "The Derelict '' turned up regularly throughout the show 's entire run. Spacecraft models, too, were routinely re-used. The foreboding derelict ship from season 1 was redressed to become the Vera Castle in season 3, which, in turn, was reused in several episodes (and flipped upside down for one of them). The Fuel Barge from season 2 became a Space Lighthouse in season 3, with a complete re-use of the effects footage from the earlier story. The derelict ship was used again in season 3, with a simple color change. Likewise the alien pursuer 's ship in "The Sky Pirate '', itself an Earth ship lifted from the 1958 film War of the Satellites, was re-used in the episode "Deadliest of the Species ''. Moreover, the footage of Hapgood 's ship launching into space in episode 6 of season 1 was re-used for virtually every subsequent launch in the following three years, regardless of the shape the ship it supposedly represented had had on the ground. By the end of the first season, the character of Smith is permanently established as a bungling, self - serving, greedy, manipulative coward. These character traits are magnified in subsequent seasons. His haughty bearing and ever - present alliterative repartee were staples of the character. While he and Major West repeatedly clashed over his goldbricking, or because of some villainy he had perpetrated, the Robot was usually the preferred victim of his barbed and acerbic wit. Despite Harris being credited as a "Special Guest Star '' in every episode, Smith became the pivotal character of the series. Harris was the last actor cast, the others all having appeared in the unaired pilot. He was informed that he would "have to be in last position '' in the credits. Harris voiced discomfort at this, and (with his continuation beyond the first few episodes still in doubt) suggested appearing in the last position as "Special Guest Star ''. After Harris had "screamed and howled '', Allen agreed. The show 's writers expected that Smith would indeed be a temporary villain who would only appear in the early episodes. Harris, on the other hand, hoped to stay longer on the show, but he found his character as written very boring, and feared it would quickly bore the audience too. Harris "began rewriting his lines and redefining his character '' by playing Smith in an attention - getting, flamboyant style and ad - libbing his scenes with colorful, pompous dialogue. Allen quickly noticed this, and liked it. As Harris himself recalled, Allen said, "I know what you 're doing. Do more of it! '' Mumy recalls how, after he had learned his own lines, Harris would ask to rehearse with him using his own dialogue. "He truly, truly single - handedly created the character of Dr. Zachary Smith that we know, '' said Mumy. "This man we love - to - hate, a sniveling coward who would cower behind the little boy, ' Oh, the pain! Save me, William!
' That 's all him! '' Lost in Space is remembered, in part, for the Robot 's oft - repeated lines such as "Warning! Warning! '' and "It does not compute ''. Smith 's frequent put - downs of the Robot were also popular, and Jonathan Harris was proud to talk about how he used to lie in bed at night dreaming them up for use on the show. "You Bubble - headed Booby! '', "Cackling Cacophony '', "Tin Plated Traitor '', "Blithering Blatherskyte '', and "Traitorous Transistorized Toad '' are but a few, alongside his trademark lines "Oh, the pain... the pain! '' and "Never fear, Smith is here! '' One of Jonathan Harris 's last roles was providing the voice of the illusionist praying mantis "Manny '' in Disney 's A Bug 's Life, in which Harris used "Oh, the pain... the pain! '' near the end of the film. The catchphrase "Danger, Will Robinson! '' originates with the series, but was only ever used once in it, during season 3, episode 11, "The Deadliest of the Species '', when the Robot warns young Will Robinson about an impending threat. It was also used as the slogan of the 1998 movie, whose official website had the address www.dangerwillrobinson.com. In 1962, Gold Key Comics (formerly Dell Comics), a division of Western Publishing Company, began publishing a series of comic books under the title Space Family Robinson. The story was largely inspired by The Swiss Family Robinson, but with a space - age twist. The movie and television rights to the comic book were then purchased by noted television writer Hilda Bohem (The Cisco Kid), who created a treatment under the title Space Family 3000. In July 1964, science fiction writer and filmmaker Ib Melchior began pitching a treatment for a feature film, also under the title Space Family Robinson. There is debate as to whether or not Allen was aware of the Melchior treatment. It is also unknown whether Allen was aware of the comic book or the Hilda Bohem treatment. As copyright law only protects the actual expression of a work, and not titles, general ideas or concepts, in 1964 Allen moved forward with his own take on Space Family Robinson, with characters and situations notably different from either the Bohem or the Melchior treatments (none of the three treatments contained the characters of Smith or the Robot). Allen intended it as a follow - up to his first successful television venture, Voyage to the Bottom of the Sea, and quickly sold the concept to CBS. Concerned about confusion with the Gold Key comic book, CBS requested that Allen come up with a new title. Nevertheless, Hilda Bohem filed a claim against Allen and CBS Television shortly before the series premiered in 1965. A compromise was struck as part of a legal settlement: in addition to an undisclosed sum of money, Western Publishing would be allowed to change the name of its comic book to Lost in Space. There were no other legal challenges to the title until 1995, when New Line Cinema announced their intention to turn Lost in Space into a big budget motion picture. New Line had purchased the screen rights from Prelude Pictures (which had acquired them from the Irwin Allen Estate in 1993). At that time, Melchior contacted Prelude Pictures and insisted that Lost in Space was directly based upon his 1964 treatment. Melchior was aided in his efforts by Ed Shifres, a fan who had written a book entitled Space Family Robinson: The True Story (later reprinted with the title Lost in Space: The True Story).
The book attempts to show how Allen allegedly plagiarized Melchior 's concept, with the two outlines presented side - by - side. To satisfy Melchior, Prelude Pictures hired the 78 - year - old filmmaker as a consultant on their feature film adaptation. This accommodation was made without the knowledge or consent of the Irwin Allen Estate or Space Productions, the original copyright holder of Lost in Space. Melchior 's contract with Prelude also guaranteed him 2 % of the producer 's gross receipts, a provision that was later the subject of a suit between Melchior and Mark Koch of Prelude Pictures. Although an appellate court ruled partly in Melchior 's favor, on November 17, 2004, the Supreme Court of California denied a petition by Melchior to further review the case. No further claim was made, and Space Productions now contends that Allen was the sole creator of the television series Lost in Space. Melchior died on March 14, 2015 at the age of 97. In 1965 Allen filmed a 50 - minute black - and - white series pilot for Fox, usually known as "No Place to Hide '' (although this title does not actually appear). After CBS accepted the series, the characters of Smith and the Robot were added, and the spaceship, originally named the Gemini 12, was redesigned -- a second deck and interior equipment were added, and some consoles were altered slightly -- and rechristened the Jupiter 2. For budget considerations, a good part of the footage included in the pilot episode was reused, carefully worked into the early series episodes. CBS was also offered Star Trek at around the same time, but turned it down in favor of Lost in Space. In an ironic twist of fate, in 2006 CBS gained the television rights to the Star Trek franchise. The Lost in Space television series was originally named Space Family Robinson. Allen was apparently unaware of the Gold Key comic of the same name and similar theme. Both were space versions of the novel Swiss Family Robinson with the title changed accordingly. Gold Key Comics did not sue Allen 's production company or 20th Century Fox for copyright infringement, as Allen was expected to license the rights for comic book adaptations of his TV properties (as he already had with Voyage to the Bottom of the Sea); instead it changed the title of the comic to Lost in Space to take advantage of the series ' prominence. The first season emphasized adventure, chronicling the day - to - day experiences a pioneer family might have if marooned on an alien world. The first half of season 1 included dealing with the planet 's unusual orbit, resulting in extremes of cold and heat, the Robinson party trekking around the rocky terrain and stormy inland oceans of Priplanus in the Chariot to avoid those extremes, encounters with dangerous native plants and animals, and a few reasonably plausible off - world visitors. However, midway through the first season, following the two - parter "The Keeper '', the format changed to a "monster of the week '' style, and the stories slipped, almost imperceptibly, from straight adventure to stories based on fantasy and fairy tales, with Will even awakening a sleeping princess with a kiss in episode 27 of season 1, "Lost Civilization ''. Excepting special effects shots (which were reused in later seasons), the first season was filmed in black and white, while the second and third seasons were filmed in color. Beginning in January 1966, ABC scheduled Batman in the same timeslot.
Lost in Space 's second season imitated Batman 's campy style of humor in an attempt to compete with that show 's enormous success. Bright outfits, over-the - top action, and outrageous villains (space cowboys, pirates, knights, Vikings, wooden dragons, Amazon feministas and even bespectacled beagles) came to the fore in outlandish stories. The Robinsons were cloned by a colorful critter in one interestingly plodding story, and in another tale were attacked by a civilization of "Little Mechanical Men '' resembling their Robot, who wanted him for their leader. To make matters worse, stories giving all the characters focus were sacrificed in favor of a growing emphasis on Smith, Will, and the Robot -- and Smith 's change in character was not appreciated by the other actors. According to Billy Mumy, Mark Goddard and Guy Williams both disliked the shift away from serious science fiction. The third season had more straight adventure, with the Jupiter 2 now functional and hopping from planet to planet, but the episodes still tended to be whimsical and to emphasize humor, featuring fanciful space hippies, more pirates, off - beat intergalactic zoos, ice princesses and Lost in Space 's beauty pageant. One of the last episodes, "The Great Vegetable Rebellion '', with actor Stanley Adams as Tybo the talking carrot, took the show into pure fantasy. (The episode has been called "the most insipid and bizarre episode in television history ''; Kristen recalls Goddard complaining that "seven years of Stanislavski '' method acting had culminated in his talking to a carrot.) A writer Allen had employed to script a planned crossover between Lost in Space and Land of the Giants noted that Allen seemed scared of the vegetable story, for it got pushed further and further to the back of the schedule as the filming of season 3 progressed. The episode 's writer, Peter Packer, one of the most prolific writers for Lost in Space who had penned some of the series ' best episodes, apologized to Harris for the script, stating he had n't "one more damned idea in my head ''. During filming, Guy Williams and June Lockhart were written out of the next two episodes, on full pay, for laughing so much during the production. Astute viewers will notice the smirks as the actors, notably Mark Goddard, tried to contain themselves during the filming of the story. "The Great Vegetable Rebellion '' appears to have directly inspired the "Rules of Luton '' episode of Space: 1999 season 2. In 1977 TV Guide listed "The Great Vegetable Rebellion '' at position # 76 in its list of the ' 100 Best Episodes of All Time '. During the first two seasons, episodes concluded in a "live action freeze '' anticipating the following week, with the cliff - hanger caption, "To be continued next week! Same time -- same channel! '' For the third season, each episode would conclude with a vocal "teaser '' from the Robot (Dick Tufeld), urging viewers to "Stay tuned for scenes from next week 's exciting adventure! '', which would highlight the next episode, followed by the closing credits. After cancellation, the show was successful in reruns and in syndication for many years, appearing on the USA Network in the mid-to - late 1980s and more recently on FX, Syfy, and ALN. It is currently available on Hulu streaming video, and is seen Saturday nights on MeTV. There are fan clubs in honor of the series and cast all around the world, and many Facebook group pages connecting fans of the show to each other and to the show 's stars.
There is even a weekly podcast dedicated to profiling every episode of the series (initiated over fifty years after the series first aired). There was little ongoing plot continuity between episodes, except in larger goals: for example, getting enough fuel to leave the planet. However, there were some arcs in the show that allowed a small amount of continuity. The first half of the first season had an overall arc, as the Robinsons slowly got used to Smith and the Robot began his journey to being a thinking and self - aware character. There were also four sets of arc stories and two arcs of linked stories, as well as sequels to some episodes ("Return from Outer Space '' follows from events in "The Sky is Falling ''), and some recurring characters, including Space Pirate Tucker, the android Verda, the Green Dimension girl Athena, and Farnum B (each of whom appears in two episodes), and Zumdish (who appears in three episodes). In an odd moment of referential continuity, "Two Weeks in Space '' re-used the "Space Music '' angle first shown in "Kidnapped in Space '', while the "Celestial Department Store Ordering Machine '' appeared in two episodes. In early 1968, while the final third - season episode "Junkyard in Space '' was in production, the cast and crew were informally led to believe the series would return for a fourth season. Allen had ordered new scripts for the coming season. A few weeks later, however, CBS announced a list of television series they were renewing for the 1968 -- 69 season, and Lost in Space was not included. Although CBS programming executives failed to offer any reasons why Lost in Space was cancelled, there are at least five suggested reasons offered by series executives, critics and fans, any one of which could be considered sufficient justification for cancellation, given the state of the broadcast network television industry at the time. As there was no official final episode, the exploring pioneers never made it to Alpha Centauri nor found their way back to Earth. The show had sufficient ratings to support a fourth season, but it was expensive. The budget per episode for Season One was $130,980, and for Season Three, $164,788. In that time, the actors ' salaries increased; in the case of Harris, Kristen and Cartwright, their salaries nearly doubled. Part of the cost problem may have been the actors themselves, with director Richardson complaining of Williams ' demands for closeups of himself. The interior of the Jupiter 2 was the most expensive set for a television show at the time, at about $350,000 (more than the set of the USS Enterprise a couple of years later). According to Mumy and other sources, the show was initially picked up for a fourth season, but with a cut budget. Reportedly, 20th Century Fox was still recovering from the legendary budget overruns of Cleopatra, and thus slashed budgets across the board in its film and television productions. Allen claimed the series could not continue with a reduced budget. During a negotiating conference with CBS chief executive Bill Paley regarding the series ' direction for the fourth season, Allen stormed out of the meeting when told that the budget was being cut by 15 % from Season Three, thereby sealing the show 's cancellation. Robert Hamner, one of the show 's writers, states (in Starlog # 220, November 1995) that Paley despised the show so much that the budget dispute was used as an excuse to terminate the series.
Years later, Paley stated this was incorrect and that he was a fan of "the Robot". The Lost in Space Forever DVD cites declining ratings and escalating costs as the reasons for cancellation. Even Irwin Allen admitted that the Season 3 ratings showed an increasing percentage of children among the total viewers, meaning a drop in the "quality audience" that advertisers preferred. A contributing factor, at least, was that June Lockhart and director Don Richardson were no longer excited about the show. On being told about the cancellation by Perry Lafferty, the head of CBS programming, Lockhart said, "I think that's for the best at this point," although she went on to say that she would have stayed if there had been a fourth season. Lockhart immediately joined the cast of CBS' Petticoat Junction upon Lost in Space's cancellation. Richardson had been tipped off that the show was likely to be cancelled, was looking for another series, and had decided not to return to Lost in Space even if it continued. Guy Williams grew embittered with his role on the show as it became increasingly "campy" in Seasons 2 and 3 while centering squarely on the antics of Harris' Dr. Smith character. Whether Williams would have returned for a fourth season was never revealed, but he never acted again after the series, choosing instead to retire to Argentina. While Lost in Space was still reasonably successful, the show was unexpectedly cancelled in 1968 after 83 episodes. The cast members were informed, somewhat rudely, by a brief item in Variety magazine. Such abrupt cancellations would become a habit of CBS, with the same sudden endings for Gilligan's Island and The Incredible Hulk. The final prime-time episode to be broadcast over CBS was a cast and crew favorite, "A Visit to Hades", a repeat from the second season, on September 11, 1968. In the 1990s, Kevin Burns, a long-time fan of Irwin Allen's works, produced two documentaries. In 1995, Burns produced a documentary showcasing the career of Irwin Allen, hosted by Bill Mumy and June Lockhart in a recreation of the Jupiter 2 exterior set and utilizing the "Celestial Department Store Ordering Machine" as a temporal conduit to show information and clips on Allen's history. Clips from Allen's various productions, as well as pilots for his unproduced series, were presented along with new interviews with cast members of Allen's shows. Mumy and Lockhart complete their presentation and enter the Jupiter 2, following which Jonathan Harris appears in character as Smith and instructs the Robot once again to destroy the ship as per his original instructions: "... and this time get it right, you bubble-headed booby". In 1998, Burns produced a television special about the series hosted by John Larroquette and Robot B-9 (performed by actor Bob May and voiced by Dick Tufeld). The special was hosted within a recreation of the Jupiter 2 upper deck set. The program ends with Larroquette mockingly pressing a button on the amulet from "The Galaxy Gift" episode, disappearing, and being replaced by Mumy and Harris as an older Will Robinson and Zachary Smith. They attempt one more time to return to Earth but find that they are "Lost in Space... Forever!" The crew had a variety of methods of transportation over the course of the series. Their spaceship is the two-deck, nuclear-powered Jupiter 2 flying saucer spacecraft. (In the unaired pilot episode, the ship was named the Gemini 12 and consisted of a single deck.)
The version seen in the series was modified with a lower level and landing legs, among other changes, but footage of the Gemini 12 from the unaired pilot was still reused in early episodes. The Gemini 12 was designed by William Creber, and Robert Kinoshita redesigned it for the series as the Jupiter 2. On the lower level were the atomic motors (which used "deutronium" for fuel), the living quarters (featuring Murphy beds), galley, laboratory, and the Robot's "magnetic lock". On the upper level were the guidance control system and the suspended-animation "freezing tubes" necessary for non-relativistic interstellar travel. The two levels were connected by both an electronic glide-tube elevator and a fixed ladder. The Jupiter 2 explicitly had artificial gravity. Entrances and exits to the ship were via the main airlock on the upper level, via the landing struts from the lower deck, and, according to one season 2 episode, via a back door. The spacecraft was also intended to serve as home to the Robinsons once it had landed on the destination planet orbiting Alpha Centauri. In one third-season episode, "Space Creature", the power core of the Jupiter 2 is shown as absolutely enormous; however, this may have been due to the plot of the episode, involving the travelers' fears being amplified to supply nourishment for an alien that fed on fear. Even the lower deck could not fit into the Jupiter 2 as seen from the outside -- the Jupiter 2's levels were actually located on separate soundstages, but made to seem connected on the series, masking the scale discrepancies. Shown only in season 1, episode 3, "Island in the Sky", "para-jet" thrusters, a pair of units worn on the forearms, allowed a user to descend to a planet from the orbiting spaceship and, perhaps, back again to the ship. The "Chariot" was an all-terrain, amphibious tracked vehicle that the crew used for ground transport when they were on a planet. As stated in episode 3 of season 1, the Chariot existed in a disassembled state during flight, to be reassembled once on the ground. The Chariot was actually an operational, cannibalized version of the Thiokol Snowcat Spryte, with a Ford 170-cubic-inch (3 L) inline-6, 101-horsepower engine and a 4-speed automatic transmission including reverse. Test footage of the Chariot filmed for the first season of the series can be seen on YouTube. Most of the Chariot's body panels were clear -- including the roof and its dome-shaped "gun hatch". Both a roof rack for luggage and roof-mounted "solar batteries" were accessible by exterior fixed ladders on either side of the vehicle. It had dual headlights and dual auxiliary area lights beneath the front and rear bumpers. The roof also had swivel-mounted, interior-controllable spotlights located near each front corner, with a small parabolic antenna mounted between them. The Chariot had six bucket seats (three rows of two) for passengers. The interior featured retractable metallised-fabric curtains for privacy, a seismograph, a scanner with infrared capability, a radio transceiver, a public address system, and a rifle rack that held four laser rifles vertically near the inside of the left rear corner body panel ("Island in the Sky"). The jet pack (specifically, a Bell Rocket Belt), then a new and exciting invention, was used occasionally by Professor Robinson or Major West. Finally, the "Space Pod" was a small mini-spacecraft, first shown in the third and final season, modeled on the Apollo Lunar Module.
The Pod was used to travel from its bay in the Jupiter 2 to destinations either on a nearby planet or in space, and it apparently had artificial gravity and an auto-return mechanism. For self-defense, the crew of the Jupiter 2 -- including Will on occasion -- had an arsenal of laser guns at their disposal, both sling-carried rifles and holstered pistols. (The first season's personal-issue laser gun was a film prop modified from a toy semi-automatic pistol made by Remco; the laser itself had been invented only five years earlier.) The crew also employed a force field around the Jupiter 2 for protection while on alien planets. The force-shield generator was normally used to protect the campsite, but in one season 3 episode it was able to shield the entire planet. For communication, the crew used small transceivers to communicate with each other, the Chariot, and the ship. In "The Raft", Will improvised several miniature rockoons in an attempt to send an interstellar "message in a bottle" distress signal. In season 2, a set of relay stations was built to further extend communications while planet-bound. Their environmental-control Robot B-9 ran air and soil tests, was extremely strong, could discharge powerful electrostatic charges from his claws, could detect threats with his scanner, and could produce a defensive smoke screen. The Robot could detect faint smells (in "One of Our Dogs is Missing") and could both understand speech and speak (including alien languages). The Robot claimed the ability to read human minds by translating emitted thought waves back into words ("Invaders From The Fifth Dimension", S01E08). The Jupiter 2 had some unexplained advanced technology that simplified or did away with mundane tasks. The "auto-matic laundry" took seconds to clean, iron, fold, and package clothes in clear plastic bags. Similarly, the "dishwasher" would clean, wash, and dry dishes in just seconds. Some technology reflected recent real-world developments. Silver reflective space blankets, then a new invention developed by NASA in 1964, were used in "The Hungry Sea" (air date: October 13, 1965) and "Attack of the Monster Plants" (air date: December 15, 1965). The crew's spacesuits were made with aluminum-coated fabric, like NASA's Mercury spacesuits, and had Velcro fasteners, which NASA first used during the Apollo program (1961-1972). While the crew normally grew a hydroponic garden on a planet as an intermediate step before cultivating its soil, they also had "protein pills" (a complete nutritional substitute for whole foods) in cases of emergency ("The Hungry Sea" (air date: October 13, 1965) and "The Space Trader" (air date: March 9, 1966)). Although the series retains a following, the science-fiction community often points to Lost in Space as an example of early television's perceived poor record at producing science fiction. The series' deliberate fantasy elements, a trademark of Irwin Allen productions, were perhaps overlooked as it drew comparisons to its supposed rival, Star Trek. However, Lost in Space was a mild ratings success, unlike Star Trek, which received very poor ratings during its original network television run. The more cerebral Star Trek never averaged higher than 52nd in the ratings during its three seasons, while Lost in Space finished season one with a rating of 32nd, season two in 35th place, and the third and final season in 33rd place.
Lost in Space also ranked third among the top five favorite new shows for the 1965-1966 season in a viewer TVQ poll (the others were The Big Valley, Get Smart, I Dream of Jeannie and F Troop). Lost in Space was the favorite show of John F. Kennedy, Jr. as he grew up in the 1960s. Star Trek creator Gene Roddenberry insisted that the two shows could not be compared; he saw himself as more of a philosopher, while understanding that Irwin Allen was a storyteller. When asked about Lost in Space, Roddenberry acknowledged: "That show accomplishes what it sets out to do. Star Trek is not the same thing." Lost in Space received a 1966 Emmy Award nomination for Cinematography - Special Photographic Effects and another in 1968 for Achievement in Visual Arts & Makeup, but won neither. In 2005, it was nominated for a Saturn Award for Best DVD Retro Television Release, but did not win. In 2008, TV Land nominated the series for its "Awesomest Robot" award, which it won. The opening and closing theme music was written by John Williams, the composer behind the Star Wars theme music, who was listed in the credits as "Johnny Williams". The original pilot and much of Season One reused Bernard Herrmann's eerie score from the classic sci-fi film The Day the Earth Stood Still (1951). For Season Three, the opening theme was revised (again by Williams) into a more exciting, faster-tempo score, accompanied by live-action shots of the cast and featuring a pumped-up countdown from seven to one to launch each week's episode. Seasons 1 and 2 had animated figures "life-roped" together, drifting "hopelessly lost in space" and set to a dizzy, comical score that Bill Mumy once described in a magazine interview as "sounding like a circuit board". Much of the incidental music in the series was written by Williams, who scored his four episodes with a movie-soundtrack quality that not only helped him gain credibility and a boost towards his later success, but also gave Lost in Space the distinctive musical style instantly recognizable to the show's fans. Other notable film and television composers also contributed, including Alexander Courage (composer of the Star Trek theme), who wrote six scores for the series. His most recognizable, for "Wild Adventure", included his key theme for the Lorelei, composed for organ, woodwinds, and harp -- cementing it, alongside Williams' own "Chariot" theme and the main theme, as part of the series' signature sound. The show's dramatic music served the serious episodes well, while in later episodes the same deadly serious tunes were used to underscore comical and laughable fantasy threats, highlighting their whimsy. There have been a number of Lost in Space soundtrack CDs released. Despite never reaching the 100 episodes desired for daily stripping in syndication, Lost in Space was picked up for syndication in most major U.S. markets. By 1969, the show was declared to be the #1 syndicated program (or close to it) in markets such as Houston, Milwaukee, Miami and even New York City, where it was said that the only competition to Lost in Space was I Love Lucy. The program did not have the staying power throughout the 1970s of its supposed rival, Star Trek. Part of the blame was placed on the first season of Lost in Space being in black and white at a time when a majority of American households had color television receivers. By 1975, many markets began removing Lost in Space from daily schedules or moving it to less desirable time slots.
The series experienced a revival when Ted Turner acquired it for his growing TBS "superstation" in 1979. Viewer response was highly positive, and it became a TBS mainstay for the next five years. In 1998, New Line Cinema produced a film adaptation. It included numerous nods, homages and cameos related to the series. Additional cameo appearances from the original cast were considered but did not make it into the film: Harris was offered a cameo appearance as the Global Sedition leader who hires, then betrays, Smith. He turned down the role, which eventually went to Edward Fox, and is even reported to have said, "I play Smith or I don't play." Harris appeared on an episode of Late Night with Conan O'Brien, mentioning that he was offered a role: "Yes, they offered me a part in the new movie -- six lines!" The film used a number of ideas familiar to viewers of the original show: Smith reprogramming the Robot and its subsequent rampage ("Reluctant Stowaway"), the near miss with the sun ("Wild Adventure"), the derelict spaceship ("The Derelict"), the discovery of the Blawp and the crash ("Island in the Sky"), and an attempt to change history by returning to the beginning ("The Time Merchant"). A scene-stealing "Goodnight" homage to The Waltons was also included. Something fans of the original had always wanted to see was finally realized when Don gives Smith a crack at the end of the movie: "That felt good!" In late 2003, a new television series with a somewhat changed format was in development in the U.S. It was originally intended to be closer to the original pilot, with no Smith but including a robot, an additional older Robinson child called David, and Penny as an infant. The pilot, titled "The Robinsons: Lost in Space", was commissioned by The WB Television Network. It was directed by John Woo and produced by Synthesis Entertainment, Irwin Allen Productions, Twentieth Century Fox Television and Regency Television. The Jupiter 2 interstellar flying-saucer spacecraft of the original series was changed to a non-saucer planet-landing craft, deployed from a larger interstellar mothership. In this adaptation, John Robinson was a retiring hero of a war against alien invaders who had decided to take his family to another colony elsewhere in space. However, the ship is attacked by the aliens, David is lost amidst it all, and the Robinsons, along with Don, are forced to escape in the mothership's small Jupiter 2 "Space Pod". The series presumably would have revolved around the family trying to recover David from the aliens. It was not among the network's series pick-ups confirmed later that year. Looking back at the pilot when the 2018 Netflix reboot aired, Neil Calloway of Flickering Myth said "you're hardly on the edge of your seat" and "You start to wonder where the $2 million went, and then you question why something directed by John Woo is so pedestrian." The producers of the new Battlestar Galactica bought the pilot's sets, which were redesigned the next year and used for scenes on the Battlestar Pegasus. Dick Tufeld reprised his role as the voice of the Robot for the third time. On October 10, 2014, it was announced that Legendary TV was developing a new reboot of Lost in Space for Netflix, with Dracula Untold screenwriters Matt Sazama and Burk Sharpless attached to write. On June 29, 2016, Netflix ordered the series with 10 episodes. The series premiered on Netflix on April 13, 2018.
The Robot also appears in the series in a modified form. Before the television series appeared, a comic book named Space Family Robinson was published by Gold Key Comics, written by Gaylord Du Bois and illustrated by Dan Spiegle. (Du Bois did not create the comic, but he became its sole writer once he began chronicling the Robinsons' adventures with "Peril on Planet Four" in issue #8, having already written the Captain Venture second feature beginning with "Situation Survival" in issue #6.) Due to a deal worked out with Gold Key, the title of the comic later incorporated the Lost in Space subtitle. The comic book featured different characters and a unique H-shaped spacecraft rather than one of a saucer shape. In 1991, Bill Mumy provided "Alpha Control Guidance" for a Lost in Space comic book revival from Innovation Comics, writing six of the issues. The first officially licensed comic to be based on the TV series (the previous Gold Key version was based on the 1962 concept), it was set several years after the show. The kids were now teenagers, with Will at 16 years old, and the stories attempted to return the series to its straight-adventure roots, with one story even explaining the camp/farce episodes of the show as fanciful entries in Penny's Space Diary. Complex, adult-themed story concepts were introduced, even including a love triangle developing between Penny, Judy and Don. Mark Goddard wrote a "Don" story, something he never had in the original series (with the possible exception of "The Flaming Planet", which might qualify as a "Don" episode). The Jupiter 2 had various interior designs in the first year, as the artists were evidently unfamiliar with the original layout of the ship, though Michal Dutkiewicz got it "spot-on". The first year had an arc ultimately leading the travelers to Alpha Centauri, with Smith contacting his former alien masters along the way. Aeolis 14 Umbra were furious with Smith for not having succeeded in his mission to prevent the Jupiter 2, built with technology from a crashed ship of their race, from reaching the star system they had claimed as their own. The year ended with Smith caught out for his traitorous associations and imprisoned in a freezing tube for the Jupiter 2's final journey to the Promised Planet. Year two was to be Mumy's own full-season story, a complex adventure following the Robinsons' arrival at their destination and capture by the Aoleans. Innovation folded in 1993 with the story only halfway through, and it wasn't until 2005 that Mumy was able to present his story to Lost in Space fandom as a complete graphic novel via Bubblehead Publishing. The theme of an adult Will Robinson was also explored in the film and in the song "Ballad of Will Robinson", written and recorded by Mumy. In 1998, Dark Horse Comics published a three-part story chronicling the Robinson clan as depicted in the film. In 2006, Bill Mumy and Peter David co-wrote Star Trek: The Return of the Worthy, a three-part story that was essentially a crossover between Lost in Space and Star Trek, with the Enterprise crew encountering a Robinson-like expedition amongst the stars, though with different characters. In 2016, American Gothic Press published a six-issue miniseries titled Irwin Allen's Lost in Space, the Lost Adventures, based on unfilmed scripts from the series. The scripts "The Curious Galactics" and "Malice in Wonderland" were written by Carey Wilber.
The first script was adapted as issues 1-3 of the series, with the adaptation written by Holly Interlandi and drawn by Kostas Pantaulas, and Patrick McEvoy providing coloring and covers. The second script was adapted as issues 4-6, again adapted by Interlandi, with McEvoy providing pencil art, coloring and covers. In 1967, a novel based on the series, with significant changes to the personalities of the characters and the design of the ship, was written by Dave Van Arnam and Ted White (as "Ron Archer") and published by Pyramid Books. A scene in the book correctly predicts Richard Nixon winning the presidency after Lyndon Johnson. In the 1972-1973 television season, ABC produced The ABC Saturday Superstar Movie, a weekly collection of 60-minute animated movies, pilots and specials from various production companies, such as Hanna-Barbera, Filmation, and Rankin-Bass. Hanna-Barbera Productions contributed animated work based on such television series as Gidget, Yogi Bear, Tabitha, Oliver Twist, Nanny and the Professor, The Banana Splits, and Lost in Space. Dr. Smith (voiced by Jonathan Harris) and the Robot (renamed Robon and employed in flight control rather than in a support role) were the only characters from the original program to appear in the special. The spacecraft was launched vertically by rocket, and Smith was a passenger rather than a saboteur. The pilot for the animated Lost in Space series was not picked up, and only this episode was produced. The cartoon was included in the Blu-ray release of September 15, 2015. 20th Century Fox has released the entire series on DVD in Region 1. Several of the releases contain bonus features, including interviews, episodic promos, video stills and the original unaired pilot episode. All episodes of Lost in Space were remastered and released in a Blu-ray disc set on September 15, 2015 (the 50th anniversary of the premiere on the CBS TV Network). The Blu-ray set includes a cast table reading of a final episode, written by Bill Mumy, which brings the series to a close by having the characters return to Earth.
where can coal be found in south africa
Coal in South Africa - Wikipedia South Africa produces in excess of 255 million tonnes of coal (2011 estimate) and consumes almost three quarters of that domestically. Around 77% of South Africa's energy needs are directly derived from coal, and 92% of coal consumed on the African continent is produced in South Africa. The use of coal in South Africa dates back to the Iron Age (300-1880 AD), when charcoal was used to smelt iron and copper, but large-scale exploitation of coal did not occur until the mid-19th century. The largest coal deposits in South Africa are found in the Ecca Group, a stratum of the Karoo Supergroup dating from the Permian period, between 280 and 250 Ma. The Ecca Group is extensive, covering around two thirds of South Africa (much of it overlain by slightly younger rocks). Only the northern and north-eastern portion of these Ecca deposits is coal-bearing, but it nevertheless contains more than a third of all coal reserves in the Southern Hemisphere. Notable coalfields include the Witbank and Waterberg coalfields. South Africa is one of the seven largest coal-producing and one of the top five coal-exporting countries in the world. More than a quarter of coal mined in South Africa is exported, most of which leaves the country via Richards Bay. Coal is South Africa's third-largest source of foreign exchange, with platinum the largest and gold second. Around 15% of the country's GDP (2000 estimate) is spent on energy, and 77% of that is derived from coal. In 2004, the coal and lignite mining industry generated a gross income of R39 billion and directly employed 50,000 people. The Witbank Coalfield accounts for 40% of South Africa's coal production. The five largest coal mining companies account for around 85% of all production: Anglo American plc, South32's South Africa Energy Coal, Sasol Mining, Glencore Xstrata, and Exxaro. Open-pit mining accounts for roughly half of South African coal mining operations, the other half being sub-surface. Electricity generation accounts for 43% of all coal consumed in South Africa (1997 estimate). Many of the country's coal-fired power stations are located in close proximity to a coal mine and are supplied with fuel directly from the mine. The Grootegeluk open-cast mine on the Waterberg Coalfield in Limpopo is one of the largest in the country and feeds the Matimba Power Station with about 14.6 million tons of coal a year via a conveyor system. The mine is also contracted to supply the new Medupi Power Station. Around 35% of liquid fuel used in South Africa is derived from coal mined by Sasol Mining at the Secunda CTL plants. In 1995, around a million lower-income households in South Africa depended on coal as their primary energy source for cooking, lighting and heating. This number decreased steadily during the first decade of the 21st century due to the expansion of electricity supply to lower-income households and rural regions. Environmentalists in South Africa and abroad have criticized the World Bank's approval of a $3.75 billion loan to build the world's fourth-largest coal-fired power plant in South Africa. The plant will increase the demand for coal mining and production. Protesters are urging the bank to stop supporting the development of coal plants and other large emitters of greenhouse gases, as well as polluting coal mining operations.
Usage of coal and coal-derived liquid fuel accounts for around 86% of the 113 million tons of carbon dioxide South Africa emits annually (2006 estimate) and represents around 40% of Africa's total coal-derived CO2 emissions. The largest contributor to coal-derived air pollution is household coal usage (65%), followed by industry (30%) and electricity generation (5%). Some coal mines have been abandoned by their owners, mainly because the owning companies ceased to exist. Many of these mines, such as the Transvaal and Delagoa Bay Collieries (T&DB) outside Witbank, were not rehabilitated before being abandoned and are a major source of water and air pollution. It is estimated that clean-up and rehabilitation of the T&DB Collieries will cost around R100 million. Coal seam fires were common but controlled at the T&DB Collieries during the mine's operation; since the mine closed in 1953, however, the fires have been left to burn out of control, to the extent that in 1995 flames could be seen above ground.
i know pronounce you chuck and larry imdb
I Now Pronounce You Chuck & Larry - Wikipedia I Now Pronounce You Chuck & Larry is a 2007 American comedy film directed by Dennis Dugan. It stars Adam Sandler and Kevin James as the title characters Chuck and Larry, respectively. The film was released in the United States on July 20, 2007. It was Sandler's first role in a Universal Pictures film since Bulletproof in 1996. Chuck Levine, a womanizing bachelor, and Larry Valentine, a widower struggling to raise his two children, are two veteran New York City firefighters. During a routine sweep of a burned building, a segment of floor collapses on Chuck, but Larry saves his life. Chuck vows to repay Larry in any way possible. Experiencing an epiphany from the incident, Larry tries to increase his life insurance coverage, but he runs into difficulties naming his children as primary beneficiaries. He is told he should remarry so his new spouse can be the primary beneficiary; however, no one specifies whom he has to marry. Inspired by a newspaper article about domestic partnerships, Larry asks Chuck to enter a civil union with him. Although Chuck declines at first, he is reminded of his debt to Larry and finally agrees, entering a domestic partnership and becoming Larry's primary beneficiary in the event of his death. To their dismay, however, investigators arrive to inquire about their abrupt partnership, suspecting fraud. Chuck and Larry decide to enlist the help of lawyer Alex McDonough, who suggests they get married and move in together to prove they are committed; Chuck reluctantly agrees. The pair travel to Niagara Falls in Canada for a quick same-sex marriage at a wedding chapel and begin living together. At a gay benefit costume party, the partygoers are confronted by homophobic protesters. Chuck is provoked into punching their leader, and the incident is picked up by the local news. With their apparent homosexuality and marriage revealed, Chuck and Larry are heckled, and their fellow FDNY firefighters refuse to work with them. Their only ally is Fred G. Duncan, an angry, intimidating firefighter who comes out to Chuck. Chuck becomes romantically interested in Alex after the two spend time together, but finds himself unable to get close to her because she thinks he is gay. During another meeting at her apartment, Chuck and Alex make charm bracelets and soon kiss, but Alex, still believing Chuck is gay and married, is shocked and immediately distances herself from him. Meanwhile, city agent Clinton Fitzer arrives to investigate the couple, and the strain on both Larry and Chuck causes them to fight. Larry learns about the kiss and confronts Chuck about it, asserting that Chuck's absence is jeopardizing their ability to maintain the ruse of their relationship. During the argument, Larry reveals that he is still in love with his deceased wife, Paula, and Chuck responds that he needs to move on for the sake of his children. Later that evening, a petition circulates to have Chuck and Larry thrown out of the firehouse. Upon discovering it, a hurt Larry confronts the crew about personal embarrassments on the job that he and Chuck helped them overcome. Afterwards, Chuck and Larry reconcile their differences. Eventually, numerous women publicly testify to having slept with Chuck in the recent past, and the couple is called into court to defend their marriage against charges of fraud. They are defended by Alex, and their fellow firefighters arrive in support, having realized all that Chuck and Larry have done for them over the years.
Fitzer interrogates both men and eventually demands that the pair kiss to prove their relationship is physical. Before they can do so, Chuck and Larry are interrupted by FDNY Captain Phineas J. Tucker, who reveals their marriage to be a sham and that they are both straight. He then offers to be arrested as well, since he knew about the false relationship but failed to report it. This prompts each of the other firefighters to claim a role in the wedding in a show of solidarity. Chuck, Larry, and the other firefighters are sent to jail, but they are quickly released after negotiating a deal to provide photos for an AIDS research benefit calendar, and Chuck and Larry keep their benefits. Two months later, Duncan and Alex's brother, Kevin, are married in Niagara Falls at the same chapel as Chuck and Larry. At the wedding party, Larry moves on from the death of his wife and talks to a new woman, while Alex agrees to a dance with Chuck. Producer Tom Shadyac had planned this film as early as 1999. I Now Pronounce You Joe and Benny, as the film was then titled, was announced as starring Nicolas Cage and Will Smith, with Shadyac directing. The official trailer featured the song "Grace Kelly" by British pop star Mika. Rotten Tomatoes, a review aggregator, reports that 14% of 157 surveyed critics gave the film a positive review, with an average rating of 3.6/10. The site's critical consensus reads, "Whether by way of inept comedy or tasteless stereotypes, I Now Pronounce You Chuck & Larry falters on both levels." On Metacritic, the film has a score of 37 out of 100 based on 33 critics, indicating "generally unfavorable reviews". Audiences polled by CinemaScore gave the film an average grade of "B+" on an A+ to F scale. USA Today called it "a movie that gives marriage, homosexuality, friendship, firefighters, children and nearly everything else a bad name." The Wall Street Journal called it "an insult to gays, straights, men, women, children, African-Americans, Asians, pastors, mailmen, insurance adjusters, firemen, doctors -- and fans of show music." The New York Post called it an insult not to homosexuality but to comedy itself. The Miami Herald was slightly less critical, calling the film "funny in the juvenile, crass way we expect." Nathan Lee of the Village Voice wrote a positive review, praising the film for being "tremendously savvy in its stupid way" and "as eloquent as Brokeback Mountain, and even more radical." Controversial critic Armond White championed the film as "a modern classic" for its "ultimate moral lesson -- that sexuality has absolutely nothing to do with who Chuck and Larry are as people". Chuck & Larry grossed $34,233,750 and ranked #1 at the domestic box office in its opening weekend, ahead of that weekend's other wide release, Hairspray, and the previous weekend's #1 film, Harry Potter and the Order of the Phoenix. By the end of its run, the film had grossed $120,059,556 domestically and $66,012,658 internationally, for a worldwide total of $186,072,214. The film received eight Golden Raspberry Award nominations, including Worst Picture, Worst Actor (Adam Sandler), Worst Supporting Actor (both Kevin James and Rob Schneider), Worst Supporting Actress (Jessica Biel), Worst Director (Dennis Dugan), Worst Screenplay and Worst Screen Couple (Adam Sandler with either Kevin James or Jessica Biel), but failed to win any. The film was screened prior to release for the Gay & Lesbian Alliance Against Defamation (GLAAD).
GLAAD representative Damon Romine told Entertainment Weekly magazine: "The movie has some of the expected stereotypes, but in its own disarming way, it's a call for equality and respect". According to Alexander Payne, the writer of an initial draft of the film, Sandler took many liberties with his screenplay, "Sandler-izing" the film, in Payne's words. At one point, Payne did not want his name attached to the project. Critics have also called the character played by Rob Schneider a racist caricature, and Schneider was criticized for donning yellowface. In November 2007, the producers of the Australian film Strange Bedfellows initiated legal action against Universal Studios for copyright violation. The suit was withdrawn in April 2008 after the producers of Strange Bedfellows received an early draft of Chuck & Larry that predated their film and were satisfied that they had not been plagiarized.
when was lost in space first on tv
Lost in Space - Wikipedia Lost in Space is an American science fiction television series created and produced by Irwin Allen. The series follows the adventures of a pioneering family of space colonists who struggle to survive in a strange and often hostile universe after their ship is sabotaged and thrown off course. The show ran for three seasons, with 83 episodes airing between 1965 and 1968. The first season was filmed in black and white, with the second and third seasons filmed in color. Although the original concept (depicted in the pilot episode "No Place to Hide", not aired until 1997) centered on the Robinson family, many later storylines focused primarily on Dr. Zachary Smith, played by Jonathan Harris. Smith and Robot B-9 were both absent from the unaired pilot, as the addition of their characters was only decided upon once the series had been commissioned for production. Originally written as an utterly evil (if careless) saboteur, Smith gradually became the troublesome, self-centered, incompetent character who provided the comic relief for the show and caused much of the conflict and misadventure. In the unaired pilot, what caused the group to become lost in space was a chance encounter with a meteor storm, but in the first aired episode it was Smith's unplanned presence on the ship that sent it off course into the meteor field, and his sabotage that caused the Robot to accelerate the ship into hyperdrive. Smith is thus the key to the story. On October 16, 1997, 32 years in the future from the perspective of viewers in 1965, the United States is about to launch one of history's great adventures: man's colonization of space. The Jupiter 2 (called the Gemini 12 in the unaired pilot episode), a futuristic saucer-shaped spacecraft, stands on its launch pad undergoing final preparations. Its mission is to take a single family on a five-and-a-half-year journey (altered from 98 years in the unaired pilot) to a planet orbiting the nearest star, Alpha Centauri (the pilot had referred to the planet itself as Alpha Centauri, but this error was corrected for the series), which space probes have revealed possesses ideal conditions for human life. All of this is presented as a news report of a real space expedition, with news commentators relating the mission's backstory. The Robinson family, selected from two million volunteers for this mission, consists of Professor John Robinson, played by Guy Williams; his wife, Maureen, played by June Lockhart; and their children, Judy (Marta Kristen), Penny (Angela Cartwright), and Will (Billy Mumy). They are accompanied by their pilot, U.S. Space Corps Major Donald West (Mark Goddard), who is trained to fly the ship when the time comes for the eventual landing. Initially, the Robinsons and Major West are to spend the voyage in freezing tubes, which open when the spacecraft approaches its destination. Unless there is a problem with the ship's navigation or guidance system during the voyage, Major West is only to take the controls during the final approach and landing on the destination planet, while the Robinsons strap themselves into contour couches on the lower deck for the landing. Other nations are racing to colonize space, and they will stop at nothing, not even sabotage, to thwart the United States' effort. It turns out that Dr.
Zachary Smith (Jonathan Harris), Alpha Control's doctor, later said to be a psychiatrist and environmental control expert, is secretly an agent for one of those competing nations. After disposing of a guard who catches him on board the spacecraft, Smith reprograms the Jupiter 2's B-9 environmental control robot (voiced by Dick Tufeld) to destroy critical systems on the spaceship eight hours after launch. Smith, however, unintentionally becomes trapped aboard at launch, and his extra weight throws the Jupiter 2 off course, causing it to encounter a meteor storm. This, plus the Robot's rampage, which forces the ship to prematurely engage its hyperdrive, leaves the expedition hopelessly lost in the infinite depths of outer space. Smith's selfish actions and laziness frequently endanger the expedition. After the first half of the first season, Smith's role assumes less sinister overtones, although he continues to display many character flaws. In "The Time Merchant", Smith shows he actually does care about the Robinsons when he travels back in time to the day of the Jupiter 2 launch, hoping to change his own fate by not boarding the ship and so allowing the Robinsons to start the mission as originally planned. However, he learns that without his weight altering the ship's course, the Jupiter 2 will be destroyed by an uncharted asteroid. So he sacrifices his chance to stay on Earth, electing to re-board the ship, thus saving the lives of the family and continuing his role among them as the reluctant stowaway. The fate of the castaways is never resolved, as the series was unexpectedly cancelled at the end of season 3. The astronaut family of Dr. John Robinson, accompanied by an Air Force/Space Corps pilot and a robot, sets out in the year 1997 from an overpopulated Earth in the spaceship Jupiter 2, to travel to a planet circling the star Alpha Centauri with hopes of colonizing it. The Jupiter 2 mission is sabotaged by Dr. Zachary Smith -- an agent for an unnamed foreign government -- who slips aboard the spaceship and reprograms the robot to destroy the ship and crew. However, when he is trapped aboard, his excess weight alters the craft's flight path and places it directly in the path of a massive meteor storm. Smith manages to save himself by prematurely reviving the crew from suspended animation. The ship survives, but consequent damage caused by Smith's earlier sabotage of the robot leaves them lost in space. In the third episode the Jupiter 2 crash-lands on an alien world, later identified by Will as Priplanus, where the castaways spend the rest of the season and survive a host of adventures. Smith, whom Allen had originally intended to write out, remains through all three seasons as a source of comedic cowardice and villainy, exploiting the eternally forgiving nature of Professor Robinson. Smith was liked by the trusting Will and tolerated by the women, but he was disliked by both the Robot and the suspicious Major Don West. At the start of the second season the repaired Jupiter 2 launches into space once more, to escape the destruction of Priplanus following a series of cataclysmic earthquakes, but in the fourth episode the Robinsons crash-land on a strange new world, becoming planet-bound again for another season. This replicated the format of the first season, but the focus of the series was now more on humor than on action/adventure, as evidenced by the extreme silliness of Dr.
Smith amidst a plethora of unlikely aliens who began appearing on the show, often of a whimsical, fantasy-oriented nature. One of these colorful visitors even turned out to be Smith's own cousin, intent on swindling him out of a family inheritance with the assistance of a hostile gambling machine. A new theme tune was recorded by Warren Barker for the second season, but it was decided to retain the original. In the third season, a major format change was introduced to bring the series back to its roots as solid adventure, by allowing the Robinsons to travel to more planets. The Jupiter 2 was now allowed to freely travel space, visiting a new world each week, as the family attempted to return to Earth or to reach their original destination in the Alpha Centauri system. A newly built "Space Pod", which mysteriously appeared as though it had always been there, provided a means of transportation between the ship and passing planets, as well as being a plot device to launch various escapades. This season had a dramatically different opening credits sequence and a new theme tune -- which, like the original, was composed by John Williams -- as part of the show's new direction. During its three-season run, many actors made guest appearances, including familiar faces and actors who went on to become well known. Among those appearing in Lost in Space episodes were Kevin Hagen, Alan Hewitt, Sherry Jackson, Werner Klemperer, Warren Oates, Don Matheson, Kurt Russell, Wally Cox, Grant Sullivan, Norman Leavitt, Tommy Farrell, Mercedes McCambridge, Lyle Waggoner, Albert Salmi, Royal Dano, Strother Martin, Michael J. Pollard, Byron Morrow, Arte Johnson, Fritz Feld, John Carradine, Al Lewis, Hans Conried, Dennis Patrick and Michael Rennie, among many others. Future Hill Street Blues stars Daniel J. Travanti (billed as "Danny Travanty") and Michael Conrad made guest appearances on separate episodes. Jonathan Harris, although a permanent cast member, was listed in the opening credits as "Special Guest Star" in every episode of Lost in Space. In 1962, the first appearance of a space-faring Robinson family occurred in a comic book published by Gold Key Comics. The Space Family Robinson, scientists aboard Earth's "Space Station One", are swept away in a cosmic storm in the comic's second issue. These Robinsons were scientist father Craig, scientist mother June, and early teens Tim (son) and Tam (daughter), along with pets Clancy (dog) and Yakker (parrot). Space Station One also boasted two spacemobiles for ship-to-planet travel. The television show launched three years later, in 1965, and during its run CBS and 20th Century Fox reached an agreement with Gold Key Comics that allowed the use of the name "Robinson" on the TV show; in return, the comic was allowed to append "-- Lost In Space" to its title, with the potential for the TV show to propel additional sales of the comic. Following the format of Allen's first television series, Voyage to the Bottom of the Sea, unlikely fantasy-oriented adventure stories were dreamed up that had little to do with either serious science or serious science fiction. The stories had little realism, with, for instance, explosions happening randomly merely to cover an alien's arrival or departure, or sometimes the arrival of some alien artifact. Props and monsters were regularly recycled from other Irwin Allen shows as a matter of budgetary convenience, and the same alien would appear on Voyage one week and Lost in Space the next.
A sea monster outfit that had featured on Voyage would get a spray paint job for its Lost in Space appearance, while space monster costumes would be reused on Voyage as sea monsters. The clear round plastic pen holder used as a control surface in the episode "The Derelict" turned up regularly throughout the show's entire run, both as a primary control to activate alien machinery (or open doors or cages) and as background set dressing; such controls can be seen in episodes including Season 1's "The Keeper" (Parts 1 and 2) and "His Majesty Smith", and Season 3's "A Day At The Zoo" and "The Promised Planet". Spacecraft models, too, were routinely re-used. The foreboding derelict ship from season 1 was redressed to become the Vera Castle in season 3, which, in turn, was reused in several episodes (and flipped upside down for one of them). The Fuel Barge from season 2 became a Space Lighthouse in season 3, with a complete re-use of the effects footage from the earlier story. The derelict ship was used again in season 3, with a simple color change. Likewise, the alien pursuer's ship in "The Sky Pirate", itself an Earth ship lifted from the 1958 film War of the Satellites, was re-used in the episode "Deadliest of the Species". Moreover, the footage of Hapgood's ship launching into space in episode 6 of season 1 was re-used for virtually every subsequent launch over the following three years, no matter what shape the ship it supposedly represented had had on the ground. Examples include the launch of the alien ship in Season 1's "The Space Croppers"; the launch of Nerim's ship in the Season 2 opener "Blast Off Into Space"; Season 2's "The Colonists", which featured a landing approach that was aborted, a reversal of the film showing the ship heading back into space after Will, the Robot, and Smith sabotage Niolani's landing pad for her people's colonization of the planet; and Season 3's "A Day At The Zoo", in which Farnum B.'s ship takes off at the end of the episode. While several of the episodes listed do not show the spaceship on the ground (or show only part of it, as with Nerim's), they all use the same footage as seen for Hapgood's ship. By the end of the first season, the character of Smith is permanently established as a bungling, self-serving, greedy, manipulative coward. These character traits are magnified in subsequent seasons. His haughty bearing and ever-present alliterative repartee were staples of the character. While he and Major West repeatedly clashed over his goldbricking or some villainy he had perpetrated, the Robot was usually the preferred victim of his barbed and acerbic wit. Despite Harris being credited as a "Special Guest Star" in every episode, Smith became the pivotal character of the series. Harris was the last actor cast, the others all having appeared in the unaired pilot. He was informed that he would "have to be in last position" in the credits. Harris voiced discomfort at this and (with his continuation beyond the first few episodes still in doubt) suggested appearing in the last position as "Special Guest Star". After Harris had "screamed and howled", Allen agreed. The show's writers expected that Smith would indeed be a temporary villain who would appear only in the early episodes. Harris, on the other hand, hoped to stay longer on the show, but he found his character as written very boring, and feared it would quickly bore the audience too.
Harris "began rewriting his lines and redefining his character '', by playing Smith in an attention - getting, flamboyant style, and ad - libbing his scenes with colorful, pompous dialogue. Allen quickly noticed this, and liked it. As Harris himself recalled, Allen said, "I know what you 're doing. Do more of it! '' Mumy recalls how, after he had learned his own lines, Harris would ask to rehearse with him using his own dialogue. "He truly, truly single - handledly created the character of Dr. Zachary Smith that we know, '' said Mumy. "This man we love - to - hate, a sniveling coward who would cower behind the little boy, ' Oh, the pain! Save me, William! ' That 's all him! '' Lost in Space is remembered, in part, for the Robot 's oft - repeated lines such as "Warning! Warning! '' and "It does not compute ''. Smith 's frequent put - downs of the Robot were also popular, and Jonathan Harris was proud to talk about how he used to lie in bed at night dreaming them up for use on the show. "You Bubble - headed Booby! '', "Cackling Cacophony '', "Tin Plated Traitor '', "Blithering Blatherskyte '', and "Traitorous Transistorized Toad '' are but a few alongside his trademark lines: "Oh, the pain... the pain! '' and "Never fear, Smith is here! '' One of Jonathan Harris 's last roles was providing the voice of the illusionist praying mantis "Manny '' in Disney 's A Bug 's Life, where Harris used "Oh, the pain... the pain! '' near the end of the film. The catchphrase "Danger, Will Robinson! '' originates with the series, but was only ever used once in it, during season 3, episode 11: "The Deadliest of the Species '', when the Robot warns young Will Robinson about an impending threat. It was also used as the slogan of the 1998 movie, whose official website had the address www.dangerwillrobinson.com. In 1962, Gold Key comics (formerly Dell Comics), a division of Western Publishing Company, began publishing a series of comic books under the title Space Family Robinson. The story was largely inspired by The Swiss Family Robinson but with a space - age twist. The movie and television rights to the comic book were then purchased by noted television writer Hilda Bohem (The Cisco Kid), who created a treatment under the title, Space Family 3000. In July 1964, science fiction writer and filmmaker Ib Melchior began pitching a treatment for a feature film, also under the title Space Family Robinson. There is debate as to whether or not Allen was aware of the Melchior treatment. It is also unknown whether Allen was aware of the comic book or the Hilda Bohem treatment. As copyright law only protects the actual expression of a work, and not titles, general ideas or concepts, in 1964 Allen moved forward with his own take on Space Family Robinson, with characters and situations notably different from either the Bohem or the Melchior treatments (none of the three treatments contained the characters of Smith or the Robot). Intended as a follow up to his first successful television venture, Voyage to the Bottom of the Sea, Allen quickly sold his concept for a television series to CBS. Concerned about confusion with the Gold Key comic book, CBS requested that Allen come up with a new title. Nevertheless, Hilda Bohem filed a claim against Allen and CBS Television shortly before the series premiered in 1965. A compromise was struck as part of a legal settlement. In addition to an undisclosed sum of money, Western Publishing would be allowed to change the name of its comic book to Lost in Space. 
There were no other legal challenges to the title until 1995, when New Line Cinema announced its intention to turn Lost in Space into a big-budget motion picture. New Line had purchased the screen rights from Prelude Pictures (which had acquired them from the Irwin Allen Estate in 1993). At that time, Melchior contacted Prelude Pictures and insisted that Lost in Space was directly based upon his 1964 treatment. Melchior was aided in his efforts by Ed Shifres, a fan who had written a book entitled Space Family Robinson: The True Story (later reprinted with the title Lost in Space: The True Story). The book attempts to show how Allen allegedly plagiarized Melchior's concept, with the two outlines presented side by side. To satisfy Melchior, Prelude Pictures hired the 78-year-old filmmaker as a consultant on their feature film adaptation. This accommodation was made without the knowledge or consent of the Irwin Allen Estate or Space Productions, the original copyright holder of Lost in Space. Melchior's contract with Prelude also guaranteed him 2% of the producer's gross receipts, a provision that later became the subject of a suit between Melchior and Mark Koch of Prelude Pictures. Although an appellate court ruled partly in Melchior's favor, on November 17, 2004, the Supreme Court of California denied a petition by Melchior to further review the case. No further claim was made, and Space Productions now contends that Allen was the sole creator of the television series Lost in Space. Melchior died on March 14, 2015 at the age of 97. In 1965, Allen filmed a 50-minute black-and-white series pilot for Fox, usually known as "No Place to Hide" (although this title does not actually appear on screen). After CBS accepted the series, the characters of Smith and the Robot were added, and the spaceship, originally named the Gemini 12, was redesigned with a second deck, interior equipment, and some slightly altered consoles, and was rechristened the Jupiter 2. For budget reasons, a good part of the footage included in the pilot episode was reused, being carefully worked into the early series episodes. CBS was also offered Star Trek at around the same time, but turned it down in favor of Lost in Space. In an ironic twist of fate, CBS gained the television rights to the Star Trek franchise in 2006. The Lost in Space television series was originally named Space Family Robinson. Allen was apparently unaware of the Gold Key comic of the same name and similar theme. Both were space versions of the novel Swiss Family Robinson, with the title changed accordingly. Gold Key Comics did not sue Allen's production company or 20th Century Fox for copyright infringement, as Allen was expected to license the rights for comic book adaptations of his TV properties (as he already had with Voyage to the Bottom of the Sea); instead, it changed the title of the comic to Lost in Space to take advantage of the series' prominence. The first season emphasized adventure, chronicling the day-to-day experiences a pioneer family might have if marooned on an alien world. The first half of season 1 dealt with the planet's unusual orbit, which produced extremes of cold and heat, and with the Robinson party trekking around the rocky terrain and stormy inland oceans of Priplanus in the Chariot to avoid those extremes, encountering dangerous native plants and animals and a few reasonably plausible off-world visitors.
However, midway through the first season, following the two-parter "The Keeper", the format changed to a "monster of the week" style, and the stories slipped, almost unnoticeably, from straight adventure to stories based on fantasy and fairy tales, with Will even awakening a sleeping princess with a kiss in episode 27 of season 1, "Lost Civilization". Excepting special-effects shots (which were reused in later seasons), the first season was filmed in black and white, while the second and third seasons were filmed in color. Beginning in January 1966, ABC scheduled Batman in the same time slot. To compete with that show's enormous success, Lost in Space Season 2 imitated Batman's campy style of humor. Bright outfits, over-the-top action, and outrageous villains (Space Cowboys, Pirates, Knights, Vikings, Wooden Dragons, Amazon Feministas and even Bespectacled Beagles) came to the fore, in outlandish stories. The Robinsons were cloned by a colorful critter in one story, and even attacked by a civilization of "Little Mechanical Men" resembling their Robot, who wanted him for their leader, in another tale. To make matters worse, stories giving all the characters focus were sacrificed in favor of a growing emphasis on Smith, Will, and the Robot -- and Smith's change in character was not appreciated by the other actors. According to Billy Mumy, Mark Goddard and Guy Williams both disliked the shift away from serious science fiction. The third season had more straight adventure, with the Jupiter 2 now functional and hopping from planet to planet, but the episodes still tended to be whimsical and to emphasize humor, featuring fanciful space hippies, more pirates, off-beat intergalactic zoos, ice princesses and Lost in Space's own beauty pageant. One of the last episodes, "The Great Vegetable Rebellion", with actor Stanley Adams as Tybo, the talking carrot, took the show into pure fantasy. (Called "the most insipid and bizarre episode in television history", it prompted Goddard, as Kristen recalls, to complain that "seven years of Stanislavski" method acting had culminated in his talking to a carrot.) A writer Irwin Allen had employed to script a planned crossover between Lost in Space and Land of the Giants noted that Allen seemed scared of the vegetable story, as it got pushed further and further back in the schedule while the filming of season 3 progressed. The episode's writer, Peter Packer, one of the most prolific writers for Lost in Space and the author of some of the series' best episodes, apologized to Harris for the script, stating he hadn't "one more damned idea in my head". During filming, Guy Williams and June Lockhart were written out of the next two episodes, on full pay, for laughing so much during the production. Astute viewers will notice the smirks as the actors, notably Mark Goddard, tried to contain themselves during the filming of the story. "The Great Vegetable Rebellion" appears to have directly inspired the "Rules of Luton" episode of Space: 1999 season 2. In 1977, TV Guide listed "The Great Vegetable Rebellion" at position #76 in its list of the '100 Best Episodes of All Time'. During the first two seasons, episodes concluded in a "live action freeze" anticipating the following week, with the cliff-hanger caption, "To be continued next week! Same time -- same channel!"
For the third season, each episode concluded with a vocal "teaser" from the Robot (Dick Tufeld), warning viewers to "Stay tuned for scenes from next week's exciting adventure!", followed by highlights from the next episode and then the closing credits. After cancellation, the show was successful in reruns and in syndication for many years, appearing on the USA Network in the mid-to-late 1980s and more recently on FX, Syfy, and ALN. It is currently available on Hulu streaming video and is seen Saturday nights on MeTV. There are fan clubs in honor of the series and cast all around the world, many Facebook group pages connecting fans of the show to each other and to the show's stars, and even a weekly podcast dedicated to profiling every episode of the series (initiated over fifty years after the series first aired). There was little ongoing plot continuity between episodes, except in larger goals: for example, to get enough fuel to leave the planet. However, some arcs allowed a small amount of continuity. The first half of the first season had an overall arc, as the Robinsons slowly got used to Smith and the Robot began his journey to becoming a thinking, self-aware character; there were also four sets of arc stories and two arcs of linked stories. In addition, some episodes had sequels ("Return from Outer Space" follows from events in "The Sky Is Falling"), and there were recurring characters, including the space pirate Tucker, the android Verda, the Green Dimension girl Athena, and Farnum B (each of whom appears in two episodes), as well as Zumdish (who appears in three episodes). In an odd moment of referential continuity, "Two Weeks in Space" reused the "space music" angle first shown in "Kidnapped in Space", while the "Celestial Department Store Ordering Machine" appeared in two episodes. In early 1968, while the final third-season episode "Junkyard of Space" was in production, the cast and crew were informally led to believe the series would return for a fourth season, and Allen had ordered new scripts for the coming year. A few weeks later, however, CBS announced the list of television series it was renewing for the 1968-69 season, and Lost in Space was not on it. Although CBS programming executives never offered official reasons for the cancellation, series executives, critics and fans have suggested at least five, any one of which could be considered sufficient justification given the state of the broadcast network television industry at the time. As there was no official final episode, the exploring pioneers never made it to Alpha Centauri nor found their way back to Earth. The show had sufficient ratings to support a fourth season, but it was expensive: the budget per episode was $130,980 for season one and $164,788 for season three. In that time the actors' salaries had also increased; in the case of Harris, Kristen and Cartwright, they nearly doubled. Part of the cost problem may have been the actors themselves: director Don Richardson complained of Williams demanding close-ups of himself. The interior of the Jupiter 2 was the most expensive set for a television show at the time, at about $350,000 (more than the set of the USS Enterprise a couple of years later). According to Mumy and other sources, the show was initially picked up for a fourth season, but with a cut budget.
Reportedly, 20th Century Fox was still recovering from the legendary budget overruns of Cleopatra and had slashed budgets across the board in its film and television productions. Allen claimed the series could not continue with a reduced budget. During a negotiating conference about the series' direction for the fourth season with CBS chief executive Bill Paley, Allen stormed out of the meeting when told that the budget was being cut by 15% from season three, thereby sealing the show's cancellation. Robert Hamner, one of the show's writers, states (in Starlog #220, November 1995) that Paley despised the show so much that the budget dispute was used as an excuse to terminate the series. Years later, Paley said this was incorrect and that he was a fan of "the Robot". The Lost in Space Forever DVD cites declining ratings and escalating costs as the reasons for cancellation. Even Irwin Allen admitted that the season 3 ratings showed an increasing percentage of children among the total viewers, meaning a drop in the "quality audience" that advertisers preferred. A contributing factor, at least, was that June Lockhart and director Don Richardson were no longer excited about the show. Told of the cancellation by Perry Lafferty, the head of CBS programming, Lockhart replied, "I think that's for the best at this point," although she later said she would have stayed had there been a fourth season; she joined the cast of CBS's Petticoat Junction immediately after the cancellation. Richardson had been tipped off that the show was likely to be cancelled, was looking for another series, and had decided not to return to Lost in Space even if it continued. Guy Williams had grown embittered with his role as the show became increasingly campy in seasons 2 and 3 while centering squarely on the antics of Harris's Dr. Smith character. Whether Williams would have returned for a fourth season was never revealed, but he never acted again after the series, choosing instead to retire to Argentina. The final prime-time episode broadcast by CBS was "A Visit to Hades", a second-season repeat, on September 11, 1968. In 1995, Kevin Burns produced a documentary showcasing the career of Irwin Allen, hosted by Bill Mumy and June Lockhart on a recreation of the Jupiter 2 exterior set. Mumy and Lockhart use the "Celestial Department Store Ordering Machine" as a temporal conduit to show information and clips on Allen's history; clips from Allen's various productions, as well as pilots for his unproduced series, were presented along with new interviews with cast members of Allen's shows. Mumy and Lockhart complete their presentation and enter the Jupiter 2, after which Jonathan Harris appears in character as Smith and instructs the Robot once again to destroy the ship as per his original instructions: "... and this time get it right, you bubble-headed booby". In 1998, Burns produced a television special about the series hosted by John Larroquette and Robot B-9 (performed by actor Bob May and voiced by Dick Tufeld), set within a recreation of the Jupiter 2 upper-deck set. The program ends with Larroquette mockingly pressing a button on the amulet from "The Galaxy Gift" episode, disappearing, and being replaced by Mumy and Harris as an older Will Robinson and Zachary Smith. They attempt one more time to return to Earth but find that they are "Lost in Space... Forever!"
The crew had a variety of methods of transportation over the course of the series. Their spaceship is the two-deck, nuclear-powered Jupiter 2, a flying-saucer spacecraft. In the unaired pilot episode, the ship was named the Gemini 12 and consisted of a single deck; the version seen in the series was modified with a lower level and landing legs, among other changes, though footage of the Gemini 12 from the unaired pilot was still reused in early episodes. The Gemini 12 was designed by William Creber, and Robert Kinoshita redesigned it for the series as the Jupiter 2. On the lower level were the atomic motors (which used "deutronium" for fuel), the living quarters (featuring Murphy beds), the galley, the laboratory, and the Robot's "magnetic lock". On the upper level were the guidance control system and the suspended-animation "freezing tubes" necessary for non-relativistic interstellar travel. The two levels were connected by both an electronic glide-tube elevator and a fixed ladder. The Jupiter 2 explicitly had artificial gravity. Entry and exit were via the main airlock on the upper level, via the landing struts from the lower deck, and, according to one season 2 episode, through a back door. The spacecraft was also intended to serve as the Robinsons' home once it had landed on the destination planet orbiting Alpha Centauri. In one third-season episode, "Space Creature", the power core of the Jupiter 2 is shown as impossibly oversized, though this may have been a product of the episode's plot, in which the travellers' fears were amplified to nourish an alien that fed on fear. Even the lower deck could not possibly fit into the Jupiter 2 as seen from the outside; the two levels were actually built on separate soundstages but made to seem connected on the series, masking the scale discrepancies. Shown only in season 1, episode 3, "Island in the Sky", the "para-jet" thrusters, a pair of units worn on the forearms, allowed a user to descend to a planet from the orbiting spaceship and, perhaps, return to the ship. The "Chariot" was an all-terrain, amphibious tracked vehicle that the crew used for ground transport on a planet. As stated in episode 3 of season 1, the Chariot was carried disassembled during flight, to be reassembled once on the ground. The Chariot was actually an operational, cannibalized version of the Thiokol Spryte snowcat, powered by a Ford 170-cubic-inch (3 L) inline-6 engine of 101 horsepower driving a 4-speed automatic transmission with reverse. Test footage of the Chariot filmed for the first season of the series can be seen on YouTube. Most of the Chariot's body panels were clear, including the roof and its dome-shaped "gun hatch". Both a roof rack for luggage and roof-mounted "solar batteries" were accessible by fixed exterior ladders on either side of the vehicle. It had dual headlights and dual auxiliary area lights beneath the front and rear bumpers. The roof also had swivel-mounted, interior-controllable spotlights near each front corner, with a small parabolic antenna mounted between them. The Chariot had six bucket seats (three rows of two) for passengers. The interior featured retractable metallised-fabric privacy curtains, a seismograph, a scanner with infrared capability, a radio transceiver, a public address system, and a rifle rack holding four laser rifles vertically near the inside of the left rear corner body panel ("Island in the Sky").
The jet pack (specifically, a Bell Rocket Belt), then a new and exciting invention, was used occasionally by Professor Robinson or Major West. Finally, the "Space Pod", a small mini-spacecraft modeled on the Apollo Lunar Module, was introduced in the third and final season. The Pod was used to travel from its bay in the Jupiter 2 to destinations on a nearby planet or in space, and it apparently had artificial gravity and an auto-return mechanism. For self-defense, the crew of the Jupiter 2 (including Will on occasion) had an arsenal of laser guns at their disposal, both sling-carried rifles and holstered pistols. (The first season's personal-issue laser gun was a film prop modified from a toy semi-automatic pistol made by Remco; the laser itself had been invented only five years earlier.) The crew also employed a force field around the Jupiter 2 for protection while on alien planets. The force-field generator normally protected the campsite, but in one season 3 episode it was able to shield the entire planet. For communication, the crew used small transceivers to reach each other, the Chariot, and the ship. In "The Raft", Will improvised several miniature rockoons in an attempt to send an interstellar "message in a bottle" distress signal, and in season 2 a set of relay stations was built to further extend communications while planet-bound. Their environmental-control Robot B-9 ran air and soil tests, was extremely strong, could discharge powerful electrostatic charges from his claws, could detect threats with his scanner, and could produce a defensive smoke screen. The Robot could detect faint smells (in "One of Our Dogs Is Missing") and could both understand and produce speech (including alien languages); he also claimed the ability to read human minds by translating emitted thought waves back into words ("Invaders from the Fifth Dimension", S01E08). The Jupiter 2 had some unexplained advanced technology that simplified or did away with mundane tasks: the "auto-matic laundry" took seconds to clean, iron, fold, and package clothes in clear plastic bags, and the "dishwasher" would similarly clean, wash, and dry dishes in just seconds. Some technology reflected recent real-world developments. Silver reflective space blankets, then a new invention developed by NASA in 1964, were used in "The Hungry Sea" (air date: October 13, 1965) and "Attack of the Monster Plants" (air date: December 15, 1965). The crew's spacesuits were made with aluminum-coated fabric, like NASA's Mercury spacesuits, and had Velcro fasteners, which NASA first used during the Apollo program (1961-1972). While the crew normally grew a hydroponic garden on a planet as an intermediate step before cultivating its soil, they also had "protein pills" (a complete nutritional substitute for whole foods) for emergencies ("The Hungry Sea", air date: October 13, 1965; and "The Space Trader", air date: March 9, 1966). Although the show retains a following, the science-fiction community often points to Lost in Space as an example of early television's perceived poor record at producing science fiction. The series' deliberate fantasy elements, a trademark of Irwin Allen productions, were perhaps overlooked as it drew comparisons to its supposed rival, Star Trek. Lost in Space was, however, a mild ratings success, unlike Star Trek, which received very poor ratings during its original network television run.
The more cerebral Star Trek never averaged higher than 52nd in the ratings during its three seasons, while Lost in Space finished its first season rated 32nd, its second in 35th place, and its third and final season in 33rd place. Lost in Space also ranked third among the top five favorite new shows of the 1965-1966 season in a viewer TVQ poll (the others were The Big Valley, Get Smart, I Dream of Jeannie and F Troop). Lost in Space was the favorite show of John F. Kennedy, Jr. while growing up in the 1960s. Star Trek creator Gene Roddenberry insisted that the two shows could not be compared, seeing himself as more of a philosopher while recognizing that Irwin Allen was a storyteller. When asked about Lost in Space, Roddenberry acknowledged: "That show accomplishes what it sets out to do. Star Trek is not the same thing." Lost in Space received Emmy Award nominations in 1966 for Cinematography - Special Photographic Effects and in 1968 for Achievement in Visual Arts & Makeup, winning neither. In 2005, it was nominated for a Saturn Award for Best DVD Retro Television Release, again without winning, and in 2008 TV Land gave the series its award for "Awesomest Robot". The opening and closing theme music was written by John Williams, the composer behind the Star Wars theme music, who was listed in the credits as "Johnny Williams". The original pilot and much of season one reused Bernard Herrmann's eerie score from the classic sci-fi film The Day the Earth Stood Still (1951). For season three, the opening theme was revised (again by Williams) to a faster, more exciting score, accompanied by live-action shots of the cast and featuring a pumped-up countdown from seven to one to launch each week's episode. The season 1 and 2 titles had animated figures "life-roped" together, drifting "hopelessly lost in space" and set to a dizzy, comical score that Bill Mumy once described in a magazine interview as "sounding like a circuit board". Much of the incidental music in the series was written by Williams, who scored his four TV episodes with a movie-soundtrack quality that helped him gain credibility, gave his later career a boost, and gave Lost in Space the distinctive musical style that is instantly recognizable to the show's fans. Other notable film and television composers also contributed, including Alexander Courage (composer of the Star Trek theme), who wrote six scores for the series; his most recognizable, for "Wild Adventure", included his theme for the Lorelei, composed for organ, woodwinds, and harp, which joined Williams's own "Chariot" music and the main theme among the series' signature pieces. The show's dramatic music served the serious episodes well, while in later episodes the same deadly serious cues, used to underscore comical and laughable fantasy threats, only highlighted the whimsy. A number of Lost in Space soundtrack CDs have been released. Despite never reaching the 100 episodes usually desired for daily stripping in syndication, Lost in Space was picked up for syndication in most major U.S. markets. By 1969, the show was declared the #1 syndicated program (or close to it) in markets such as Houston, Milwaukee, Miami and even New York City, where it was said that the only competition to Lost in Space was I Love Lucy. The program did not, however, have the staying power throughout the 1970s of its supposed rival, Star Trek.
Part of the blame was placed on the first season of Lost in Space being in black and white at a time when a majority of American households had color television receivers. By 1975, many markets had begun removing Lost in Space from daily schedules or moving it to less desirable time slots. The series experienced a revival when Ted Turner acquired it for his growing TBS "superstation" in 1979; viewer response was highly positive, and it became a TBS mainstay for the next five years. In 1998, New Line Cinema produced a film adaptation that includes a number of homages, cameos and story details related to the original TV series; additional cameo appearances by actors from the original series were considered but not included in the film. The film used a number of ideas familiar to viewers of the original show: Smith reprogramming the Robot and its subsequent rampage ("The Reluctant Stowaway"), a near miss with the sun ("Wild Adventure"), the derelict spaceship ("The Derelict"), the discovery of the Blawp and the crash ("Island in the Sky"), and an attempt to change history by returning to the beginning ("The Time Merchant"). A scene-stealing "goodnight" homage to The Waltons was also included, and something fans of the original had always wanted to see was finally realized when Don knocks out an annoyingly complaining Smith at the end of the movie, saying, "That felt good!" In late 2003, a new television series with a somewhat changed format was in development in the U.S. It was originally intended to be closer to the original pilot: there was no Smith, but there was a robot, an additional older Robinson child named David, and an infant Penny. The pilot, titled "The Robinsons: Lost in Space", was commissioned by The WB Television Network, directed by John Woo, and produced by Synthesis Entertainment, Irwin Allen Productions, Twentieth Century Fox Television and Regency Television. The interstellar flying-saucer Jupiter 2 of the original series was changed to a non-saucer planet-landing craft deployed from a larger interstellar mothership. In this adaptation, John Robinson was a retiring hero of a war against invading aliens who had decided to take his family to a colony elsewhere in space. The ship is attacked by the aliens, David is lost amid the chaos, and the Robinsons, along with Don, are forced to escape in the mothership's small Jupiter 2 "space pod". The series presumably would have revolved around the family trying to recover David from the aliens. It was not among the network's series pick-ups confirmed later that year. Looking back at the pilot when the 2018 Netflix reboot aired, Neil Calloway of Flickering Myth said, "you're hardly on the edge of your seat" and "You start to wonder where the $2 million went, and then you question why something directed by John Woo is so pedestrian." The producers of the new Battlestar Galactica bought the pilot's sets, which were redesigned the next year and used for scenes aboard the battlestar Pegasus. Dick Tufeld reprised his role as the voice of the Robot for the third time. On October 10, 2014, it was announced that Legendary TV was developing a new reboot of Lost in Space for Netflix, with Dracula Untold screenwriters Matt Sazama and Burk Sharpless attached to write. On June 29, 2016, Netflix ordered the series with 10 episodes. The series premiered on Netflix on April 13, 2018, and was renewed for a second season on May 13, 2018.
The Robot also appears in the series in a modified form. Before the television series appeared, a comic book named Space Family Robinson was published by Gold Key Comics, written by Gaylord Du Bois and illustrated by Dan Spiegle. (Du Bois did not create the series, but he became its sole writer once he began chronicling the Robinsons' adventures with "Peril on Planet Four" in issue #8, and he had already written the Captain Venture second feature beginning with "Situation Survival" in issue #6.) Due to a deal worked out with Gold Key, the title of the comic later incorporated the Lost in Space subtitle. The comic book featured different characters and a unique H-shaped spacecraft rather than a saucer. In 1991, Bill Mumy provided "Alpha Control Guidance" for a Lost in Space comic book revival from Innovation Comics, writing six of the issues. The first officially licensed comic to be based on the TV series (the previous Gold Key version was based on the 1962 concept), it was set several years after the show. The kids were now teenagers, with Will at 16 years old, and the stories attempted to return the series to its straight-adventure roots, one story even explaining the camp and farce episodes of the series as fanciful entries in Penny's space diary. Complex, adult-themed story concepts were introduced, including a love triangle developing among Penny, Judy and Don. Mark Goddard wrote a "Don" story, something he never had in the original series (with the possible exception of "The Flaming Planet", which might qualify as a "Don" episode). The Jupiter 2 had various interior designs in the first year, as artists were evidently not familiar with the original layout of the ship, though Michal Dutkiewicz got it spot-on. The first year had an arc ultimately leading the travelers to Alpha Centauri, with Smith contacting his former alien masters along the way. Aeolis 14 Umbra were furious with Smith for failing in his mission to prevent the Jupiter 2, built with technology from a crashed ship of their race, from reaching the star system they had claimed as their own. The year ended with Smith exposed for his traitorous associations and imprisoned in a freezing tube for the Jupiter's final journey to the promised planet. Year two was to be Mumy's own full-season story, a complex adventure following the Robinsons' arrival at their destination and their capture by the Aeolians. Innovation folded in 1993 with the story only halfway through, and it was not until 2005 that Mumy was able to present it to Lost in Space fandom as a complete graphic novel via Bubblehead Publishing. The theme of an adult Will Robinson was also explored in the film and in the song "Ballad of Will Robinson", written and recorded by Mumy. In 1998, Dark Horse Comics published a three-part story chronicling the Robinson clan as depicted in the film. In 2006, Bill Mumy and Peter David co-wrote Star Trek: The Return of the Worthy, a three-part story that was essentially a crossover between Lost in Space and Star Trek, with the Enterprise crew encountering a Robinson-like expedition among the stars, though with different characters. In 2016, American Gothic Press published a six-issue miniseries titled Irwin Allen's Lost in Space: The Lost Adventures, based on unfilmed scripts from the series. The scripts, "The Curious Galactics" and "Malice in Wonderland", were written by Carey Wilber.
The first script was adapted as issues 1-3 of the series, with the adaptation written by Holly Interlandi and drawn by Kostas Pantaulas, with Patrick McEvoy providing coloring and covers. The second script was adapted as issues 4-6, again adapted by Interlandi, with McEvoy providing pencil art, coloring and covers. In 1967, a novel based on the series, with significant changes to the personalities of the characters and the design of the ship, was written by Dave Van Arnam and Ted White (as "Ron Archer") and published by Pyramid Books. A scene in the book correctly predicts Richard Nixon winning the presidency after Lyndon Johnson. In the 1972-1973 television season, ABC produced The ABC Saturday Superstar Movie, a weekly collection of 60-minute animated movies, pilots and specials from various production companies such as Hanna-Barbera, Filmation, and Rankin/Bass. Hanna-Barbera Productions contributed animated work based on such television series as Gidget, Yogi Bear, Tabitha, Oliver Twist, Nanny and the Professor, The Banana Splits, and Lost in Space. Dr. Smith (voiced by Jonathan Harris) and the Robot (renamed Robon and employed in flight control rather than in a support role) were the only characters from the original program to appear in the special. The spacecraft was launched vertically by rocket, and Smith was a passenger rather than a saboteur. The pilot for an animated Lost in Space series was not picked up, and only this episode was produced; the cartoon was included in the Blu-ray release of September 15, 2015. 20th Century Fox has released the entire series on DVD in Region 1. Several of the releases contain bonus features, including interviews, episodic promos, video stills and the original unaired pilot episode. All episodes of Lost in Space were remastered and released in a Blu-ray disc set on September 15, 2015 (the 50th anniversary of the premiere on the CBS TV network). The Blu-ray set includes a cast table reading of a final episode, written by Bill Mumy, which brings the series to a close by having the characters return to Earth.
who is sheldon on big bang theory married to
Jim Parsons - Wikipedia James Joseph "Jim" Parsons (born March 24, 1973) is an American actor. He is known for playing Sheldon Cooper in the CBS sitcom The Big Bang Theory. He has received several awards for his performance, including four Primetime Emmy Awards for Outstanding Lead Actor in a Comedy Series and the Golden Globe Award for Best Actor in a Television Series Musical or Comedy. In 2011, Parsons made his Broadway debut portraying Tommy Boatwright in the play The Normal Heart, for which he received a Drama Desk Award nomination. He reprised the role in the film adaptation of the play and received his seventh Emmy nomination, this time in the category of Outstanding Supporting Actor in a Miniseries or Movie. In film, Parsons played a supporting role in the period drama Hidden Figures (2016). Parsons was born at St. Joseph Hospital in Houston, Texas, and was raised in Spring, one of Houston's northern suburbs. He is the son of Milton Joseph "Mickey/Jack" Parsons, Jr. and teacher Judy Ann (née McKnight); his sister Julie Ann Parsons is also a teacher. He attended Klein Oak High School in Spring. After playing the Kola-Kola bird in a school production of The Elephant's Child at age six, Parsons was determined to become an actor, and he points to a role in Noises Off during his junior year of high school as the first time "I fully connected with the role I was playing and started to truly understand what it meant to be honest on stage." The young Parsons was heavily influenced by sitcoms, particularly Three's Company, Family Ties, and The Cosby Show. After graduating from high school, Parsons received a bachelor's degree from the University of Houston. He was prolific during this time, appearing in 17 plays in three years; he was a founding member of Infernal Bridegroom Productions and regularly appeared at the Stages Repertory Theatre. Parsons enrolled in graduate school at the University of San Diego in 1999, one of seven students accepted into a special two-year course in classical theater taught in partnership with the Old Globe Theater. Program director Rick Seer recalled having reservations about admitting Parsons, saying, "Jim is a very specific personality. He's thoroughly original, which is one reason he's been so successful. But we worried, 'Does that adapt itself to classical theater, does that adapt itself to the kind of training that we're doing?' But we decided that he was so talented that we would give him a try and see how it worked out." Parsons enjoyed school and told an interviewer that he would have pursued a doctorate in acting if possible: "school was so safe!... you frequently would surprise yourself by what you were capable of, and you were not surprised by some things." Parsons graduated in 2001 and moved to New York. He traced his family's history on TLC's Who Do You Think You Are? in September 2013 and discovered French heritage on his father's side; one of his ancestors was the French architect Louis-François Trouard (1729-1804). In New York, Parsons worked in Off-Broadway productions and made several television appearances. In a much-discussed 2003 Quiznos commercial, Parsons played a man who had been raised by wolves and continued to nurse from his wolf "mother". He had a recurring role on the television show Judging Amy and appeared on the television series Ed. Parsons also had minor roles in several movies, including Garden State and School for Scoundrels.
Parsons has estimated that he auditioned for between 15 and 30 television pilots, but on many of the occasions when he was cast, the show failed to find a network willing to purchase it. The exception came with The Big Bang Theory. After reading the pilot script, Parsons felt that the role of Sheldon Cooper would be a very good fit for him. Although he did not feel any sort of kinship with the character, he was enchanted by the dialogue structure, the way the writers "brilliantly use those words that most of us don't recognize to create that rhythm. And the rhythm got me. It was the chance to dance through that dialogue, and in a lot of ways still is." In his audition, Parsons so impressed series creator Chuck Lorre that Lorre insisted on a second audition to see whether Parsons could replicate the performance. Parsons was cast as Sheldon Cooper, a socially apathetic physicist who frequently belittles his friends and the waitress who lives across the hall. The role requires Parsons to "rattle off line after line of tightly composed, rhythmic dialogue, as well as then do something with his face or body during the silence that follows." Parsons credits his University of San Diego training with giving him the tools to break down Sheldon's lines. Television critic Andrew Dansby compares Parsons's physical comedy to that of Buster Keaton and other silent film stars. Lorre praises Parsons's instincts, saying, "You can't teach that," and describes as "inspired" Parsons's "great sense of control over every part of his body, the way he walks, holds his hands, cocks his head, the facial tics". Reviewer Lewis Beale describes Parsons's performance as "so spot-on, it seems as if the character and the actor are the same person." Parsons admits that the work is "more effort than I ever thought a sitcom would take. And that's really the fun of it." In August 2009, Parsons won the Television Critics Association award for individual achievement in comedy, beating Alec Baldwin, Tina Fey, Steve Carell, and Neil Patrick Harris. Parsons was nominated for the Primetime Emmy Award for Outstanding Lead Actor in a Comedy Series in 2009, 2010, 2011, 2012, 2013 and 2014, winning in 2010, 2011, 2013 and 2014. In September 2010, Parsons and co-stars Johnny Galecki and Kaley Cuoco signed new contracts guaranteeing each of them $200,000 per episode for the fourth season of The Big Bang Theory, with substantial raises for each of the next three seasons; the three were also promised a percentage of the show's earnings. In January 2011, Parsons won the Golden Globe Award for Best Actor in a Television Series - Comedy (presented by co-star Cuoco). From August 2013, Parsons, Cuoco and Galecki each earned $325,000 per episode. In August 2014, the three once again signed new contracts, guaranteeing each of them $1,000,000 per episode for the eighth, ninth, and tenth seasons, as well as quadrupling their percentage of the show's earnings to over 1% each. In 2011, Parsons appeared with Jack Black, Owen Wilson, Steve Martin, and Rashida Jones in the comedy film The Big Year, released that October. That same year, he appeared as the human alter ego of Walter, the newest Muppet introduced in The Muppets. On May 18, 2012, Parsons began appearing on Broadway as Elwood P. Dowd in a revival of Harvey. Parsons received a star on the Hollywood Walk of Fame on March 11, 2015.
He voiced Oh, one of the lead roles in the DreamWorks Animation comedy film Home (2015), alongside Rihanna. On January 29, 2015, it was announced that Parsons would star as God in the Broadway production of An Act of God, a new play by David Javerbaum directed by Joe Mantello. The play began previews at Studio 54 on May 5, 2015, and closed on August 2, 2015, to positive reviews. In 2017, Parsons began hosting his own SiriusXM talk show, Jim Parsons Is Too Stupid for Politics, slated to run for six weeks. Parsons lives in the New York City neighborhood of Gramercy Park while also maintaining a residence in Los Angeles. His father died in a car crash on April 29, 2001. On May 23, 2012, an article in The New York Times noted that Parsons is gay and had been in a relationship for the previous ten years. His husband is art director Todd Spiewak. In October 2013, Parsons called their relationship "an act of love, coffee in the morning, going to work, washing the clothes, taking the dogs out -- a regular life, boring love". Parsons and Spiewak wed in New York in May 2017.
characters names from blaze and the monster machines
Blaze and the Monster Machines - Wikipedia Blaze and the Monster Machines is a CGI interactive educational animated television series with a focus on teaching STEM concepts. The series premiered on Nickelodeon on October 13, 2014, and the first season ran for 20 episodes. On June 15, 2015, the show was renewed for a third season, and on June 21, 2016, for a fourth. The show focuses on Blaze, an orange monster truck, and his young but smart driver, AJ. They live in a world of living monster trucks, including their truck friends Starla, Stripes, Zeg, and Darington. Another friend of theirs is a girl named Gabby, a mechanic who can fix anything. Each episode also features Crusher, a sneaky blue truck who cheats in races and is almost always accompanied by a green car named Pickle. Animals in this monster-machine world also have blended windows and wheels; AJ and Gabby, however, are not vehicles at all. In some episodes, Blaze, AJ and their friends are in a race against Crusher. During the race, Crusher cheats, usually with the help of his gray robots; however, Blaze and AJ manage to get through his traps and always beat him at the last second with the help of Blaze's blazing speed. Some episodes do not involve races but still have Blaze competing against Crusher, sometimes by racing him to reach an item, and some stories involve helping a truck friend such as Starla, Zeg, Darington, Stripes, or even Crusher. Blaze and the Monster Machines premiered on Nick Jr. in the United Kingdom and Ireland on March 6, 2015, and on Nick Jr. in Australia and New Zealand on March 9. The series also airs on Nickelodeon and Treehouse TV in Canada, Nick Jr. in Africa, TNT in Russia (as Вспыш и чудо-машинки), and HOP! in Israel (as בלייז וחברי משאית המפלצת שלו).
where is the reticular formation located in the brain
Reticular formation - Wikipedia The reticular formation is a set of interconnected nuclei located throughout the brainstem. It is not anatomically well defined, because it includes neurons located in diverse parts of the brain. The neurons of the reticular formation make up a complex set of networks in the core of the brainstem that stretch from the upper part of the midbrain to the lower part of the medulla oblongata. The reticular formation includes ascending pathways to the cortex in the ascending reticular activating system (ARAS) and descending pathways to the spinal cord via the reticulospinal tracts of the descending reticular formation. Neurons of the reticular formation, particularly those of the ascending reticular activating system, play a crucial role in maintaining behavioral arousal and consciousness. The functions of the reticular formation are modulatory and premotor: the modulatory functions are primarily found in its rostral sector, while the premotor functions are localized in neurons of the more caudal regions. The reticular formation is divided into three columns: the raphe nuclei (median), the gigantocellular reticular nuclei (medial zone), and the parvocellular reticular nuclei (lateral zone). The raphe nuclei are the site of synthesis of the neurotransmitter serotonin, which plays an important role in mood regulation; the gigantocellular nuclei are involved in motor coordination; and the parvocellular nuclei regulate exhalation. The reticular formation is essential for governing some of the basic functions of higher organisms and is one of the phylogenetically oldest portions of the brain. The human reticular formation is composed of almost 100 brain nuclei and contains many projections into the forebrain, brainstem, and cerebellum, among other regions. It includes the reticular nuclei, reticulothalamic projection fibers, diffuse thalamocortical projections, ascending cholinergic projections, descending non-cholinergic projections, and descending reticulospinal projections. The reticular formation also contains two major neural subsystems, the ascending reticular activating system and the descending reticulospinal tracts, which mediate distinct cognitive and physiological processes. It has been functionally cleaved both sagittally and coronally. Traditionally the reticular nuclei are divided into the three columns described above. The original functional differentiation was a division between caudal and rostral, based on the observation that lesioning the rostral reticular formation induces hypersomnia in the cat brain, whereas lesioning its more caudal portion produces insomnia in cats. This study led to the idea that the caudal portion inhibits the rostral portion of the reticular formation. Sagittal division reveals more morphological distinctions. The raphe nuclei form a ridge in the middle of the reticular formation, and directly at their periphery is a division called the medial reticular formation. The medial RF is large, has long ascending and descending fibers, and is surrounded by the lateral reticular formation. The lateral RF is close to the motor nuclei of the cranial nerves and mostly mediates their function. The medial and lateral reticular formations are two columns of neuronal nuclei with ill-defined boundaries that send projections through the medulla and into the mesencephalon (midbrain).
The nuclei can be differentiated by function, cell type, and projections of efferent or afferent nerves. Moving caudally from the rostral midbrain, at the level of the rostral pons and the midbrain, the medial RF becomes less prominent and the lateral RF becomes more prominent. The lateral RF, which flanks the medial reticular formation, is particularly pronounced in the rostral medulla and caudal pons; from this area spring the cranial nerves, including the very important vagus nerve. The lateral RF is known for its ganglia and areas of interneurons around the cranial nerves, which serve to mediate their characteristic reflexes and functions. The reticular formation consists of more than 100 small neural networks with varied functions. The ascending reticular activating system (ARAS), also known as the extrathalamic control modulatory system or simply the reticular activating system (RAS), is a set of connected nuclei in the brains of vertebrates that is responsible for regulating wakefulness and sleep-wake transitions. The ARAS is a part of the reticular formation and is mostly composed of various nuclei in the thalamus and a number of dopaminergic, noradrenergic, serotonergic, histaminergic, cholinergic, and glutamatergic brain nuclei. The ARAS comprises several neuronal circuits connecting the dorsal part of the posterior midbrain and anterior pons to the cerebral cortex via distinct pathways that project through the thalamus and hypothalamus. It is a collection of different nuclei, more than 20 on each side, in the upper brainstem, the pons, the medulla, and the posterior hypothalamus. The neurotransmitters these neurons release include dopamine, norepinephrine, serotonin, histamine, acetylcholine, and glutamate, and they exert cortical influence through direct axonal projections and indirect projections through thalamic relays. The thalamic pathway consists primarily of cholinergic neurons in the pontine tegmentum, whereas the hypothalamic pathway is composed primarily of neurons that release monoamine neurotransmitters, namely dopamine, norepinephrine, serotonin, and histamine. The glutamate-releasing neurons in the ARAS were identified much more recently than the monoaminergic and cholinergic nuclei; the glutamatergic component of the ARAS includes one glutamatergic nucleus in the hypothalamus and various glutamatergic brainstem nuclei. The orexin neurons of the lateral hypothalamus innervate every component of the ascending reticular activating system and coordinate activity within the entire system. The ARAS consists of evolutionarily ancient areas of the brain, which are crucial to survival and protected during adverse periods; as a result, it still functions during inhibitory periods of hypnosis. The ascending reticular activating system, which sends neuromodulatory projections to the cortex, connects mainly to the prefrontal cortex, with low connectivity to the motor areas of the cortex. The ARAS is an important enabling factor for the state of consciousness and is seen to contribute to wakefulness as characterised by cortical and behavioural arousal. Its main function is to modify and potentiate thalamic and cortical function such that electroencephalogram (EEG) desynchronization ensues.
There are distinct differences in the brain's electrical activity during periods of wakefulness and sleep: low-voltage fast burst brain waves (EEG desynchronization) are associated with wakefulness and with REM sleep (which are electrophysiologically similar), while high-voltage slow waves are found during non-REM sleep. Generally speaking, when thalamic relay neurons are in burst mode the EEG is synchronized, and when they are in tonic mode it is desynchronized. Stimulation of the ARAS produces EEG desynchronization by suppressing slow cortical waves (0.3-1 Hz), delta waves (1-4 Hz), and spindle wave oscillations (11-14 Hz) and by promoting gamma band (20-40 Hz) oscillations. The physiological change from a state of deep sleep to wakefulness is reversible and mediated by the ARAS. Inhibitory influence from the brain is active at sleep onset, likely coming from the preoptic area (POA) of the hypothalamus. During sleep, neurons in the ARAS have a much lower firing rate; conversely, they have a higher activity level during the waking state. Low-frequency inputs from the ARAS to the POA neurons (during sleep) therefore exert an excitatory influence on the POA, while higher activity levels (awake) exert an inhibitory influence. In order that the brain may sleep, there must be a reduction in ascending afferent activity reaching the cortex, achieved by suppression of the ARAS. The ARAS also helps mediate transitions from relaxed wakefulness to periods of high attention: there is increased regional blood flow (presumably indicating an increased measure of neuronal activity) in the midbrain reticular formation (MRF) and thalamic intralaminar nuclei during tasks requiring increased alertness and attention. Mass lesions in brainstem ARAS nuclei can cause severe alterations in level of consciousness (e.g., coma), and bilateral damage to the reticular formation of the midbrain may lead to coma or death. Direct electrical stimulation of the ARAS produces pain responses in cats and elicits verbal reports of pain in humans. Additionally, ascending reticular activation in cats can produce mydriasis, which can result from prolonged pain. These results suggest some relationship between ARAS circuits and physiological pain pathways. Given the importance of the ARAS for modulating cortical changes, disorders of the ARAS should result in alterations of sleep-wake cycles and disturbances in arousal. Some pathologies of the ARAS may be attributed to age, as there appears to be a general decline in its reactivity with advancing years. Changes in electrical coupling have been suggested to account for some changes in ARAS activity: if coupling were down-regulated, there would be a corresponding decrease in higher-frequency (gamma band) synchronization, whereas up-regulated electrical coupling would increase synchronization of fast rhythms, potentially leading to increased arousal and REM sleep drive. Disruption of the ARAS has been implicated in a number of specific disorders, and several potential factors may adversely influence its development. The reticulospinal tracts, also known as the descending or anterior reticulospinal tracts, are extrapyramidal motor tracts that descend from the reticular formation in two tracts to act on the motor neurons supplying the trunk and the proximal flexors and extensors of the limbs. The reticulospinal tracts are involved mainly in locomotion and postural control, although they have other functions as well.
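The EEG band boundaries cited in this section lend themselves to a simple lookup table. The following minimal Python sketch is purely illustrative and not drawn from any neuroscience library; the identifiers EEG_BANDS and classify_band are hypothetical, and the band names and cut-offs simply restate the figures given above.

# Illustrative sketch: classify an EEG frequency (in Hz) into the bands
# named above. Boundaries follow the text; identifiers are hypothetical.
EEG_BANDS = [
    ("slow cortical wave", 0.3, 1.0),  # suppressed by ARAS stimulation
    ("delta wave", 1.0, 4.0),          # suppressed by ARAS stimulation
    ("spindle wave", 11.0, 14.0),      # suppressed by ARAS stimulation
    ("gamma band", 20.0, 40.0),        # promoted by ARAS stimulation
]

def classify_band(freq_hz):
    # Return the band containing freq_hz, or None if the frequency falls
    # outside the (non-contiguous) ranges given in the text.
    for name, low, high in EEG_BANDS:
        if low <= freq_hz <= high:
            return name
    return None

# Example: 2 Hz lies in the delta range; 30 Hz lies in the gamma band.
assert classify_band(2.0) == "delta wave"
assert classify_band(30.0) == "gamma band"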
The descending reticulospinal tracts are one of four major descending pathways to the spinal cord for musculoskeletal activity, and they work with the other three pathways to give coordinated control of movement, including delicate manipulations. The four pathways can be grouped into two main systems: a medial system and a lateral system. The medial system, comprising the reticulospinal and vestibulospinal pathways, provides control of posture; the lateral system, comprising the corticospinal and rubrospinal pathways, provides fine control of movement. The reticulospinal tract is divided into two parts, the medial (or pontine) and lateral (or medullary) reticulospinal tracts (MRST and LRST). The ascending sensory tract conveying information in the opposite direction is known as the spinoreticular tract. The reticulospinal tracts are mostly inhibited by the corticospinal tract; if damage occurs at the level of or below the red nucleus (e.g., to the superior colliculus), the result is called decerebration, which causes decerebrate rigidity: an unopposed extension of the head and limbs. The reticulospinal tracts also provide a pathway by which the hypothalamus can control sympathetic thoracolumbar outflow and parasympathetic sacral outflow. The term "reticular formation" was coined in the late 19th century by Otto Deiters, coinciding with Ramon y Cajal's neuron doctrine. Allan Hobson states in his book The Reticular Formation Revisited that the name is an etymological vestige of the fallen era of aggregate field theory in the neural sciences. The term "reticulum" means "netlike structure", which is what the reticular formation resembles at first glance. The formation has been described as being either too complex to study or an undifferentiated part of the brain with no organization at all; Eric Kandel describes it as organized in a manner similar to the intermediate gray matter of the spinal cord. This chaotic, loose, and intricate form of organization has deterred many researchers from looking further into this particular area of the brain. Its cells lack clear ganglionic boundaries but do have clear functional organizations and distinct cell types. The term "reticular formation" is seldom used anymore except to speak in generalities; modern scientists usually refer to the individual nuclei that compose it. Moruzzi and Magoun first investigated the neural components regulating the brain's sleep-wake mechanisms in 1949. Physiologists had proposed that some structure deep within the brain controlled mental wakefulness and alertness; it had previously been thought that wakefulness depended only on the direct reception of afferent (sensory) stimuli at the cerebral cortex. Because direct electrical stimulation of the brain could simulate electrocortical relays, Magoun used this principle to demonstrate, in two separate areas of a cat's brainstem, how wakefulness could be produced from sleep: first through the ascending somatic and auditory paths, and second through a series of "ascending relays from the reticular formation of the lower brain stem through the midbrain tegmentum, subthalamus and hypothalamus to the internal capsule." The latter was of particular interest, as this series of relays did not correspond to any known anatomical pathway for wakefulness signal transduction and was coined the ascending reticular activating system (ARAS).
Next, the significance of this newly identified relay system was evaluated by placing lesions in the medial and lateral portions of the front of the midbrain. Cats with mesencephalic interruptions to the ARAS entered a deep sleep and displayed the corresponding brain waves. In contrast, cats with similarly placed interruptions to the ascending auditory and somatic pathways exhibited normal sleeping and wakefulness and could be awakened with somatic stimuli. Because these external stimuli would be blocked on their way to the cortex by the interruptions, this indicated that the ascending transmission must travel through the newly discovered ARAS. Finally, Magoun recorded potentials within the medial portion of the brain stem and discovered that auditory stimuli directly fired portions of the reticular activating system. Furthermore, single-shock stimulation of the sciatic nerve also activated the medial reticular formation, hypothalamus, and thalamus. Excitation of the ARAS did not depend on further signal propagation through the cerebellar circuits, as the same results were obtained following decerebellation and decortication. The researchers proposed that a column of cells surrounding the midbrain reticular formation receives input from all the ascending tracts of the brain stem and relays these afferents to the cortex, thereby regulating wakefulness.
it's a low down dirty shame cast
In Living Color - Wikipedia In Living Color is an American sketch comedy television series that originally ran on Fox from April 15, 1990, to May 19, 1994. Keenen Ivory Wayans created, wrote and starred in the program. The show was produced by Ivory Way Productions in association with 20th Century Fox Television and was taped at Stage 7 of the Fox Television Center on Sunset Boulevard in Hollywood, California. The title of the series was inspired by NBC's announcement of broadcasts being presented "in living color" during the 1960s, prior to mainstream color television; it also refers to the fact that most of the show's cast were black, unlike other sketch comedy shows such as Saturday Night Live, whose casts are mostly white. The show was controversial for the Wayans family's decision to portray African-American humor from the ghetto at a time when mainstream American tastes regarding black comedy had been set by more upscale shows such as The Cosby Show, leading to an eventual feud for control between Fox executives and the Wayanses. Other members of the Wayans family (Damon, Kim, Shawn, and Marlon) had regular roles, while brother Dwayne frequently appeared as an extra. The show also starred the rising stand-up comic and actor Jim Carrey alongside previously unknown actor-comedians Jamie Foxx, Tommy Davidson, David Alan Grier, and T'Keyah Crystal Keymáh. Additionally, Dancing with the Stars judge and choreographer Carrie Ann Inaba and actress and pop music star Jennifer Lopez were members of the show's dance troupe, the Fly Girls, with actress Rosie Perez serving as choreographer. The show launched the careers of Carrey, Foxx, Davidson, Grier, Keymáh, Inaba and Lopez, and is credited with bringing the Wayans family to a higher level of fame. It was immensely popular in its first two seasons, capturing more than a 10-point Nielsen rating; in the third and fourth seasons, ratings faltered as the Wayans brothers fell out with Fox network leadership over creative control and rights. The series won the Primetime Emmy Award for Outstanding Variety, Music or Comedy Series in 1990. It gained international prominence when it boldly aired a live special episode as counterprogramming to the halftime show of CBS's live telecast of Super Bowl XXVI; the stunt drew the show's all-time-high ratings and prompted the National Football League to book A-list acts for future game entertainment, starting with Michael Jackson the following year. In 2018, a history of the show, Homey Don't Play That! by David Peisner, was released by 37 INK, an imprint of Simon & Schuster. Following Keenen Ivory Wayans's success with Hollywood Shuffle and I'm Gonna Git You Sucka, Fox Broadcasting Company approached Wayans to offer him his own show. Wayans wanted to produce a variety show similar to Saturday Night Live, but with a cast of people of color that took chances with its content. Fox gave Wayans a great deal of freedom with the show, although Fox executives were somewhat concerned about its content prior to its television debut. In announcing the debut, Fox described In Living Color as a "contemporary comedy variety show". In its preview, the Christian Science Monitor warned that the show's "raw tone may offend some, but it does allow a talented troupe to experiment with black themes in a Saturday Night Live-ish format." Keenen Ivory Wayans said, "I wanted to do a show that reflects different points of view. We've added an Asian and a Hispanic minority to the show.
We're trying in some way to represent all the voices... Minority talent is not in the system and you have to go outside. We found Crystal doing her act in the lobby of a theater in Chicago. We went beyond the Comedy Stores and Improvs, which are not showcase places for minorities." The first episode aired on Sunday, April 15, 1990, following an episode of Married... with Children, and was watched by 22.7 million people, making it the 29th-ranked show of the week. The Miami Herald said the show was as "smart and saucy as it is self-aware" and "audacious and frequently tasteless, but terrific fun"; The Philadelphia Inquirer called it "the fastest, funniest half-hour in a long time"; The Seattle Times said it had "the free-wheeling, pointed sense of humor that connects with a large slice of today's audience"; and The Columbus Dispatch described it as a "marvelously inventive" show that has "catapulted television back to the cutting edge". The sketch comedy show helped launch the careers of comedian-actors Jim Carrey (then credited as "James Carrey"), one of only two white members of the original cast; Jamie Foxx, who joined the cast in the third season; and David Alan Grier, an established theatre actor who had worked in Keenen Ivory Wayans's 1988 motion picture I'm Gonna Git You Sucka. The series strove to produce comedy with a strong emphasis on modern black subject matter and became renowned for parody, especially of race relations in the United States. For instance, Carrey was frequently used to ridicule white musicians such as Snow and Vanilla Ice, who performed in genres more commonly associated with black people, while the Wayanses themselves often played exaggerated black ghetto stereotypes for humor and effect. A sketch parodying Soul Train mocked the show as "Old Train", suggesting the show (along with its host, Don Cornelius) was out of touch and appealed only to the elderly and the dead. When asked about the show's use of stereotypes of black culture for comedy, Wayans said, "Half of comedy is making fun of stereotypes. They only get critical when I do it. Woody Allen has been having fun with his culture for years, and no one says anything about it. Martin Scorsese, his films basically deal with the Italian community, and no one ever says anything to him. John Hughes, all of his films parody upscale white suburban life. Nobody says anything to him. When I do it, then all of a sudden it becomes a racial issue. You know what I mean? It's my culture, and I'm entitled to poke fun at the stereotypes that I didn't create in the first place. I don't even concern myself with that type of criticism, because it's racist in itself." For the first episode, an exotic-looking logo was used for the opening credits; however, after the band Living Colour claimed in a lawsuit that the show stole the band's logo and name, the logo was changed to one with plainer letters in three colors. The show's title itself is an homage to the NBC peacock tagline of the 1960s, "The following program is brought to you in living color", from the era when television was transitioning from black and white to color. In the first two seasons, the opening sequence was set in a room covered with painters' tarps: each cast member, wearing black and white, played with brightly colored paint in a different way (throwing paintballs at the camera by hand, spray-painting the lens, using a roller to cover the camera lens, and so on).
The sequence ended with a segue to a set built to resemble the rooftop of an apartment building, where the show's dancers performed a routine and opened a door to let Keenen Ivory Wayans greet a live audience. For the third and fourth seasons, an animated sequence and a different logo were used. Cast members were superimposed over pictures hanging in an art gallery and interacted with them in different ways (spinning the canvas to put it right side up, swinging the frame out as if it were a door, etc.). The final image was of the logo on a black canvas, which shattered to begin the show. The fifth season retained the logo but depicted the cast members on various signs and billboards around a city (either New York or Chicago), ending with the logo displayed on a theater marquee. The hip-hop group Heavy D & the Boyz performed two different versions of the opening theme. One version was used for the first two seasons and remixed for the fifth, while the other was featured in the third and fourth seasons. In Living Color was known for its live music performances, which started in season 2 with Queen Latifah as the first performer (she appeared again in the third season). Additional musical acts who appeared were Heavy D, Public Enemy, Kris Kross, En Vogue, Eazy-E, Monie Love, Onyx, 3rd Bass, MC Lyte, Arrested Development, Jodeci, Mary J. Blige, Tupac Shakur, Father MC, Gang Starr, Us3, and Leaders of the New School. The show employed an in-house dance troupe known as the "Fly Girls". The original lineup consisted of Carrie Ann Inaba (who became a choreographer and judge on Dancing with the Stars), Cari French, Deidre Lang, Lisa Marie Todd, Barbara Lumpkin and Michelle Whitney-Morrison. Rosie Perez was the choreographer for the first four seasons. The most notable former Fly Girl was future actress and singer Jennifer Lopez, who joined the show in its third season. The Fly Girls were sometimes used as extras in sketches, or as part of an opening gag. In one sketch, they were shown performing open-heart surgery (in the sketch, the girls are dancing in order to pay their way through medical school). Another routine featured the three original female cast members dancing off-beat during the introduction of the show, until it was revealed that the regular Fly Girls were all bound and gagged, breaking through the door where Keenen Ivory Wayans enters. Three of the Fly Girls also appeared in the eleventh episode of Muppets Tonight's second season in 1997. Keenen Ivory Wayans stopped appearing in sketches in 1992, after the end of the third season, over disputes with Fox about the network censoring the show's content and rerunning early episodes without his consultation. Wayans feared that Fox would ultimately decrease the syndication value of In Living Color. Damon went on to pursue a movie career, though he made occasional return appearances in the fourth season. During the fourth season (1992-1993), Keenen appeared only in the season opener, though he remained the executive producer and thus stayed in the opening credits until the tenth episode. Marlon left shortly after Keenen resigned as producer, and Shawn and Kim both left at the end of the fourth season. Fox censorship of scripts increased after In Living Color produced a live Super Bowl halftime special (branded by the network as The Doritos Zaptime / In Living Color Super Halftime Party).
During the "Men on Football '' sketch, Damon Wayans and David Alan Grier ad libbed a suggestion that Richard Gere and track and field star Carl Lewis were homosexuals, much to Lewis 's dismay. The programming stunt lured 20 to 25 million viewers from CBS ' telecast of the halftime festivities during Super Bowl XXVI on January 26, 1992. Also, the originally - aired version of another sketch unrelated to the Super Bowl special ("Men on Fitness '' -- February 7, 1993) included a simulation of Damon Wayans 's character Blaine enjoying receiving facial ejaculation while being sprayed with a water bottle. These two segments were initially cut from reruns, but have been airing on the Centric cable channel. The DVD releases have the Gere and Lewis references cut but retain the facial ejaculation simulation. Reruns of the program on BET have questionable words and phrases (such as "ho '' and "bitch '') muted. One line ("drop the soap '') during the second "Men on Film '' sketch was muted out by Fox censors before ever airing on TV for its implications of prison rape. The DVD releases have the language intact (except for the "drop the soap '' line), but have numerous sketches edited to remove song lyrics and music video parodies due to copyright and licensing issues (for example, the "Fire Marshall Bill Christmas '' sketch originally had Jim Carrey singing "Chestnuts Roasting on an Open Fire '' before the house exploded; on the DVD version, the short scene was cut, making it look like the house immediately exploded after the last person ran out). On the May 5, 1990 broadcast, Keenen Ivory Wayans did a parody of a Colt 45 commercial featuring Billy Dee Williams (in which the purpose of the beverage is to get one 's date drunk enough to have sex) that ended with a woman (played by Kim Coles) passed out on her back on a dining table, and "Billy Dee '' moving in on her unconscious body to have sex with her. The "Bolt 45 '' sketch was seen only once during the original broadcast and omitted from repeats due to complaints from censors and viewers that it was mocking date rape. The Season 1 DVD set of ILC did not include the cut sketch from the pilot. This sketch was cut by Fox censors, and the necessary modifications were made to the master tape. Keenen Ivory Wayans accidentally mixed up the master tape of the pilot, and the edited master was broadcast instead. The sketch has never been broadcast since, not even in syndication, on FX, or on BET, and is considered lost forever (although videos of the sketch from viewer tapings have been posted to YouTube). It has been replaced by "The Exxxon Family '' (a fake promo for a sitcom about a clumsy Exxon boat captain and his wife, played by Jim Carrey and Kelly Coffield) in syndication and DVD box sets. The FXX reruns of this show are mostly intact. The pop music references / music video parodies are left intact, along with the misogynist slurs that were edited on BET. 
However, the "Colt 45 '' sketch is still missing (though it is available for viewing on YouTube), as well as the "drop the soap '' line and the line from "Men on Football '' about Richard Gere 's and Carl Lewis 's sexualities, and a "Fire Marshall Bill '' sketch from season five ("Fire Marshall Bill at the Magic Show '') was edited on FXX (but not the DVD) to remove a line that implies that Fire Marshall Bill and an Arab man were involved with the 1993 bombing of the World Trade Center after the magician assures Bill that his magic tricks are safe (the lines cut were, "That 's what they said about the World Trade Center, son. But me and my friend Abdul and a couple of pounds of plastique explosives showed them different '', along with Bill 's laugh and catchphrase "Lemme show ya somethin '! ''). By the fifth and final season, none of the Wayans family had any involvement with the show. The show 's sketches were no longer character - driven and satirical, giving way to wilder, cruder content and a reliance on celebrity guest appearances. Some celebrities who have appeared include Nick Bakay (who played the host of The Dirty Dozens game show sketches and wrote for the show), Barry Bonds (in the "Ugly Wanda finds her baby 's father '' arc), James Brown (appeared in the season four sketch "The Groom Room Barber Shop ''), Rodney Dangerfield (though he appeared in a season four sketch where the police pull him over), Bret Hart, Sherman Hemsley, Biz Markie (who appeared in a lot of sketches, such as "Ugly Wanda 's Ugly Sister, '' "Carl ' The Tooth ' Williams ' Paternity Trial, '' "The Bad Guidance Counselor, '' and "The Dirty Dozens: The Tournament of Champions ''), Peter Marshall (as the host of "East Hollywood Squares ''), Ed O'Neill (as the Dirty Dozens champion whom Jamie Foxx 's T - Dog Jenkins fails to defeat), Chris Rock (mostly as his character, Cheap Pete, from I 'm Gonna Git You, Sucka), Macho Man Randy Savage, Tupac Shakur, and players from the NBA, (just like players from the NHL were on Saturday Night Live). Kelly Coffield, who was the lone white female cast member, prior to Alexandra Wentworth 's arrival in the fourth season, left before the start of season five. Jim Carrey, Tommy Davidson, David Alan Grier, T'Keyah Crystal Keymáh, and Deidre Lang from the Fly Girls are the only cast members who stayed on the show from its premiere to its finale, although Carrey 's appearances became very limited due to his rising movie career, while Davidson missed a few episodes for undisclosed reasons. Chris Rock appeared (as a "special guest star '') in a number of sketches in the fifth season, and reprised his "Cheap Pete '' character from I 'm Gonna Git You Sucka. In the early years of In Living Color, Rock was parodied as being the only African American cast member on Saturday Night Live (despite SNL also having Tim Meadows at the time). In an SNL episode honoring Mother 's Day, Rock 's mother states that she is disappointed in him for not trying out for In Living Color, to which Rock states he is happy with his job on SNL. Other recurring guest stars in the fifth season include Nick Bakay (for The Dirty Dozens sketches) and Peter Marshall (for several editions of East Hollywood Squares). Rapper Biz Markie also appeared in various roles as a guest star in the fifth season, such as being in drag as Wanda the Ugly Woman 's sister or as "Dirty Dozens '' contestant Damian "Foosball '' Franklin. Ed O'Neill made a cameo appearance as Al Bundy in a "Dirty Dozens '' segment. 
Originally produced by 20th Century Fox Television for Fox, the series ran in reruns on local affiliates for a few years, but has since become a longstanding mainstay on Fox's sister cable networks FX and FXX. In syndication, the series is distributed by Twentieth Television. Reruns of the show also aired on BET from 2005 to 2008, and returned in 2010. Reruns have also aired on MTV2, VH1, nuvoTV, and on BET-owned Centric. Unlike past runs on FX and the Viacom Media Networks, the FXX cuts of episodes are mostly uncut and uncensored. The music video parodies and spoken references to licensed songs have been reinstated, but the "Bolt 45" sketch, the "drop the soap" line, and the "Men on Football" sketch with the ad-libbed lines about Richard Gere's and Carl Lewis's alleged homosexuality are still edited (though the facial ejaculation shot in "Men on Fitness" was reinstated), along with the line from the season five sketch "Fire Marshall Bill at the Magic Show" that makes reference to the 1993 World Trade Center bombing (the missing line is, "That's what they said about the World Trade Center, son. But me and my friend Abdul and a couple of pounds of plastique explosives showed them different." Bill's laugh and his catchphrase "Lemme show ya somethin'" were also cut abruptly), due to the September 11, 2001 attacks. The Best of In Living Color aired on MyNetworkTV from April 16 to June 18, 2008. Hosted by David Alan Grier, it was a retrospective featuring classic sketches, along with cast interviews and behind-the-scenes footage. The show aired on Wednesdays at 8:30 pm Eastern / 7:30 pm Central, after MyNetworkTV's sitcom Under One Roof. In 2017, the show began airing reruns on Fusion and Aspire. 20th Century Fox Home Entertainment has released all five seasons of In Living Color on DVD in Region 1. Due to music licensing issues, some sketches have been edited to remove any and all mention of licensed songs, from characters waxing lyrical to entire performances (including the music video parodies and some of the Fly Girl dancing interstitials). Additionally, the "Bolt 45" sketch (which aired one time only, on May 5, 1990) was omitted, and the "soap" portion of the "drop the soap" line in the second "Men on Film" sketch has been muted. In 2011, there were plans for a revival of the original series featuring a new cast, characters, and sketches. The pilot episodes were hosted and executive produced by original series creator and cast member Keenen Ivory Wayans. In early 2012, Tabitha and Napoleon D'umo were hired as the choreographers. They cast the new lineup of The Fly Girls and shot pilot episodes for the show, which were set to air on Fox, like the original. However, on January 8, 2013, Keenen Ivory Wayans confirmed the reboot had been canceled because he and Fox did not feel that the show would be sustainable beyond one season. Reported cast members included Cooper Barnes, Jennifer Bartels, Sydney Castillo, Josh Duvendeck, Jermaine Fowler, Ayana Hampton, Kali Hawk, and Lil Rel Howery. In addition, featured cast members were Henry Cho, Melanie Minichino, and Chris Leidecker. Members of the new Fly Girls included Christina Chandler, Tera Perez, Lisa Rosenthal, Katee Shean, and Whitney Wiley. Many of the cast members of the revival (Bartels, Fowler, and Howery) went on to create the truTV sketch show Friends of the People. Singer Bruno Mars paid tribute to the television program in the music video for his 2018 single "Finesse".
where is the speedometer located in a car
Speedometer - Wikipedia A speedometer or speed meter is a gauge that measures and displays the instantaneous speed of a vehicle. Now universally fitted to motor vehicles, they started to become available as options in the 1900s and as standard equipment from about 1910 onward. Speedometers for other vehicles have specific names and use other means of sensing speed: for a boat, the equivalent is a pit log; for an aircraft, an airspeed indicator. Charles Babbage is credited with creating an early type of speedometer, usually fitted to locomotives. The electric speedometer was invented by the Croatian Josip Belušić in 1888 and was originally called a velocimeter. The mechanical speedometer, originally patented by Otto Schultze on October 7, 1902, uses a rotating flexible cable usually driven by gearing linked to the output of the vehicle's transmission. The early Volkswagen Beetle and many motorcycles, however, use a cable driven from a front wheel. When the vehicle is in motion, a speedometer gear assembly turns a speedometer cable, which then turns the speedometer mechanism itself. A small permanent magnet affixed to the speedometer cable interacts with a small aluminum cup (called a speedcup) attached to the shaft of the pointer on the analogue speedometer instrument. As the magnet rotates near the cup, the changing magnetic field produces eddy currents in the cup, which themselves produce another magnetic field. The effect is that the magnet exerts a torque on the cup, "dragging" it, and thus the speedometer pointer, in the direction of its rotation with no mechanical connection between them. The pointer shaft is held toward zero by a fine torsion spring. The torque on the cup increases with the speed of rotation of the magnet, so an increase in the speed of the car will twist the cup and speedometer pointer against the spring. The cup and pointer turn until the torque of the eddy currents on the cup is balanced by the opposing torque of the spring, and then stop. Since the torque on the cup is proportional to the car's speed, and the spring's deflection is proportional to the torque, the angle of the pointer is also proportional to the speed, so equally spaced markers on the dial can be used for equal gaps in speed. At a given speed the pointer remains motionless, pointing to the appropriate number on the speedometer's dial. The return spring is calibrated such that a given revolution speed of the cable corresponds to a specific speed indication on the speedometer. This calibration must take into account several factors, including the ratios of the tailshaft gears that drive the flexible cable, the final drive ratio in the differential, and the diameter of the driven tires. One key disadvantage of the eddy-current speedometer is that it cannot show the vehicle's speed when running in reverse gear, since the cup would turn in the opposite direction and the needle would be driven against its mechanical stop pin at the zero position.
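The linear torque balance described above is simple enough to check numerically. The sketch below is illustrative only; the article contains no code, and both constants are invented for the example rather than taken from any real instrument:

```python
# Toy model of the eddy-current speedometer described above. The drag
# torque on the speedcup grows linearly with cable (hence road) speed,
# while the torsion spring's restoring torque grows linearly with
# pointer deflection, so the equilibrium angle is proportional to speed.
# Both constants are hypothetical, chosen only to give a plausible dial.

K_DRAG = 0.0020    # N*m of drag torque per km/h of road speed (assumed)
K_SPRING = 0.0010  # N*m of spring torque per degree of deflection (assumed)

def pointer_angle_deg(speed_kmh: float) -> float:
    """Deflection at which spring torque balances eddy-current drag."""
    return (K_DRAG / K_SPRING) * speed_kmh

# Linearity is why the dial markings can be equally spaced:
for v in (0, 40, 80, 120):
    print(f"{v:3d} km/h -> {pointer_angle_deg(v):5.1f} degrees")
```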
Many modern speedometers are electronic. In designs derived from earlier eddy-current models, a rotation sensor mounted in the transmission delivers a series of electronic pulses whose frequency corresponds to the (average) rotational speed of the driveshaft, and therefore the vehicle's speed, assuming the wheels have full traction. The sensor is typically a set of one or more magnets mounted on the output shaft or (in transaxles) the differential crownwheel, or a toothed metal disk positioned between a magnet and a magnetic field sensor. As the part in question turns, the magnets or teeth pass beneath the sensor, each time producing a pulse as they affect the strength of the magnetic field being measured. Alternatively, particularly in vehicles with multiplex wiring, some manufacturers use the pulses coming from the ABS wheel sensors, which communicate with the instrument panel via the CAN bus. Most modern electronic speedometers also have the ability, which the eddy-current type lacks, to show the vehicle's speed when moving in reverse gear. A computer converts the pulses to a speed and displays this speed on an electronically controlled, analog-style needle or a digital display. Pulse information is also used for a variety of other purposes by the ECU or full-vehicle control system, e.g. triggering ABS or traction control, calculating average trip speed, or incrementing the odometer in place of it being turned directly by the speedometer cable. Another early form of electronic speedometer relies upon the interaction between a precision watch mechanism and a mechanical pulsator driven by the car's wheel or transmission. The watch mechanism endeavors to push the speedometer pointer toward zero, while the vehicle-driven pulsator tries to push it toward infinity. The position of the speedometer pointer reflects the relative magnitudes of the outputs of the two mechanisms. Typical bicycle speedometers measure the time between each wheel revolution and give a readout on a small, handlebar-mounted digital display. The sensor is mounted on the bike at a fixed location, pulsing when the spoke-mounted magnet passes by. In this way it is analogous to an electronic car speedometer using pulses from an ABS sensor, but with a much cruder time and distance resolution: typically one pulse and display update per revolution, or as seldom as once every 2-3 seconds at low speed with a 26-inch wheel (2.07 m circumference, without tire). However, this is rarely a critical problem, and the system provides frequent updates at higher road speeds, where the information is of more importance. The low pulse frequency also has little impact on measurement accuracy, as these digital devices can be programmed by wheel size, or additionally by wheel or tire circumference, in order to make distance measurements more accurate and precise than a typical motor vehicle gauge. However, these devices carry some minor disadvantages: they require battery power in the receiver (and in the sensor, for wireless models), with batteries that must be replaced every so often, and in wired models the signal is carried by a thin cable that is much less robust than that used for brakes, gears, or cabled speedometers. Other, usually older bicycle speedometers are cable-driven from one or other wheel, as in the motorcycle speedometers described above. These do not require battery power, but can be relatively bulky and heavy, and may be less accurate. The turning force at the wheel may be provided either by a gearing system at the hub (making use of the presence of e.g. a hub brake, cylinder gear or dynamo), as on a typical motorcycle, or by a friction wheel device that pushes against the outer edge of the rim (the same position as rim brakes, but on the opposite edge of the fork) or the sidewall of the tyre itself. The former type are quite reliable and low-maintenance but need a gauge and hub gearing properly matched to the rim and tyre size, whereas the latter require little or no calibration for a moderately accurate readout (with standard tyres, the "distance" covered in each wheel rotation by a friction wheel set against the rim should scale fairly linearly with wheel size, almost as if it were rolling along the ground itself), but are unsuitable for off-road use and must be kept properly tensioned and clean of road dirt to avoid slipping or jamming.
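To make the pulse-counting idea concrete, here is a minimal sketch converting the interval between sensor pulses into a speed. It assumes one pulse per wheel revolution and the 2.07 m circumference of the 26-inch wheel mentioned above; the function and constants are illustrative assumptions, not any particular device's firmware:

```python
# Sketch of the pulse-to-speed conversion used by electronic car
# speedometers and bicycle computers alike. One pulse per revolution
# models the bicycle case; car sensors typically pulse far more often.

WHEEL_CIRCUMFERENCE_M = 2.07   # 26-inch wheel, per the text (assumed exact)
PULSES_PER_REVOLUTION = 1      # bicycle: a single spoke-mounted magnet

def speed_kmh(pulse_interval_s: float) -> float:
    """Speed from the measured time between consecutive sensor pulses."""
    distance_per_pulse_m = WHEEL_CIRCUMFERENCE_M / PULSES_PER_REVOLUTION
    return distance_per_pulse_m / pulse_interval_s * 3.6  # m/s -> km/h

# At walking pace the update rate is poor: one pulse every 2.5 s ...
print(f"{speed_kmh(2.5):.1f} km/h")   # ~3 km/h
# ... while at road speed pulses arrive frequently: one every 0.25 s
print(f"{speed_kmh(0.25):.1f} km/h")  # ~29.8 km/h
```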
Most speedometers have tolerances of some ±10%, mainly due to variations in tire diameter. Sources of error due to tire diameter variations are wear, temperature, pressure, vehicle load, and nominal tire size. Vehicle manufacturers usually calibrate speedometers to read high by an amount equal to the average error, to ensure that their speedometers never indicate a lower speed than the actual speed of the vehicle, so that they are not liable for drivers violating speed limits. Excessive speedometer error after manufacture can come from several causes, but most commonly it is due to a nonstandard tire diameter, in which case the error is

Percentage error = 100 × (1 − new diameter / standard diameter)

Nearly all tires now have their size shown on the sidewall in the form "T/A R W" (see: Tire code), where T is the section width in millimetres, A the aspect ratio in percent and W the wheel diameter in inches, and the tire's overall diameter is

Diameter in millimetres = 2 × T × A / 100 + W × 25.4
Diameter in inches = T × A / 1270 + W

For example, a standard tire is "185/70R14", with diameter = 2 × 185 × (70/100) + (14 × 25.4) = 614.6 mm (185 × 70 / 1270 + 14 = 24.20 in). Another is "195/50R15", with 2 × 195 × (50/100) + (15 × 25.4) = 576.0 mm (195 × 50 / 1270 + 15 = 22.68 in). Replacing the first tire (and wheels) with the second (on 15 in = 381 mm wheels) makes the speedometer read 100 × (1 − 576/614.6) = 100 × (1 − 22.68/24.20) = 6.28% higher than the actual speed. At an actual speed of 100 km/h (60 mph), the speedometer will indicate approximately 100 × 1.0628 = 106.28 km/h (60 × 1.0628 = 63.77 mph). In the case of wear, a new "185/70R14" tyre of 620 mm (24.4 in) diameter will have about 8 mm of tread depth; at the legal limit this reduces to 1.6 mm, the difference being 12.8 mm in diameter (0.5 in), which is about 2% of 620 mm (24.4 in).
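The tire-code arithmetic above translates directly into a few lines of code. This sketch reproduces the worked 185/70R14 to 195/50R15 example; the function names are invented for illustration:

```python
# Tire-size calculator based on the formulas above. A tire code
# "T/A R W" gives section width T (mm), aspect ratio A (%) and
# wheel diameter W (inches).

def tire_diameter_mm(t: float, a: float, w: float) -> float:
    """Overall diameter: two sidewalls of height T * A/100 plus the wheel."""
    return 2 * t * a / 100 + w * 25.4

def speedo_error_pct(new_dia_mm: float, std_dia_mm: float) -> float:
    """Percentage by which the speedometer over-reads after a tire swap."""
    return 100 * (1 - new_dia_mm / std_dia_mm)

std = tire_diameter_mm(185, 70, 14)   # 614.6 mm
new = tire_diameter_mm(195, 50, 15)   # 576.0 mm
err = speedo_error_pct(new, std)      # ~6.28 %

print(f"standard: {std:.1f} mm, replacement: {new:.1f} mm, error: {err:.2f}%")
print(f"indicated speed at a true 100 km/h: {100 * (1 + err / 100):.2f} km/h")
```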
In many countries the legislated error in speedometer readings is ultimately governed by United Nations Economic Commission for Europe (UNECE) Regulation 39, which covers those aspects of vehicle type approval that relate to speedometers. The main purpose of the UNECE regulations is to facilitate trade in motor vehicles by agreeing uniform type-approval standards, rather than requiring a vehicle model to undergo different approval processes in each country where it is sold. European Union member states must also grant type approval to vehicles meeting similar EU standards. The EU standards covering speedometers are similar to the UNECE regulation: both specify limits on accuracy and many of the details of how accuracy should be measured during the approvals process, for example that the test measurements should be made (for most vehicles) at 40, 80 and 120 km/h, and at a particular ambient temperature and road surface. There are slight differences between the standards, for example in the minimum accuracy of the equipment measuring the true speed of the vehicle. The UNECE regulation relaxes the requirements for vehicles mass-produced following type approval. At Conformity of Production audits, the upper limit on indicated speed is increased to 110 percent plus 6 km/h for cars, buses, trucks and similar vehicles, and 110 percent plus 8 km/h for two- or three-wheeled vehicles that have a maximum speed above 50 km/h (or a cylinder capacity, if powered by a heat engine, of more than 50 cm³). European Union Directive 2000/7/EC, which relates to two- and three-wheeled vehicles, provides similar, slightly relaxed limits in production. There were no Australian Design Rules in place for speedometers in Australia prior to July 1988; they had to be introduced when speed cameras were first used. This means there are no legally accurate speedometers for these older vehicles. All vehicles manufactured on or after 1 July 2007, and all models of vehicle introduced on or after 1 July 2006, must conform to UNECE Regulation 39. The speedometers in vehicles manufactured before these dates but after 1 July 1995 (or 1 January 1995 for forward-control passenger vehicles and off-road passenger vehicles) must conform to the previous Australian Design Rule. This specifies that they need only display the speed to an accuracy of ±10% at speeds above 40 km/h, and there is no specified accuracy at all for speeds below 40 km/h. All vehicles manufactured in Australia or imported for supply to the Australian market must comply with the Australian Design Rules. The state and territory governments may set policies for the tolerance of speed over posted speed limits that are lower than the 10% permitted in the earlier versions of the Australian Design Rules, as in Victoria. This has caused some controversy, since it would be possible for a driver to be unaware of speeding should the vehicle be fitted with an under-reading speedometer. In the United Kingdom, the amended Road Vehicles (Construction and Use) Regulations 1986 permit the use of speedometers that meet either the requirements of EC Council Directive 75/443 (as amended by Directive 97/39) or UNECE Regulation 39. The Motor Vehicles (Approval) Regulations 2001 permit single vehicles to be approved. As with the UNECE regulation and the EC directives, the speedometer must never show an indicated speed less than the actual speed. However, it differs slightly from them in specifying that for all actual speeds between 25 mph and 70 mph (or the vehicle's maximum speed, if lower), the indicated speed must not exceed 110% of the actual speed plus 6.25 mph. For example, if the vehicle is actually travelling at 50 mph, the speedometer must not show more than 61.25 mph or less than 50 mph. Federal standards in the United States allow a maximum 5 mph error at a speed of 50 mph in speedometer readings for commercial vehicles. Aftermarket modifications, such as different tire and wheel sizes or different differential gearing, can cause speedometer inaccuracy.
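The UK rule quoted above (never below the true speed, and at most 110% of the true speed plus 6.25 mph between 25 and 70 mph) can be expressed as a small compliance check. A minimal sketch, reproducing the worked 50 mph example; the function names are illustrative:

```python
# Compliance check for the UK speedometer accuracy rule described above,
# valid for true speeds between 25 mph and 70 mph.

def uk_limits(actual_mph: float) -> tuple[float, float]:
    """(lowest, highest) indicated speed permitted at a given true speed."""
    return actual_mph, 1.10 * actual_mph + 6.25

def compliant(indicated_mph: float, actual_mph: float) -> bool:
    lo, hi = uk_limits(actual_mph)
    return lo <= indicated_mph <= hi

lo, hi = uk_limits(50.0)
print(f"true 50.0 mph -> indicated must lie in [{lo:.2f}, {hi:.2f}] mph")
print(compliant(48.0, 50.0))  # False: under-reading is never allowed
print(compliant(55.0, 50.0))  # True: reading high within the margin is fine
```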
On September 1, 1979, the NHTSA required speedometers to have special emphasis on 55 mph and to display no more than a maximum speed of 85 mph. On March 25, 1982, the NHTSA revoked the rule because no "significant safety benefits" could come from maintaining the standard. GPS devices are positional speedometers, based on how far the receiver has moved since the last measurement. The speed calculation is not subject to the same sources of error as the vehicle's speedometer (wheel size, transmission/drive ratios). Instead, the GPS's positional accuracy, and therefore the accuracy of its calculated speed, is dependent on the satellite signal quality at the time. Speed calculations will be more accurate at higher speeds, when the ratio of positional error to positional change is lower. The GPS software may also use a moving-average calculation to reduce error. Some GPS devices do not take into account the vertical position of the car, so they will under-report the speed by an amount related to the road's gradient. As mentioned in the satnav article, GPS data has been used to overturn a speeding ticket; the GPS logs showed the defendant traveling below the speed limit when they were ticketed. That the data came from a GPS device was likely less important than the fact that it was logged; logs from the vehicle's speedometer could likely have been used instead, had they existed.
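To illustrate the positional speed calculation, the sketch below derives speed from two successive fixes using the haversine great-circle formula, one common way of computing distance between coordinates (real receivers may use other methods, including Doppler measurements); the fixes and the one-second interval are hypothetical:

```python
# Positional speed, as described above: distance between successive
# GPS fixes divided by the time between them.

from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, a standard approximation

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon fixes."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def gps_speed_kmh(fix_a, fix_b, dt_s):
    """Average speed between two fixes taken dt_s seconds apart.

    Note: this horizontal-only calculation ignores altitude change,
    so it under-reports on a gradient, as the text above notes.
    """
    return haversine_m(*fix_a, *fix_b) / dt_s * 3.6

# Two hypothetical fixes one second apart, ~27.8 m of northward travel:
a = (51.50000, -0.12000)
b = (51.50025, -0.12000)
print(f"{gps_speed_kmh(a, b, 1.0):.1f} km/h")  # ~100 km/h
```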
who plays maya on bold and the beautiful
Karla Cheatham Mosley - wikipedia Karla Cheatham Mosley (born August 27, 1981) is an American actress and singer. She starred on the Emmy - nominated children 's show Hi - 5; she has starred in numerous plays and also had minor roles in several other TV shows and films. She regularly appeared as Christina Moore Boudreau in the soap opera Guiding Light and can currently be seen as Maya Avant in The Bold and the Beautiful. Karla Cheatham Mosley was born in Susquehanna Depot and raised in Westchester, New York. She graduated from New York University 's Tisch School of the Arts, with honors, and went on to study in France at the Roy Hart Vocal Institute. At a young age, Mosley performed in Sugar Beats, a children 's rock group, where she sang such hits as "Rockin ' Robin ''. In 2003, while still in college, Mosley joined the American counterpart of the Australian children 's TV show Hi - 5, where she became the youngest member of the group. Her segment was called "Body Move '', where she led viewers in stretching, exercising, and silly dances. She was usually the transitional member of the group, performing her skits in between other cast members ' acts. She also provided the voice of Chatterbox on the show. She was the American counterpart of Charli Robinson. In 2006, Mosley left the cast of Hi - 5 in order to pursue other interests. Mosley played the role of Christina Moore Boudreau on the now defunct soap opera Guiding Light. Christina is a pre-med student who married Remy Boudreau (played by Lawrence Saint - Victor). In 2013, Mosley joined the cast of The Bold and the Beautiful as ex-con turned actress Maya Avant. Karla completed a website series titled Room 8 The Series with her co-star from The Bold and the Beautiful, Lawrence Saint - Victor. The web series was part of the B&B storyline. Karla 's character has recently been revealed to be a transgender woman. In 2015 Karla 's character became the first transgender bride to be married on daytime television when she married Rick Forrester (played by Jacob Young). Mosley also appears in many local New York City venues (including for a time as the featured performer for the Gray Line Show Business Insider Tour), sometimes as singer, sometimes actor. In 2007, Mosley starred in the musical production of Dreamgirls at the TUTS Theater in Houston, TX. Mosley appeared in the children 's off - Broadway production, Max and Ruby in 2007 -- 2008. Additionally, she had a part in the Coen Brothers film, Burn After Reading, which opened in September 2008. Mosley will also be seen in the indy film, Red Hook. In 2008, Mosley starred with Lenelle Moise in "Expatriate '', a gritty off - Broadway show at the Culture Project. Her performance earned rave reviews from the New York Times, Time Out New York and Variety, which wrote: "Mosley 's voice is a serious discovery, with remarkable phrasing and range. In a torchy number called ' The Makings, ' about how Alphine 's life has given her the makings of a jazz legend, her pure high notes descend to earthy growls in a flash, and you 've got to believe her when she sings, ' Anything I wails / hits ears like honey. ' '' In 2008 she also nabbed parts in Law & Order and Gossip Girl and appeared in Museum Pieces at the West End Theater in New York City. In 2013, Mosley appeared in several episodes of Hart of Dixie. Mosley also appeared in commercials sponsoring Abreva and Riders by Lee jeans. On November 18, 2014, Karla appeared as a model on The Price Is Right. 
She appeared in the 6th episode of Battle Creek on April 5, 2015, titled "Cereal Killer ''. Karla Mosley sits on the board of Covenant House, the largest privately funded agency in the Americas providing shelter and other services to homeless, runaway and throwaway youth. She is a celebrity ambassador for the National Eating Disorders Association and is active in other charities. After Hurricane Katrina, Mosley organized a benefit concert by Hi - 5 to raise money for victims of Hurricane Katrina. She co-produced and performed in Broadway for Obama, a benefit concert during the 2008 presidential campaign. She also performed in Broadway for a New America 's concert for marriage equality and in Western Queens for Marriage Equality 's rally for equality. Mosley has one sister and two cats. She is married to Jeremiah Frei - Pearson.
throw the baby out with the bathwater example
Don't throw the baby out with the bathwater - wikipedia Don't throw the baby out with the bathwater is an idiomatic expression for an avoidable error in which something good is eliminated when trying to get rid of something bad; in other words, rejecting the favorable along with the unfavorable. A slightly different explanation suggests that this flexible catchphrase has to do with discarding the essential while retaining the superfluous because of excessive zeal. In other words, the idiom is applicable not only when throwing out the baby with the bathwater, but also when someone might throw out the baby and keep the bathwater. This idiom derives from a German proverb, das Kind mit dem Bade ausschütten. The earliest record of the phrase is in 1512, in Narrenbeschwörung (Appeal to Fools) by Thomas Murner; the book includes a woodcut illustration showing a woman tossing a baby out with waste water. It is a common catchphrase in German, with examples of its use in the work of Martin Luther, Johannes Kepler, Johann Wolfgang von Goethe, Otto von Bismarck, Thomas Mann, and Günter Grass. Thomas Carlyle adapted the concept in an 1849 essay on slavery: "And if true, it is important for us, in reference to this Negro Question and some others. The Germans say, 'You must empty out the bathing-tub, but not the baby along with it.' Fling out your dirty water with all zeal, and set it careening down the kennels; but try if you can keep the little child!" Carlyle is urging his readers to join in the struggle to end slavery, but he also encourages them to be mindful of the need to avoid harming the slaves themselves in the process. Some claim the phrase originates from a time when the whole household shared the same bath water: the head of the household (the lord) would bathe first, followed by the men, then the lady and the women, then the children, and lastly the baby. The water would be so black from dirt that a baby could be accidentally "tossed out with the bathwater". Others state there is no historical evidence of any connection with the practice of several family members using the same bath water, the baby being bathed last. The meaning and intent of the English idiomatic expression is sometimes presented in different terms.
what is the percentage of zinc and copper in brass
Brass - wikipedia Brass is a metallic alloy made of copper and zinc. The proportions of zinc and copper can be varied to create different types of brass alloy with varying mechanical and electrical properties. It is a substitutional alloy: atoms of the two constituents may replace each other within the same crystal structure. In contrast, bronze is an alloy of copper and tin. Both bronze and brass may include small proportions of a range of other elements, including arsenic, lead, phosphorus, aluminium, manganese, and silicon. The distinction is largely historical, and modern practice in museums and archaeology increasingly avoids both terms for historical objects in favour of the all-embracing "copper alloy". Brass is used for decoration, for its bright gold-like appearance; for applications where low friction is required, such as locks, gears, bearings, doorknobs, ammunition casings and valves; for plumbing and electrical applications; and extensively in brass musical instruments such as horns and bells, where a combination of high workability (historically with hand tools) and durability is desired. It is also used in zippers. Brass is often used in situations in which it is important that sparks not be struck, such as in fittings and tools used near flammable or explosive materials. Brass has higher malleability than bronze or zinc. The relatively low melting point of brass (900 to 940 °C, 1,650 to 1,720 °F, depending on composition) and its flow characteristics make it a relatively easy material to cast. By varying the proportions of copper and zinc, the properties of the brass can be changed, allowing hard and soft brasses. The density of brass is 8.4 to 8.73 grams per cubic centimetre (0.303 to 0.315 lb/cu in). Today, almost 90% of all brass alloys are recycled. Because brass is not ferromagnetic, it can be separated from ferrous scrap by passing the scrap near a powerful magnet. Brass scrap is collected and transported to the foundry, where it is melted and recast into billets. Billets are heated and extruded into the desired form and size. The general softness of brass means that it can often be machined without the use of cutting fluid, though there are exceptions to this. Aluminium makes brass stronger and more corrosion-resistant; it also causes a highly beneficial hard layer of aluminium oxide (Al2O3) to form on the surface that is thin, transparent and self-healing. Tin has a similar effect and finds its use especially in seawater applications (naval brasses). Combinations of iron, aluminium, silicon and manganese make brass resistant to wear and tear. To enhance the machinability of brass, lead is often added in concentrations of around 2%. Since lead has a lower melting point than the other constituents of the brass, it tends to migrate towards the grain boundaries in the form of globules as it cools from casting. The pattern the globules form on the surface of the brass increases the available lead surface area, which in turn affects the degree of leaching. In addition, cutting operations can smear the lead globules over the surface. These effects can lead to significant lead leaching from brasses of comparatively low lead content. Silicon is an alternative to lead; however, when silicon is used in a brass alloy, the scrap must never be mixed with leaded brass scrap because of contamination and safety problems. In October 1999 the California State Attorney General sued 13 manufacturers and distributors of brass keys over their lead content.
In laboratory tests, state researchers found the average brass key, new or old, exceeded the California Proposition 65 limits by an average factor of 19, assuming handling twice a day. In April 2001 manufacturers agreed to reduce lead content to 1.5 %, or face a requirement to warn consumers about lead content. Keys plated with other metals are not affected by the settlement, and may continue to use brass alloys with higher percentage of lead content. Also in California, lead - free materials must be used for "each component that comes into contact with the wetted surface of pipes and pipe fittings, plumbing fittings and fixtures. '' On January 1, 2010, the maximum amount of lead in "lead - free brass '' in California was reduced from 4 % to 0.25 % lead. The common practice of using pipes for electrical grounding is discouraged, as it accelerates lead corrosion. The so - called dezincification resistant (DZR or DR) brasses, sometimes referred to as CR (corrosion resistant) brasses, are used where there is a large corrosion risk and where normal brasses do not meet the standards. Applications with high water temperatures, chlorides present, or deviating water qualities (soft water) play a role. DZR - brass is excellent in water boiler systems. This brass alloy must be produced with great care, with special attention placed on a balanced composition and proper production temperatures and parameters to avoid long - term failures. The high malleability and workability, relatively good resistance to corrosion, and traditionally attributed acoustic properties of brass, have made it the usual metal of choice for construction of musical instruments whose acoustic resonators consist of long, relatively narrow tubing, often folded or coiled for compactness; silver and its alloys, and even gold, have been used for the same reasons, but brass is the most economical choice. Collectively known as brass instruments, these include the trombone, tuba, trumpet, cornet, baritone horn, euphonium, tenor horn, and French horn, and many other "horns '', many in variously - sized families, such as the saxhorns. Other wind instruments may be constructed of brass or other metals, and indeed most modern student - model flutes and piccolos are made of some variety of brass, usually a cupronickel alloy similar to nickel silver / German silver. Clarinets, especially low clarinets such as the contrabass and subcontrabass, are sometimes made of metal because of limited supplies of the dense, fine - grained tropical hardwoods traditionally preferred for smaller woodwinds. For the same reason, some low clarinets, bassoons and contrabassoons feature a hybrid construction, with long, straight sections of wood, and curved joints, neck, and / or bell of metal. The use of metal also avoids the risks of exposing wooden instruments to changes in temperature or humidity, which can cause sudden cracking. Even though the saxophones and sarrusaphones are classified as woodwind instruments, they are normally made of brass for similar reasons, and because their wide, conical bores and thin - walled bodies are more easily and efficiently made by forming sheet metal than by machining wood. The keywork of most modern woodwinds, including wooden - bodied instruments, is also usually made of an alloy such as Nickel Silver / German Silver. Such alloys are stiffer and more durable than the brass used to construct the instrument bodies, but still workable with simple hand tools -- a boon to quick repairs. 
The mouthpieces of both brass instruments and, less commonly, woodwind instruments are often made of brass among other metals as well. Next to the brass instruments, the most notable use of brass in music is in various percussion instruments, most notably cymbals, gongs, and orchestral (tubular) bells (large "church '' bells are normally made of bronze). Small handbells and "jingle bells '' are also commonly made of brass. The harmonica is a free reed aerophone, also often made from brass. In organ pipes of the reed family, brass strips (called tongues) are used as the reeds, which beat against the shallot (or beat "through '' the shallot in the case of a "free '' reed). Although not part of the brass section, snare drums are also sometimes made of brass. Some parts on electric guitars are also made from brass, especially inertia blocks on tremolo systems for its tonal properties, and for string nuts and saddles for both tonal properties and its low friction. The bactericidal properties of brass have been observed for centuries, particularly in marine environments where it prevents biofouling. Depending upon the type and concentration of pathogens and the medium they are in, brass kills these microorganisms within a few minutes to hours of contact. A large number of independent studies confirm this antimicrobial effect, even against antibiotic - resistant bacteria such as MRSA and VRSA. The mechanisms of antimicrobial action by copper and its alloys, including brass, are a subject of intense and ongoing investigation. Brass is susceptible to stress corrosion cracking, especially from ammonia or substances containing or releasing ammonia. The problem is sometimes known as season cracking after it was first discovered in brass cartridges used for rifle ammunition during the 1920s in the British Indian Army. The problem was caused by high residual stresses from cold forming of the cases during manufacture, together with chemical attack from traces of ammonia in the atmosphere. The cartridges were stored in stables and the ammonia concentration rose during the hot summer months, thus initiating brittle cracks. The problem was resolved by annealing the cases, and storing the cartridges elsewhere. Although forms of brass have been in use since prehistory, its true nature as a copper - zinc alloy was not understood until the post medieval period because the zinc vapor which reacted with copper to make brass was not recognised as a metal. The King James Bible makes many references to "brass ''. The Shakespearean English form of the word ' brass ' can mean any bronze alloy, or copper, rather than the strict modern definition of brass. The earliest brasses may have been natural alloys made by smelting zinc - rich copper ores. By the Roman period brass was being deliberately produced from metallic copper and zinc minerals using the cementation process, and variations on this method continued until the mid-19th century. It was eventually replaced by speltering, the direct alloying of copper and zinc metal which was introduced to Europe in the 16th century. In West Asia and the Eastern Mediterranean early copper zinc alloys are now known in small numbers from a number of third millennium BC sites in the Aegean, Iraq, the United Arab Emirates, Kalmykia, Turkmenistan and Georgia and from 2nd Millennium BC sites in West India, Uzbekistan, Iran, Syria, Iraq and Canaan. However, isolated examples of copper - zinc alloys are known in China from as early as the 5th Millennium BC. 
The compositions of these early "brass '' objects are highly variable and most have zinc contents of between 5 % and 15 % wt which is lower than in brass produced by cementation. These may be "natural alloys '' manufactured by smelting zinc rich copper ores in redox conditions. Many have similar tin contents to contemporary bronze artefacts and it is possible that some copper - zinc alloys were accidental and perhaps not even distinguished from copper. However the large number of copper - zinc alloys now known suggests that at least some were deliberately manufactured and many have zinc contents of more than 12 % wt which would have resulted in a distinctive golden color. By the 8th -- 7th century BC Assyrian cuneiform tablets mention the exploitation of the "copper of the mountains '' and this may refer to "natural '' brass. "Oreikhalkon '' (mountain copper), the Ancient Greek translation of this term, was later adapted to the Latin aurichalcum meaning "golden copper '' which became the standard term for brass. In the 4th century BC Plato knew orichalkos as rare and nearly as valuable as gold and Pliny describes how aurichalcum had come from Cypriot ore deposits which had been exhausted by the 1st century AD. X-ray fluorescence analysis of 39 orichalcum ingots recovered from a 2,600 - year - old shipwreck off Sicily found them to be an alloy made with 75 -- 80 percent copper, 15 -- 20 percent zinc and small percentages of nickel, lead and iron. During the later part of first millennium BC the use of brass spread across a wide geographical area from Britain and Spain in the west to Iran, and India in the east. This seems to have been encouraged by exports and influence from the Middle East and eastern Mediterranean where deliberate production of brass from metallic copper and zinc ores had been introduced. The 4th century BC writer Theopompus, quoted by Strabo, describes how heating earth from Andeira in Turkey produced "droplets of false silver '', probably metallic zinc, which could be used to turn copper into oreichalkos. In the 1st century BC the Greek Dioscorides seems to have recognised a link between zinc minerals and brass describing how Cadmia (zinc oxide) was found on the walls of furnaces used to heat either zinc ore or copper and explaining that it can then be used to make brass. By the first century BC brass was available in sufficient supply to use as coinage in Phrygia and Bithynia, and after the Augustan currency reform of 23 BC it was also used to make Roman dupondii and sestertii. The uniform use of brass for coinage and military equipment across the Roman world may indicate a degree of state involvement in the industry, and brass even seems to have been deliberately boycotted by Jewish communities in Palestine because of its association with Roman authority. Brass was produced by the cementation process where copper and zinc ore are heated together until zinc vapor is produced which reacts with the copper. There is good archaeological evidence for this process and crucibles used to produce brass by cementation have been found on Roman period sites including Xanten and Nidda in Germany, Lyon in France and at a number of sites in Britain. They vary in size from tiny acorn sized to large amphorae like vessels but all have elevated levels of zinc on the interior and are lidded. They show no signs of slag or metal prills suggesting that zinc minerals were heated to produce zinc vapor which reacted with metallic copper in a solid state reaction. 
The fabric of these crucibles is porous, probably designed to prevent a buildup of pressure, and many have small holes in the lids which may be designed to release pressure or to add additional zinc minerals near the end of the process. Dioscorides mentioned that zinc minerals were used for both the working and finishing of brass, perhaps suggesting secondary additions. Brass made during the early Roman period seems to have varied between 20 % to 28 % wt zinc. The high content of zinc in coinage and brass objects declined after the first century AD and it has been suggested that this reflects zinc loss during recycling and thus an interruption in the production of new brass. However it is now thought this was probably a deliberate change in composition and overall the use of brass increases over this period making up around 40 % of all copper alloys used in the Roman world by the 4th century AD. Little is known about the production of brass during the centuries immediately after the collapse of the Roman Empire. Disruption in the trade of tin for bronze from Western Europe may have contributed to the increasing popularity of brass in the east and by the 6th -- 7th centuries AD over 90 % of copper alloy artefacts from Egypt were made of brass. However other alloys such as low tin bronze were also used and they vary depending on local cultural attitudes, the purpose of the metal and access to zinc, especially between the Islamic and Byzantine world. Conversely the use of true brass seems to have declined in Western Europe during this period in favour of gunmetals and other mixed alloys but by about 1000 brass artefacts are found in Scandinavian graves in Scotland, brass was being used in the manufacture of coins in Northumbria and there is archaeological and historical evidence for the production of calamine brass in Germany and The Low Countries, areas rich in calamine ore. These places would remain important centres of brass making throughout the medieval period, especially Dinant. Brass objects are still collectively known as dinanterie in French. The baptismal font at St Bartholomew 's Church, Liège in modern Belgium (before 1117) is an outstanding masterpiece of Romanesque brass casting, though also often described as bronze. The metal of the early 12th - century Gloucester Candlestick is unusual even by medieval standards in being a mixture of copper, zinc, tin, lead, nickel, iron, antimony and arsenic with an unusually large amount of silver, ranging from 22.5 % in the base to 5.76 % in the pan below the candle. The proportions of this mixture may suggest that the candlestick was made from a hoard of old coins, probably Late Roman. Latten is a term for decorative borders and similar objects cut from sheet metal, whether of brass or bronze. Aquamaniles were typically made in brass in both the European and Islamic worlds. The cementation process continued to be used but literary sources from both Europe and the Islamic world seem to describe variants of a higher temperature liquid process which took place in open - topped crucibles. Islamic cementation seems to have used zinc oxide known as tutiya or tutty rather than zinc ores for brass - making, resulting in a metal with lower iron impurities. A number of Islamic writers and the 13th century Italian Marco Polo describe how this was obtained by sublimation from zinc ores and condensed onto clay or iron bars, archaeological examples of which have been identified at Kush in Iran. 
It could then be used for brass making or medicinal purposes. In 10th century Yemen al - Hamdani described how spreading al - iglimiya, probably zinc oxide, onto the surface of molten copper produced tutiya vapor which then reacted with the metal. The 13th century Iranian writer al - Kashani describes a more complex process whereby tutiya was mixed with raisins and gently roasted before being added to the surface of the molten metal. A temporary lid was added at this point presumably to minimise the escape of zinc vapor. In Europe a similar liquid process in open - topped crucibles took place which was probably less efficient than the Roman process and the use of the term tutty by Albertus Magnus in the 13th century suggests influence from Islamic technology. The 12th century German monk Theophilus described how preheated crucibles were one sixth filled with powdered calamine and charcoal then topped up with copper and charcoal before being melted, stirred then filled again. The final product was cast, then again melted with calamine. It has been suggested that this second melting may have taken place at a lower temperature to allow more zinc to be absorbed. Albertus Magnus noted that the "power '' of both calamine and tutty could evaporate and described how the addition of powdered glass could create a film to bind it to the metal. German brass making crucibles are known from Dortmund dating to the 10th century AD and from Soest and Schwerte in Westphalia dating to around the 13th century confirm Theophilus ' account, as they are open - topped, although ceramic discs from Soest may have served as loose lids which may have been used to reduce zinc evaporation, and have slag on the interior resulting from a liquid process. Some of the most famous objects in African art are the lost wax castings of West Africa, mostly from what is now Nigeria, produced first by the Kingdom of Ife and then the Benin Empire. Though normally described as "bronzes '', the Benin Bronzes, now mostly in the British Museum and other Western collections, and the large portrait heads such as the Bronze Head from Ife of "heavily leaded zinc - brass '' and the Bronze Head of Queen Idia, both also British Museum, are better described as brass, though of variable compositions. Work in brass or bronze continued to be important in Benin art and other West African traditions such as Akan goldweights, where the metal was regarded as a more valuable material than in Europe. The Renaissance saw important changes to both the theory and practice of brassmaking in Europe. By the 15th century there is evidence for the renewed use of lidded cementation crucibles at Zwickau in Germany. These large crucibles were capable of producing c. 20 kg of brass. There are traces of slag and pieces of metal on the interior. Their irregular composition suggesting that this was a lower temperature not entirely liquid process. The crucible lids had small holes which were blocked with clay plugs near the end of the process presumably to maximise zinc absorption in the final stages. Triangular crucibles were then used to melt the brass for casting. 16th - century technical writers such as Biringuccio, Ercker and Agricola described a variety of cementation brass making techniques and came closer to understanding the true nature of the process noting that copper became heavier as it changed to brass and that it became more golden as additional calamine was added. 
Zinc metal was also becoming more commonplace. By 1513 metallic zinc ingots from India and China were arriving in London, and pellets of zinc condensed in furnace flues at the Rammelsberg in Germany were exploited for cementation brass making from around 1550. Eventually it was discovered that metallic zinc could be alloyed with copper to make brass, a process known as speltering, and by 1657 the German chemist Johann Glauber had recognised that calamine was "nothing else but unmeltable zinc" and that zinc was a "half ripe metal". However, some earlier high-zinc, low-iron brasses, such as the 1530 Wightman brass memorial plaque from England, may have been made by alloying copper with zinc, and include traces of cadmium similar to those found in some zinc ingots from China. The cementation process was not abandoned, however, and as late as the early 19th century there are descriptions of solid-state cementation in a domed furnace at around 900-950 °C and lasting up to 10 hours. The European brass industry continued to flourish into the post-medieval period, buoyed by innovations such as the 16th-century introduction of water-powered hammers for the production of battery wares. By 1559 the German city of Aachen alone was capable of producing 300,000 cwt of brass per year. After several false starts during the 16th and 17th centuries, the brass industry was also established in England, taking advantage of abundant supplies of cheap copper smelted in the new coal-fired reverberatory furnace. In 1723 Bristol brass maker Nehemiah Champion patented the use of granulated copper, produced by pouring molten metal into cold water. This increased the surface area of the copper, helping it react, and zinc contents of up to 33% wt were reported using this new technique. In 1738 Nehemiah's son William Champion patented a technique for the first industrial-scale distillation of metallic zinc, known as distillation per descensum or "the English process". This local zinc was used in speltering and allowed greater control over the zinc content of brass and the production of high-zinc copper alloys, which would have been difficult or impossible to produce using cementation, for use in expensive objects such as scientific instruments, clocks, brass buttons and costume jewellery. However, Champion continued to use the cheaper calamine cementation method to produce lower-zinc brass, and the archaeological remains of beehive-shaped cementation furnaces have been identified at his works at Warmley. By the mid-to-late 18th century, developments in cheaper zinc distillation, such as John-Jaques Dony's horizontal furnaces in Belgium, and the reduction of tariffs on zinc, as well as demand for corrosion-resistant high-zinc alloys, increased the popularity of speltering, and as a result cementation was largely abandoned by the mid-19th century.
what is the nickname for the australian cricket team
Australian national sports team nicknames - Wikipedia In Australia, the national representative team of many sports has a nickname, used informally when referring to the team in the media or in conversation. These nicknames are typically derived from well - known symbols of Australia. Often the nickname is combined with that of a commercial sponsor, such as the "Qantas Wallabies '' or the "Telstra Dolphins ''. Some names are a portmanteau word with second element - roo, from kangaroo; such as "Olyroos '' for the Olympic association football team. The oldest nicknames are Kangaroos and Wallabies for the rugby league football and rugby union teams. The other names are more recent, mostly invented to help publicise sports not traditionally popular in Australia. Some journalists have criticised the practice as embarrassing, gimmicky, or PR - driven. The name "Wallabies '' was chosen by the 1908 rugby union side, which was making its first tour of the Northern Hemisphere. British newspapers had already nicknamed the 1905 New Zealand touring team the "All Blacks '' from their kit colour; the 1906 South African tourists had adopted "Springboks ''. "Rabbits '' was first suggested for Australia, but rejected since rabbits there are notorious as pests. Until the 1980s, only touring sides were "Wallabies ''; players on the eight tours up to 1984 were "the First Wallabies '' up to "the Eighth Wallabies ''. The rugby league tour side arrived in Britain later in 1908 with a live kangaroo as mascot and were nicknamed "Kangaroos ''. "Kangaroos '' originally referred only to teams on "Kangaroo Tours '' to Britain and France. In 1994 the Australian Rugby League extended the nickname to all internationals for sponsorship reasons, drawing criticism for the break with tradition. The first such game was a 58 -- 0 win over France at Parramatta Stadium on 6 July 1994. Among the longer - established sports, the test cricket and Davis Cup tennis teams have no common nickname. Harry Beitzel 's 1967 Australian Football World Tour team was unofficially nicknamed the Galahs from their flashy uniform. Though this side was a precursor of subsequent Australian international rules football teams, the nickname has not been retained. Australian Tennis magazine invited readers to suggest a nickname for the Davis Cup team in 1996. The Australia Fed Cup team has been called the Cockatoos, first suggested by player Casey Dellacqua in a press conference at the April 2012 match against Germany. The name has been embraced by teammates and used on the website of governing body Tennis Australia. As part of a 1998 strategic business plan, Cricket Australia surveyed "stakeholders '' about a possible nickname, to enhance marketing opportunities. State cricket teams in the Sheffield Shield had benefited from adopting nicknames in the 1990s. 69 % opposed a national nickname, partly from a sense of decorum and partly because the best names were already taken by other teams. Athletics Australia held a competition for a nickname for its squad for the 2001 World Athletics Championships. The winning entry was "the Diggers '', from the nickname for ANZAC soldiers. This was quickly abandoned after criticism from the Returned and Services League of Australia and others that this was an inappropriate use of the term. The team previously had a little - used nickname, "the Blazers ''.
In December 2004, the Australian Soccer Association renamed itself Football Federation Australia (FFA) and announced an effort to rebrand association football as "football '' rather than "soccer '' in Australia. The national team had been nicknamed "the Socceroos '' by journalist Tony Horstead on a 1967 tour to South Vietnam. FFA chairman Frank Lowy commented "It has been commonly used and is a much loved name but we may see it fade out as evolution takes place '', and suggested few national football teams had nicknames. By 2016, the FFA announcement of Caltex as sponsors was titled "Caltex Australia with the Socceroos all the way ''.
how did the ottoman empire aid the central powers
Ottoman -- German alliance - wikipedia The Ottoman -- German Alliance was an alliance between the German Empire and the Ottoman Empire that was ratified on August 2, 1914, shortly following the outbreak of World War I. The alliance was created as part of a joint - cooperative effort that would strengthen and modernize the failing Ottoman military, as well as provide Germany safe passage into neighboring British colonies. On the eve of the First World War, the Ottoman Empire was in ruinous shape. As a result of successive wars fought in this period, territories were lost, the economy was in shambles and people were demoralized and tired. What the Empire needed was time to recover and to carry out reforms; however, there was no time, because the world was sliding into war and the Ottoman Empire was highly unlikely to manage to remain outside the coming conflict. Since staying neutral and focusing on recovery did not appear to be possible, the Empire had to ally with one or the other camp, because, after the Italo - Turkish War and Balkan Wars, it was completely out of resources. There were no adequate quantities of weaponry and machinery left, nor did the Empire have the financial means to purchase new ones. The only option for the Sublime Porte was to establish an alliance with a European power, and at first it did not really matter which one that would be. As Talat Paşa, the Minister of Interior, wrote in his memoirs: "Turkey needed to join one of the country groups so that it could organize its domestic administration, strengthen and maintain its commerce and industry, expand its railroads, in short to survive and to preserve its existence. '' Most European powers were not interested in joining an alliance with the ailing Ottoman Empire. Already at the beginning of the Turco - Italian War in Northern Africa, the Grand Vizier Sait Halim Paşa had expressed the government 's desire, and the Turkish ambassadors were asked to find out whether the European capitals would be interested. Only Russia seemed to have an interest -- however, under conditions that would have amounted to a Russian protectorate over the Ottoman lands. An alliance with France was impossible to reconcile, as France 's main ally was Russia, the Ottoman Empire 's long - time enemy since the Russo - Turkish War of 1828. Great Britain declined an Ottoman request. The Ottoman Sultan Mehmed V specifically wanted the Empire to remain a non-belligerent nation. However, pressure from some of Mehmed 's senior advisors led the Empire to align with the Central Powers. Whilst Great Britain was unenthusiastic about aligning with the Ottoman Empire, Germany was enthusiastic: it needed the Ottoman Empire on its side. The Orient Express had run directly to Constantinople since 1889, and prior to the First World War the Sultan had consented to a plan to extend it through Anatolia to Baghdad under German auspices. This would strengthen the Ottoman Empire 's link with industrialized Europe, while also giving Germany easier access to its African colonies and to trade markets in India. To keep the Ottoman Empire from joining the Triple Entente, Germany encouraged Romania and Bulgaria to enter the Central Powers. A secret treaty was concluded between the Ottoman Empire and the German Empire on August 2, 1914. The Ottoman Empire was to enter the war on the side of the Central Powers one day after the German Empire declared war on Russia.
The alliance was ratified on 2 August by many high - ranking Ottoman officials, including Grand Vizier Said Halim Pasha, the Minister of War Enver Pasha, the Interior Minister Talat Pasha, and Head of Parliament Halil Bey. However, there was no signature from the House of Osman, as the Sultan Mehmed V did not sign it. According to the Constitution, the Sultan was the Commander - in - Chief of the Army, and this made the legitimacy of the Alliance questionable; it meant that the army was not able to fight on behalf of the Sultan. The Sultan himself had wanted the Empire to remain neutral. He did not wish to command a war himself and as such left the Cabinet to do much of his bidding. The third member of the cabinet of the Three Pashas, Djemal Pasha, also did not sign the treaty, as he had tried to form an alliance with France. Not all parts of the Ottoman government accepted the Alliance. Austria - Hungary adhered to the Ottoman -- German treaty on 5 August. The Ottoman Empire did not enter the war until the Ottoman Navy bombarded Russian ports on 29 October 1914.
where does the prince and the pauper take place
The Prince and the Pauper - wikipedia The Prince and the Pauper is a novel by American author Mark Twain. It was first published in 1881 in Canada, before its 1882 publication in the United States. The novel represents Twain 's first attempt at historical fiction. Set in 1547, it tells the story of two young boys who are identical in appearance: Tom Canty, a pauper who lives with his abusive father in Offal Court off Pudding Lane in London, and Prince Edward, son of King Henry VIII. Tom Canty, youngest son of a poor family living in Offal Court in London, has always aspired to a better life, encouraged by the local priest, who has taught him to read and write. Loitering around the palace gates one day, he meets Edward Tudor, the Prince of Wales. Coming too close in his intense excitement, Tom is nearly caught and beaten by the Royal Guards. However, Edward stops them and invites Tom into his palace chamber. There, the two boys get to know one another. Fascinated by each other 's lives and by their uncanny resemblance, and learning they were even born on the same day, they decide to switch places "temporarily ''. The Prince momentarily goes outside, quickly hiding an article of national importance, which the reader later learns is the Great Seal of England, but dressed as Tom, he is not recognized by the guards, who drive him from the palace, and he eventually finds his way through the streets to the Canty home. There, he is subjected to the brutality of Tom 's alcoholic and abusive father, from whom he manages to escape, and meets one Miles Hendon, a soldier and nobleman returning from war. Although Miles does not believe Edward 's claims to royalty, he humors him and becomes his protector. Meanwhile, news reaches them that King Henry VIII has died and Edward is now the King. Tom, dressed as Edward, tries to cope with court customs and manners. His fellow nobles and palace staff think the prince has an illness which has caused memory loss, and fear he will go mad. They repeatedly ask him about the missing Great Seal of England, but he knows nothing about it. However, when Tom is asked to sit in on judgments, his common - sense observations reassure them his mind is sound. As Edward experiences the brutal life of a London pauper firsthand, he becomes aware of the stark class inequality in England. In particular, he sees the harsh, punitive nature of the English judicial system, where people are burned at the stake, pilloried, and flogged. He realizes that the accused are convicted on flimsy evidence and branded or hanged for petty offenses, and vows to reign with mercy when he regains his rightful place. When Edward unwisely declares to a gang of thieves that he is the King and will put an end to unjust laws, they assume he is insane and hold a mock coronation. After a series of adventures, including a stint in prison, Edward interrupts the coronation as Tom is about to be crowned as King. The nobles are shocked at their resemblance, and refuse to believe that Edward is the rightful King wearing Tom 's clothes until he produces the Great Seal of England that he hid before leaving the palace. Edward and Tom switch back to their original places and Edward is crowned King Edward VI of England. Miles is rewarded with the rank of Earl and the family right to sit in the presence of the King. In gratitude for supporting the new King 's claim to the throne, Edward names Tom the "King 's Ward '', a privileged position he holds for the rest of his life.
The ending explains that though Edward died at the age of 15, he reigned mercifully due to his experiences. The introductory quote -- "The quality of mercy is... twice blest; / It blesseth him that gives and him that takes: / ' Tis mightiest in the mightiest: it becomes / The throned monarch better than his crown '' -- is part of "The Quality of Mercy '' speech from Shakespeare 's The Merchant of Venice. While written for children, The Prince and the Pauper is both a critique of social inequality and a criticism of judging others by their appearance. Twain wrote of the book, "My idea is to afford a realizing sense of the exceeding severity of the laws of that day by inflicting some of their penalties upon the King himself and allowing him a chance to see the rest of them applied to others... '' Having returned from a second European tour -- which formed the basis of A Tramp Abroad (1880) -- Twain read extensively about English and French history. Initially intended as a play, the book was originally set in Victorian England before Twain decided to set it further back in time. He wrote The Prince and the Pauper having already started Adventures of Huckleberry Finn. The "whipping - boy story '', originally meant as a chapter of The Prince and the Pauper, was published in the Hartford Bazar Budget of July 4, 1880, before Twain deleted it from the novel at the suggestion of William Dean Howells. Ultimately, The Prince and the Pauper was published by subscription by James R. Osgood of Boston, with illustrations by F.T. Merrill. The book bears a dedication to Twain 's daughters, Susie and Clara Clemens, and is subtitled "A Tale For Young People of All Ages ''. The Prince and the Pauper was adapted for the stage during Twain 's lifetime, an adaptation which involved Twain in litigation with the playwright. In November 1920, a stage adaptation by Amélie Rives opened on Broadway under the direction of William Faversham, with Faversham as Miles Hendon and Ruth Findlay playing both Tom Canty and Prince Edward. An Off - Broadway musical with music by Neil Berg opened at Lamb 's Theatre on June 16, 2002. The original cast included Dennis Michael Hall as Prince Edward, Gerard Canonico as Tom Canty, Rob Evan as Miles Hendon, Stephen Zinnato as Hugh Hendon, Rita Harvey as Lady Edith, Michael McCormick as John Canty, Robert Anthony Jones as the Hermit / Dresser, Sally Wilfert as Mary Canty, Allison Fischer as Lady Jane and Aloysius Gigl as Father Andrew. The musical closed August 31, 2003. English playwright Jemma Kennedy adapted the story into a musical drama which was performed at the Unicorn Theatre in London in 2012 -- 2013, directed by Selina Cartmell and starring twins Danielle Bird and Nichole Bird as the Prince and Pauper and Jake Harders as Miles Hendon. In 1946, the story was adapted into comics form by Arnold L. Hicks in Classics Illustrated ("Classic Comics '') # 29, published by Gilberton. In 1962, Dell Comics published Walt Disney 's The Prince and the Pauper, illustrated by Dan Spiegle, based on the three - part television adaptation produced by Walt Disney 's Wonderful World of Color. In 1990, Disney Comics published Disney 's The Prince and the Pauper, by Scott Saavedra and Sergio Asteriti, based on the animated featurette starring Mickey Mouse. The novel has also been the basis of several films. In some versions, Prince Edward carries identification when he assumes Tom 's role.
While animations such as the Mickey Mouse version retell the story, other cartoons employ parody (including an episode of the animated television show Johnny Bravo in which Twain appears, begging cartoonists to "let this tired story die ''). Film critic Roger Ebert suggested that the 1983 comedy film Trading Places (starring Dan Aykroyd and Eddie Murphy) has similarities to Twain 's tale due to the two characters ' switching lives (although not by choice). A much - abridged 1920 silent version was produced (as one of his first films) by Alexander Korda in Austria, entitled Der Prinz und der Bettelknabe. The 1937 version starred Errol Flynn (as Hendon) and twins Billy and Bobby Mauch as Tom Canty and Edward Tudor, respectively. A Hindi film version, Raja Aur Runk, was released in 1968 and directed by Kotayya Pratyagatma. The film "Indianized '' many of the episodes in the original story. A 1977 film version of the story, starring Oliver Reed as Miles Hendon, co-starring Rex Harrison (as the Duke of Norfolk), Mark Lester and Raquel Welch and directed by Richard Fleischer, was released in the UK as The Prince and the Pauper and in the US as Crossed Swords. Walt Disney Feature Animation made a 1990 animated 24 - minute short film, inspired by the novel and starring Mickey Mouse. In this version, Mickey trades places with himself and is supported by other Disney characters. It Takes Two, starring twins Mary - Kate and Ashley Olsen, is a loose adaptation of this story, in which two girls (one wealthy and the other an orphan, who resemble each other) switch places in order to experience each other 's lives. The 1996 Bollywood film Tere Mere Sapne is loosely based upon this story, in which two boys born on exactly the same date switch places to experience the other 's life, whilst learning valuable lessons along the way. A 2000 film directed by Giles Foster starred Aidan Quinn (as Miles Hendon), Alan Bates, Jonathan Hyde, and identical twins Jonathan and Robert Timmins. In 2004, it was adapted into an 85 - minute CGI - animated musical, Barbie as the Princess and the Pauper, with Barbie playing the blonde Princess Anneliese and the brunette pauper Erika. In 2012, a second CGI musical adaptation was released, entitled Barbie: The Princess and the Popstar. In it, Barbie plays a blonde princess named Victoria (Tori) and a brunette popstar named Keira; each craves the life of the other, and one day they meet and magically change places. In 2006, Garfield 's second live - action film, Garfield: A Tail of Two Kitties, was another adaptation of the classic story. A 2007 film, A Modern Twain Story: The Prince and the Pauper, starred identical twins Dylan and Cole Sprouse. Raju Peda (1954) is a Telugu - language film adaptation of the novel starring N.T. Rama Rao and directed by B.A. Subba Rao. In 1957, CBS ' DuPont Show of the Month offered an adaptation of The Prince and the Pauper, with Johnny Washbrook of My Friend Flicka as Tom Canty and Rex Thompson as Prince Edward. A 1962 three - part Walt Disney 's Wonderful World of Color television adaptation featured Guy Williams as Miles Hendon. Both Prince Edward and Tom Canty were played by Sean Scully, using the split - screen technique which the Disney studios had used in The Parent Trap (1961) with Hayley Mills. The 21st episode of The Monkees, aired on February 6, 1967, was entitled "The Prince and the Paupers ''.
In a 1976 ABC Afterschool Special, Lance Kerwin played the dual role in a modern American - based adaptation of the story entitled P.J. and the President 's Son. The BBC produced a television adaptation by writer Richard Harris, consisting of six thirty - minute episodes, in 1976. Nicholas Lyndhurst played both Prince Edward and Tom Canty. Ringo, a 1978 TV special starring Ringo Starr, involves the former Beatles drummer trading places with a talentless look - alike. The BBC TV comedy series Blackadder the Third has an episode, "Duel and Duality '', where the Prince Regent believes that the Duke of Wellington is after him. The prince swaps clothes with Blackadder (who is his butler) and says, "This reminds me of that story ' The Prince and the Porpoise '. '' Blackadder corrects him: "Pauper. The Prince and the Pauper. '' Since Blackadder the Third is set during the early 1800s, this is an anachronism. In 1996, PBS aired a Wishbone adaptation titled "The Pooch and the Pauper '' with Wishbone playing both Tom Canty and Edward VI. The BBC produced a six - part dramatization of the story in 1996, adapted by Julian Fellowes, starring James Purefoy, with Keith Michell reprising his role of Henry VIII. A 2011 episode of Phineas and Ferb ("Make Play '', season 2, episode 64) follows a similar storyline, with Candace switching places with Princess Baldegunde of Duselstein and discovering that royal life is dull. The 2017 Japanese anime series Princess Principal uses a similar story as the background for the characters Ange and Princess Charlotte; their history is revealed by Ange under the guise of a fairy tale named "The Princess and the Pickpocket ''. Ten years prior to the start of the series, Ange, who was actually the real Princess Charlotte, met Princess, who was actually a common pickpocket named Ange and looked identical to her. They befriended one another and eventually decided to trade places for a day. Soon after the switch, however, a revolution broke out and divided their country, separating the girls and leaving them trapped in each other 's roles. In 1996, C&E, a Taiwanese software company, released an RPG video game for the Sega Genesis entitled Xin Qigai Wangzi ("New Beggar Prince ''). Its story was inspired by the book, with the addition of fantastic elements such as magic, monsters, and other RPG themes. The game was ported to PC in 1998. It was eventually licensed in an English translation and released in 2006 as Beggar Prince by independent game publisher Super Fighter Team. This was one of the first new games for the discontinued Sega platform since 1998 and is perhaps the first video game adaptation of the book.
the typical u.s. recession is marked by an average economic decline of 4 percent
List of recessions in the United States - wikipedia There have been as many as 47 recessions in the United States dating back to the Articles of Confederation, and although economists and historians dispute certain 19th - century recessions, the consensus view among economists and historians is that "The cyclical volatility of GNP and unemployment was greater before the Great Depression than it has been since the end of World War II. '' Cycles in the country 's agricultural production, industrial production, consumption, business investment, and the health of the banking industry contribute to these declines. U.S. recessions have increasingly affected economies on a worldwide scale, especially as countries ' economies become more intertwined. The unofficial beginning and ending dates of recessions in the United States have been defined by the National Bureau of Economic Research (NBER), an American private nonprofit research organization. The NBER defines a recession as "a significant decline in economic activity spread across the economy, lasting more than a few months, normally visible in real gross domestic product (GDP), real income, employment, industrial production, and wholesale - retail sales ''. In the 19th century, recessions frequently coincided with financial crises. Determining the occurrence of pre-20th - century recessions is more difficult due to the dearth of economic statistics, so scholars rely on historical accounts of economic activity, such as contemporary newspapers or business ledgers. Although the NBER does not date recessions before 1857, economists customarily extrapolate dates of U.S. recessions back to 1790 from business annals based on various contemporary descriptions. Their work is aided by historical patterns, in that recessions often follow external shocks to the economic system such as wars and variations in the weather affecting agriculture, as well as banking crises. Major modern economic statistics, such as unemployment and GDP, were not compiled on a regular and standardized basis until after World War II. The average duration of the 11 recessions between 1945 and 2001 is 10 months, compared to 18 months for recessions between 1919 and 1945, and 22 months for recessions from 1854 to 1919. Because of the great changes in the economy over the centuries, it is difficult to compare the severity of modern recessions to early recessions. No recession of the post-World War II era has come anywhere near the depth of the Great Depression, which lasted from 1929 until 1941 and was caused by the 1929 crash of the stock market and other factors. Attempts have been made to date recessions in America beginning in 1790. These periods of recession were not identified until the 1920s. To construct the dates, researchers studied business annals during the period and constructed time series of the data. The earliest recessions for which there is the most certainty are those that coincide with major financial crises. Beginning in 1835, an index of business activity by the Cleveland Trust Company provides data for comparison between recessions. Beginning in 1854, the National Bureau of Economic Research dates recession peaks and troughs to the month. However, a standardized index does not exist for the earliest recessions. In 1791, Congress chartered the First Bank of the United States to handle the country 's financial needs. The bank had some functions of a modern central bank, although it was responsible for only 20 % of the young country 's currency.
In 1811 the bank 's charter lapsed; it was replaced by the Second Bank of the United States, which lasted from 1816 to 1836. In the 1830s, U.S. President Andrew Jackson fought to end the Second Bank of the United States. Following the Bank War, the Second Bank lost its charter in 1836. From 1837 to 1862, there was no national presence in banking, but still plenty of state and even local regulation, such as laws against branch banking which prevented diversification. In 1863, in response to financing pressures of the Civil War, Congress passed the National Banking Act, creating nationally chartered banks. There was neither a central bank nor deposit insurance during this era, and thus banking panics were common. Recessions often led to bank panics and financial crises, which in turn worsened the recession. The dating of recessions during this period is controversial. Modern economic statistics, such as gross domestic product and unemployment, were not gathered during this period. Victor Zarnowitz evaluated a variety of indices to measure the severity of these recessions. From 1834 to 1929, one measure of recessions is the Cleveland Trust Company index, which measured business activity; beginning in 1882, an index of trade and industrial activity was also available, which can be used to compare recessions. Following the end of World War II and the large adjustment of the economy from wartime to peacetime in 1945, the collection of many economic indicators, such as unemployment and GDP, became standardized. Recessions after World War II may be compared to each other much more easily than previous recessions because of these available data. The listed dates and durations are from the official chronology of the National Bureau of Economic Research. GDP data are from the Bureau of Economic Analysis, unemployment from the Bureau of Labor Statistics (after 1948). Note that the unemployment rate often reaches a peak associated with a recession after the recession has officially ended. No recession of the post-World War II era has come anywhere near the depth of the Great Depression, in which GDP fell by 27 % (the deepest after demobilization was the recession beginning in December 2007, during which GDP had fallen 5.1 % as of the second quarter of 2009) and the unemployment rate reached about 25 % in 1933 (the highest rates since have been the 10.8 % reached during the 1981 -- 82 recession and the 10 % reached in 2009). The National Bureau of Economic Research dates recessions on a monthly basis back to 1854; according to their chronology, from 1854 to 1919, there were 16 cycles. The average recession lasted 22 months, and the average expansion 27. From 1919 to 1945, there were six cycles; recessions lasted an average of 18 months and expansions 35. From 1945 to 2001, there were 10 cycles; recessions lasted an average of 10 months and expansions an average of 57 months. This has prompted some economists to declare that the business cycle has become less severe. Factors that may have contributed to this moderation include the creation of a central bank and lender of last resort, like the Federal Reserve System in 1913, the establishment of deposit insurance in the form of the Federal Deposit Insurance Corporation in 1933, increased regulation of the banking sector, the adoption of interventionist Keynesian economics, and the increase in automatic stabilizers in the form of government programs (unemployment insurance, social security, and later Medicare and Medicaid). See Post-World War II economic expansion for further discussion.
an s-wave shadow zone is formed as seismic
Shadow zone - wikipedia A seismic shadow zone is an area of the Earth 's surface where seismographs can only barely detect an earthquake after its seismic waves have passed through the Earth. When an earthquake occurs, seismic waves radiate out spherically from the earthquake 's focus. The primary (P) waves are refracted by the liquid outer core of the Earth and are not detected between 104 ° and 140 ° (between approximately 11,570 and 15,570 km or 7,190 and 9,670 mi) from the epicenter. The secondary (S) waves cannot pass through the liquid outer core and are not detected more than 104 ° (approximately 11,570 km or 7,190 mi) from the epicenter. P - waves that have been converted to S - waves on leaving the outer core may be detected beyond 140 °. The reason for this is that the velocities of P - waves and S - waves are governed by the properties of the material through which they travel, combined in a different mathematical relationship for each wave type. The three relevant properties are incompressibility ($k$), density ($p$) and rigidity ($u$). P - wave velocity is $\sqrt{(k + \tfrac{4}{3}u)/p}$, whereas S - wave velocity is $\sqrt{u/p}$, so S - wave velocity depends entirely on the rigidity of the material it travels through. Liquids have zero rigidity, making the S - wave velocity zero; S - waves therefore cannot travel through a liquid. P - wave velocity depends only partly on rigidity, so P - waves maintain some velocity (if greatly reduced) when travelling through a liquid. Analysis of the seismology of various recorded earthquakes and their shadow zones led geologist Richard Oldham to deduce in 1906 the liquid nature of the Earth 's outer core.
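As a rough illustration of these relationships, the following Python sketch computes the two wave velocities and converts the shadow - zone angles into surface distances. The material constants (k, u, p) are illustrative assumptions rather than values taken from the article; only the arc - length conversion reproduces the article 's approximate distances for 104 ° and 140 °.

import math

def p_wave_velocity(k, u, p):
    # v_P = sqrt((k + (4/3) * u) / p): depends on incompressibility k,
    # rigidity u and density p, so P-waves keep a (reduced) velocity in a liquid.
    return math.sqrt((k + (4.0 / 3.0) * u) / p)

def s_wave_velocity(u, p):
    # v_S = sqrt(u / p): depends only on rigidity and density,
    # so the velocity drops to zero when rigidity is zero (a liquid).
    return math.sqrt(u / p)

# Illustrative (assumed) outer-core values: k ~ 6.4e11 Pa, p ~ 1.1e4 kg/m^3;
# a liquid has rigidity u = 0.
print(p_wave_velocity(6.4e11, 0.0, 1.1e4))  # ~7600 m/s: P-waves still propagate
print(s_wave_velocity(0.0, 1.1e4))          # 0.0 m/s: S-waves cannot propagate

# Converting the shadow-zone angles to distances along the surface:
# arc length = (angle / 360) * 2 * pi * R_earth.
R_EARTH_KM = 6371.0
for angle_deg in (104.0, 140.0):
    arc_km = (angle_deg / 360.0) * 2.0 * math.pi * R_EARTH_KM
    print(f"{angle_deg:.0f} deg ~ {arc_km:,.0f} km")  # ~11,565 km and ~15,568 km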
who was the first governor general of free india
Governor - General of India - Wikipedia The Governor - General of India (or, from 1858 to 1947, officially the Viceroy and Governor - General of India, commonly shortened to Viceroy of India) was originally the head of the British administration in India and, later, after Indian independence in 1947, the representative of the Indian head of state. The office was created in 1773, with the title of Governor - General of the Presidency of Fort William. The officer had direct control only over Fort William, but supervised other British East India Company officials in India. Complete authority over all of British India was granted in 1833, and the official came to be known as the "Governor - General of India ''. In 1858, the territories of the East India Company came under the direct control of the British government; see British Raj. The governor - general (now also the viceroy) headed the central government of India, which administered the provinces of British India, including the Punjab, Bengal, Bombay, Madras, the United Provinces, and others. However, much of India was not ruled directly by the British government; outside the provinces of British India, there were hundreds of nominally sovereign princely states or "native states '', whose relationship was not with the British government, but directly with the monarch. To reflect the governor - general 's role as the representative of the monarch to the feudal rulers of the princely states, from 1858 the term Viceroy and Governor - General of India (known in short as the Viceroy of India) was applied to him. The title of viceroy was abandoned when British India split into the two independent dominions of India and Pakistan, but the office of governor - general continued to exist in each country separately -- until they adopted republican constitutions in 1950 and 1956, respectively. Until 1858, the governor - general was selected by the Court of Directors of the East India Company, to whom he was responsible. Thereafter, he was appointed by the sovereign on the advice of the British government; the Secretary of State for India, a member of the UK Cabinet, was responsible for instructing him on the exercise of his powers. After 1947, the sovereign continued to appoint the governor - general, but did so on the advice of the Indian government. Governors - General served at the pleasure of the sovereign, though the practice was to have them serve five - year terms. Governors - General could have their commission rescinded; and if one was removed, or left, a provisional governor - general was sometimes appointed until a new holder of the office could be chosen. The first Governor - General of British India was Warren Hastings, and the first Governor - General of independent India was Louis Mountbatten. Many parts of the Indian subcontinent were governed by the East India Company, which nominally acted as the agent of the Mughal Emperor. In 1773, motivated by corruption in the Company, the British government assumed partial control over the governance of India with the passage of the Regulating Act of 1773. A Governor - General and Supreme Council of Bengal were appointed to rule over the Presidency of Fort William in Bengal. The first Governor - General and Council were named in the Act. The Charter Act 1833 replaced the Governor - General and Council of Fort William with the Governor - General and Council of India. 
The power to elect the Governor - General was retained by the Court of Directors, but the choice became subject to the Sovereign 's approval. After the Indian Rebellion of 1857, the East India Company 's territories in India were put under the direct control of the Sovereign. The Government of India Act 1858 vested the power to appoint the Governor - General in the Sovereign. The Governor - General, in turn, had the power to appoint all lieutenant governors in India, subject to the Sovereign 's approval. India and Pakistan acquired independence in 1947, but Governors - General continued to be appointed over each nation until republican constitutions were written. Louis Mountbatten, 1st Earl Mountbatten of Burma, remained Governor - General of India for some time after independence, but the two nations were otherwise headed by native Governors - General. India became a secular republic in 1950; Pakistan became an Islamic one in 1956. The Governor - General originally had power only over the Presidency of Fort William in Bengal. The Regulating Act, however, granted him additional powers relating to foreign affairs and defence. The other Presidencies of the East India Company (Madras, Bombay and Bencoolen) were not allowed to declare war on or make peace with an Indian prince without receiving the prior approval of the Governor - General and Council of Fort William. The powers of the Governor - General, in respect of foreign affairs, were increased by the India Act 1784. The Act provided that the other Governors under the East India Company could not declare war, make peace or conclude a treaty with an Indian prince unless expressly directed to do so by the Governor - General or by the Company 's Court of Directors. While the Governor - General thus became the controller of foreign policy in India, he was not the explicit head of British India. That status came only with the Charter Act 1833, which granted him "superintendence, direction and control of the whole civil and military Government '' of all of British India. The Act also granted legislative powers to the Governor - General and Council. After 1858, the Governor - General (now usually known as the Viceroy) functioned as the chief administrator of India and as the Sovereign 's representative. India was divided into numerous provinces, each under the head of a Governor, Lieutenant Governor or Chief Commissioner or Administrator. Governors were appointed by the British Government, to whom they were directly responsible; Lieutenant Governors, Chief Commissioners, and Administrators, however, were appointed by and were subordinate to the Viceroy. The Viceroy also oversaw the most powerful princely rulers: the Nizam of Hyderabad, the Maharaja of Mysore, the Maharaja (Scindia) of Gwalior, the Maharaja of Jammu and Kashmir and the Gaekwad (Gaekwar) Maharaja of Baroda. The remaining princely rulers were overseen either by the Rajputana Agency and Central India Agency, which were headed by representatives of the Viceroy, or by provincial authorities. The Chamber of Princes was an institution established in 1920 by a Royal Proclamation of King - Emperor George V to provide a forum in which the princely rulers could voice their needs and aspirations to the government. The chamber usually met only once a year, with the Viceroy presiding, but it appointed a Standing Committee, which met more often. Upon independence in August 1947, the title of Viceroy was abolished.
The representative of the British Sovereign became known once again as the Governor - General. C. Rajagopalachari became the only Indian Governor - General. However, once India acquired independence, the Governor - General 's role became almost entirely ceremonial, with power being exercised on a day - to - day basis by the Indian cabinet. After the nation became a republic in 1950, the President of India continued to perform the same functions. The Governor - General was always advised by a Council on the exercise of his legislative and executive powers. The Governor - General, while exercising many functions, was referred to as the "Governor - General in Council. '' The Regulating Act 1773 provided for the election of four counsellors by the East India Company 's Court of Directors. The Governor - General had a vote along with the counsellors, but he also had an additional vote to break ties. The decision of the Council was binding on the Governor - General. In 1784, the Council was reduced to three members; the Governor - General continued to have both an ordinary vote and a casting vote. In 1786, the power of the Governor - General was increased even further, as Council decisions ceased to be binding. The Charter Act 1833 made further changes to the structure of the Council. The Act was the first law to distinguish between the executive and legislative responsibilities of the Governor - General. As provided under the Act, there were to be four members of the Council elected by the Court of Directors. The first three members were permitted to participate on all occasions, but the fourth member was only allowed to sit and vote when legislation was being debated. In 1858, the Court of Directors ceased to have the power to elect members of the Council. Instead, the one member who had a vote only on legislative questions came to be appointed by the Sovereign, and the other three members by the Secretary of State for India. The Indian Councils Act 1861 made several changes to the Council 's composition. Three members were to be appointed by the Secretary of State for India, and two by the Sovereign. (The power to appoint all five members passed to the Crown in 1869). The Viceroy was empowered to appoint an additional six to twelve members (changed to ten to sixteen in 1892, and to sixty in 1909). The five individuals appointed by the Sovereign or the Indian Secretary headed the executive departments, while those appointed by the Viceroy debated and voted on legislation. In 1919, an Indian legislature, consisting of a Council of State and a Legislative Assembly, took over the legislative functions of the Viceroy 's Council. The Viceroy nonetheless retained significant power over legislation. He could authorise the expenditure of money without the Legislature 's consent for "ecclesiastical, political (and) defense '' purposes, and for any purpose during "emergencies. '' He was permitted to veto, or even stop debate on, any bill. If he recommended the passage of a bill, but only one chamber cooperated, he could declare the bill passed over the objections of the other chamber. The Legislature had no authority over foreign affairs and defence. The President of the Council of State was appointed by the Viceroy; the Legislative Assembly elected its President, but the election required the Viceroy 's approval. Until 1833, the title of the position was "Governor - General of Bengal ''. The Government of India Act 1833 converted the title into "Governor - General of India. 
'' The title "Viceroy and Governor - General '' was first used in the queen 's proclamation appointing Viscount Canning in 1858. It was never conferred by an act of parliament, but was used in warrants of precedence and in the statutes of knightly orders. In usage, "viceroy '' is employed where the governor - general 's position as the monarch 's representative is in view. The viceregal title was not used when the sovereign was present in India. It was meant to indicate new responsibilities, especially ritualistic ones, but it conferred no new statutory authority. The governor - general regularly used the title in communications with the Imperial Legislative Council, but all legislation was made only in the name of the Governor - General - in - Council (or the Government of India). The Governor - General was styled Excellency and enjoyed precedence over all other government officials in India. He was referred to as ' His Excellency ' and addressed as ' Your Excellency '. From 1858 to 1947, the Governor - General was known as the Viceroy of India (from the French roi, meaning ' king '), and wives of Viceroys were known as Vicereines (from the French reine, meaning ' queen '). The Vicereine was referred to as ' Her Excellency ' and was also addressed as ' Your Excellency '. Neither title was employed while the Sovereign was in India. However, the only reigning British Sovereign to visit India during the period of British rule was King George V, who accompanied by his consort Queen Mary attended the Delhi Durbar in 1911. When the Order of the Star of India was founded in 1861, the Viceroy was made its Grand Master ex officio. The Viceroy was also made the ex officio Grand Master of the Order of the Indian Empire upon its foundation in 1877. Most Governors - General and Viceroys were peers. Frequently, a Viceroy who was already a peer would be granted a peerage of higher rank, as with the granting of a marquessate to Lord Reading and an earldom and later a marquessate to Freeman Freeman - Thomas. Of those Viceroys who were not peers, Sir John Shore was a baronet, and Lord William Bentinck was entitled to the courtesy title ' Lord ' because he was the son of a Duke. Only the first and last Governors - General -- Warren Hastings and Chakravarti Rajagopalachari -- as well as some provisional Governors - General, had no honorific titles at all. From around 1885, the Viceroy of India was allowed to fly a Union Flag augmented in the centre with the ' Star of India ' surmounted by a Crown. This flag was not the Viceroy 's personal flag; it was also used by Governors, Lieutenant Governors, Chief Commissioners and other British officers in India. When at sea, only the Viceroy flew the flag from the mainmast, while other officials flew it from the foremast. From 1947 to 1950, the Governor - General of India used a dark blue flag bearing the royal crest (a lion standing on the Crown), beneath which was the word ' India ' in gold majuscules. The same design is still used by many other Commonwealth Realm Governors - General. This last flag was the personal flag of the Governor - General only. The Governor - General of Fort William resided in Belvedere House, Calcutta, until the early nineteenth century, when Government House was constructed. In 1854, the Lieutenant Governor of Bengal took up residence there. Now, the Belvedere Estate houses the National Library of India. 
Lord Wellesley, who is reputed to have said that ' India should be governed from a palace, not from a country house ', constructed a grand mansion, known as Government House, between 1799 and 1803. The mansion remained in use until the capital moved from Calcutta to Delhi in 1912. Thereafter, the Lieutenant Governor of Bengal, who had hitherto resided in Belvedere House, was upgraded to a full Governor and transferred to Government House. Now, it serves as the residence of the Governor of the Indian state of West Bengal, and is referred to by its Bengali name Raj Bhavan. After the capital moved from Calcutta to Delhi, the Viceroy occupied the newly built Viceroy 's House, designed by Sir Edwin Lutyens. Though construction began in 1912, it did not conclude until 1929; the palace was not formally inaugurated until 1931. The final cost exceeded £ 877,000 (over £ 35,000,000 in modern terms) -- more than twice the figure originally allocated. Today the residence, now known by the Hindi name of ' Rashtrapati Bhavan ', is used by the President of India. Throughout the British administration, Governors - General retreated to the Viceregal Lodge (Rashtrapati Niwas) at Shimla each summer to escape the heat, and the government of India moved with them. The Viceregal Lodge now houses the Indian Institute of Advanced Study.
who played ulysses s grant in wild wild west
Wild Wild West - wikipedia Wild Wild West is a 1999 American western action comedy film directed by Barry Sonnenfeld. It was written by S.S. Wilson and Brent Maddock (whose previous collaborations include the Short Circuit and Tremors franchises), along with Jeffrey Price and Peter S. Seaman. A big - screen adaptation of the 1960s TV series The Wild Wild West, it stars Will Smith, Kevin Kline, Kenneth Branagh and Salma Hayek. The film takes the steampunk gadgetry from the original series to still more fantastical lengths with a series of unlikely contraptions culminating in a giant mechanical spider. Wild Wild West was a commercial disappointment, earning only $222.1 million worldwide against a $170 million budget, and received predominantly negative reviews. In 1869, 4 years after the end of the American Civil War, U.S. Army Captain James West (Will Smith) and U.S. Marshal Artemus Gordon (Kevin Kline) hunt for Confederate General "Bloodbath '' McGrath (Ted Levine), who is wanted for mass murder, throughout the Southern United States. This is due to McGrath ordering a massacre in a settlement called New Liberty, where many of the freed slaves were murdered, including West 's biological parents. The search leads to a brothel where the two try (unsuccessfully) to arrest him. It leads to a huge brawl and a cart of nitroglycerin crashing into the building, starting a fire. Both West and Gordon -- Gordon dressed as a woman -- escape. Later, in Washington, D.C., West and Gordon meet at the White House with President Ulysses S. Grant (Kevin Kline), who tells them about the disappearance of America 's key scientists and a treasonous plot by General "Bloodbath '' McGrath. Grant charges the two with finding the scientists before he inaugurates the first transcontinental railroad at Promontory, Utah. On board their train, Gordon examines the head of a murdered scientist, using a projection device to reveal the last thing the scientist saw. Finding McGrath and a clue in the image, they head to New Orleans, pursuing a lead about Dr. Arliss Loveless (Kenneth Branagh), an ex-Confederate scientist in a steam - powered wheelchair, who is hosting a party for the elite of Southern society. West mistakes a female guest for a disguised Gordon and makes an error that results in the guests wanting to lynch West. Meanwhile, Gordon roams the mansion and comes across a caged Rita Escobar (Salma Hayek), rescuing her. Gordon frees West from the lynching with an elastic rope, and the three escape to their train The Wanderer. On board, Rita asks for their help in rescuing her father, one of the kidnapped scientists, Professor Escobar. Later, Loveless hosts a reception to demonstrate his newest weapon: a steam - powered tank. The tank uses General McGrath 's soldiers as target practice, which angers McGrath. When McGrath demands an explanation, Loveless accuses him of "betrayal '' for surrendering at Appomattox, then shoots McGrath and leaves him for dead. As Loveless and his troops head over to Utah, Gordon, West and Rita find the dying McGrath, who reveals that he was framed by Loveless for the massacre of New Liberty, explaining that Loveless used the tank to kill the people there. The three then pursue Loveless on The Wanderer, but having expected their arrival and using steam - powered hydraulics, Loveless maneuvers his train behind The Wanderer. West manages to disable Loveless 's train, but not before Loveless uses a cannon - launched grappling hook to stop The Wanderer. 
Rita, afraid of being recaptured by Loveless, grabs one of Gordon 's explosive - rigged pool balls and accidentally releases sleeping gas that knocks out West, Gordon, and herself. West and Gordon wake up as Loveless and his posse pull away in The Wanderer, taking Rita hostage; Loveless announces that he intends to capture President Grant at the "golden spike '' ceremony and that West and Gordon will be killed should they step outside the trap they are in. Escaping the trap, the two stumble across Loveless 's private rail line, which leads them to his industrial complex, hidden in Spider Canyon. Here, they witness Loveless 's ultimate weapon: a gigantic mechanical spider armed with two nitroglycerin cannons. Loveless uses the spider to capture President Grant and Gordon at the ceremony at Promontory Point, while West is seemingly shot by one of Loveless 's bodyguards. At his industrial complex, Loveless reveals his plan: to destroy the United States with his mechanized forces unless President Grant agrees to divide the states among Great Britain, France, Spain, Mexico, the Native American people, and himself. When Grant refuses to surrender, Loveless orders Gordon to be shot. However, West, who had survived thanks to a chain mail vest Gordon gave him, disguises himself and manages to distract Loveless, allowing Gordon to free the captives. Unfortunately, Loveless escapes on his spider in the ensuing battle, taking the President with him. To save the President, Gordon and West build a flying machine to overtake the spider as Loveless attacks a small town in an attempt to force Grant to sign the surrender. Gordon and West crash onto the spider, but manage to grab onto a beam before they can fall; Munitia, one of Loveless 's henchwomen, falls to her death after the collision. After West defeats the henchmen below, throwing one off the spider after tying him to a chain, a fight ensues between him and Loveless, now on mechanical legs. Gordon shoots a hole in Loveless 's hydraulic line, allowing West to gain the upper hand. This allows Gordon and Grant to defeat Loveless 's guards, and, pleading for his life, Loveless drags himself back to his wheelchair as the spider approaches a cliff. Loveless attempts to shoot West with a concealed gun, but hits the spider 's steam pipes, stopping it just before it plunges into the canyon. The abrupt stop leaves West and Loveless hanging precariously from the spider. Loveless tries to decide whether he should pull the chair 's lever that will release them or not, knowing it will send both him and West to their deaths if he does so. Loveless taunts West so much that West pulls the lever himself and survives by grabbing the ankles of the henchman he threw out earlier, while Loveless falls to his death. Grant promotes Gordon and West as Agent # 1 and Agent # 2 of his new U.S. Secret Service. Gordon asks which of them is 1 and 2, but the President brushes off the question as unimportant and tells them they will have plenty of time to talk about it on the way back, as he takes "The Wanderer ''. Gordon and West meet Rita again, both planning to court her, but she crushes their hopes, announcing that Professor Escobar is actually her husband. Gordon and West ride through the desert together. Gordon asks West "Mind if I ask you a question? '' West replies "Actually, I do, Artie. '' The camera pans out to show they are actually riding the mechanical spider. In January 1992, Variety reported that Warner Bros.
was planning a theatrical version of The Wild Wild West directed by Richard Donner, written by Shane Black, and starring Mel Gibson as James West (Donner directed three episodes of the original series). Donner and Gibson instead made a theatrical version of TV 's Maverick in 1994. The Wild Wild West motion picture continued in the development stage, with Tom Cruise rumored for the lead in 1995. Cruise instead revived Mission: Impossible the following year. Discussions with Will Smith and director Barry Sonnenfeld began in February 1997. Warner Bros. pursued George Clooney to co-star as Artemus Gordon, with Kevin Kline, Matthew McConaughey and Johnny Depp also in contention for the role, while Steve Wilson and Brent Maddock were rewriting the script between April and May 1997. Clooney signed on the following August, dropping out of Jack Frost, and writers Jeffrey Price and Peter S. Seaman were brought aboard for a rewrite. Filming was expected to begin in January 1998, but was pushed to April 22, 1998. Clooney dropped out, citing an agreement with Sonnenfeld: "Ultimately, we all decided that rather than damage this project trying to retrofit the role for me, it was better to step aside and let them get someone else. '' Significant changes were made to Dr. Loveless as portrayed by Kenneth Branagh in the film. He went from a dwarf to a man without legs; his first name was also changed from Miguelito to Arliss and he was given the motive of a Southerner who sought the defeat of the North after the Civil War. Kevin Kline plays Gordon, whose character was similar to the version played by Ross Martin except that he was much more competitive with James West, besides being much more egotistical. The film script had Kline 's Gordon invent more ridiculous, humor - related, and implausible contraptions than those created by Martin 's Gordon in the television series. The film also depicted West and Gordon as aggressive rivals, whereas in the television series, West and Gordon had a very close friendship and trusted each other with their lives. Also, while Gordon did indeed impersonate Grant in the series ("The Night of the Steel Assassin '', "The Night of the Colonel 's Ghost '' and "The Night of the Big Blackmail ''), the two were not played by the same actor. Additionally, on the TV show, West was portrayed by Robert Conrad, a Caucasian, rather than an African American, which serves as a critical plot point in the film, as West 's parents were among the victims of Loveless ' massacre at New Liberty. Jon Peters served as producer along with director Sonnenfeld. In a 2002 Q&A event that appears in An Evening with Kevin Smith, writer - director Kevin Smith talked about working with Peters on a fifth potential Superman film in 1997, revealing that Peters had three demands for the script. The first demand was that Superman not wear the suit, the second was that Superman not fly, and the third was to have Superman fight a giant spider in the third act. After Tim Burton came on board, Smith 's script was tossed away and the film was never produced due to further complications. A year later, he noted that Wild Wild West, with Peters on board as producer, was released with the inclusion of a giant mechanical spider in the final act. Neil Gaiman has said that Jon Peters also insisted a giant mechanical spider be included in a film adaptation of The Sandman. Principal photography began in 1998. The sequences on both Artemus Gordon 's and Dr. Loveless 's train interiors were shot on sets at Warner Bros.
The train exteriors were shot in Idaho on the Camas Prairie Railroad. The Wanderer is portrayed by the Baltimore & Ohio 4 - 4 - 0 No. 25, one of the oldest operating steam locomotives in the U.S. Built in 1856 at the Mason Machine Works in Taunton, Massachusetts, it was later renamed the "William Mason '' in honor of its manufacturer. During pre-production the engine was sent to the steam shops at the Strasburg Railroad for restoration and repainting. The locomotive is brought out for the B&O Railroad Museum in Baltimore 's "Steam Days ''. The "William Mason '' and the "Inyo '', which was the locomotive used in the original television series, both appeared in the Disney film The Great Locomotive Chase (1956). Much of the ' Wild West ' footage was shot around Santa Fe, New Mexico, particularly at the western town set at the Cook Movie Ranch. During the shooting of a sequence involving stunts and pyrotechnics, a planned building fire grew out of control and quickly overwhelmed the local fire crews that were standing by. Much of the town was destroyed before the fire was contained.

In 1997, writer Gilbert Ralston sued Warner Bros. over the upcoming motion picture based on the series. Ralston helped create the original The Wild Wild West television series, and scripted the pilot episode, "The Night of the Inferno ''. In a deposition, Ralston explained that in 1964 he was approached by producer Michael Garrison, who "said he had an idea for a series, good commercial idea, and wanted to know if I could glue the idea of a western hero and a James Bond type together in the same show. '' Ralston said he then created the Civil War characters, the format, the story outline and nine drafts of the script that was the basis for the television series. It was his idea, for example, to have a secret agent named Jim West who would perform secret missions for a bumbling Ulysses S. Grant. Ralston 's experience brought to light a common Hollywood practice of the 1950s and 1960s when television writers who helped create popular series allowed producers or studios to take credit for a show, thus cheating the writers out of millions of dollars in royalties. Ralston died in 1999, before his suit was settled. Warner Bros. ended up paying his family between $600,000 and $1.5 million.

Wild Wild West received generally negative reviews from film critics, with a 17 % approval rating on Rotten Tomatoes and an average score of 4.1 out of 10, based on 130 reviews. The consensus states "Bombastic, manic, and largely laugh - free, Wild Wild West is a bizarre misfire in which greater care was lavished upon the special effects than on the script. '' On Metacritic the film has a score of 38 out of 100, based on 25 critics, indicating "generally unfavorable reviews ''. On CinemaScore, audiences gave the film an average grade of "C + '' on an A+ to F scale. Janet Maslin of The New York Times stated in her review that the film "leaves reality so far behind that its storytelling would be arbitrary even by comic - book standards, and its characters share no common ground or emotional connection. '' On a $170 million budget, the film grossed $113.8 million domestically and $108.3 million overseas for a worldwide total of $222.1 million. In its opening weekend the film grossed $27.7 million, finishing first at the box office. Roger Ebert of The Chicago Sun - Times awarded Wild Wild West a one - star rating and described it as "a comedy dead zone. You stare in disbelief as scenes flop and die.
The movie is all concept and no content; the elaborate special effects are like watching money burn on the screen. '' The film won five Golden Raspberry Awards, including Worst Picture. Each award was "accepted '' in person by Robert Conrad, who had portrayed Jim West in the original series and subsequent TV films. He accepted the awards to show his objections to the movie.

A soundtrack containing hip hop and R&B music was released on June 15, 1999, by Interscope Records. It peaked at # 4 on the Billboard 200 and # 4 on the Top R&B / Hip - Hop Albums. The film 's orchestral score, including its main theme, was composed and conducted by Elmer Bernstein, a veteran of many straight western movie scores, such as The Magnificent Seven. The score mainly follows the western genre 's symphonic tradition, while at times also acknowledging the film 's anachronistic playfulness by employing a more contemporary music style with notable rock percussion and electronic organ. The score also briefly incorporates Richard Markowitz 's theme from the television series in one cue, uncredited in the film (and not included on the album) -- ironically, this was one of the few elements to be faithful to the original series, which also did n't credit Markowitz for the theme. Additional parts of the score were composed by Elmer Bernstein 's son, Peter, while Bernstein 's daughter, Emilie, served as one of the orchestrators and producers. Thirty minutes of the film 's orchestral music were released on CD from Varèse Sarabande in 1999. Elmer Bernstein won an ASCAP Award in the category Top Box Office Films.

As with most of Smith 's films during this period, a hip hop single by the rapper / actor, called "Wild Wild West '', served as the promotional theme song for the film. The single was a # 1 hit on the U.S. pop charts, but also won the Razzie Award for Worst Original Song. It was produced by Rob Fusari, who lifted a sample from Stevie Wonder 's 1976 hit "I Wish ''. The song features guest vocals from R&B group Dru Hill, and was a star - making vehicle for Dru Hill lead singer Sisqó. Old school rapper Kool Moe Dee had recorded a "Wild Wild West '' single of his own in 1987, and re-performs the chorus of his earlier song as the chorus of the new one. (A performance of the song by Smith, Dee, Dru Hill and Sisqó at the 1999 MTV Movie Awards included Wonder performing a reprise of the chorus on piano.) The song "Bailamos '', sung by Enrique Iglesias, is also heard during the film 's end titles. The music videos for both end title songs are featured on the DVD. Several songs not heard in the film itself are featured on the promotional CD album Wild Wild West: Music Inspired By The Motion Picture (released by Interscope Records on June 15, 1999). These include the song "Bad Guys Always Die '', recorded by Dr. Dre & Eminem. ("Wild Wild West '' and "Bailamos '' are the only songs on the album to be heard in the film). A tie - in video game, Wild Wild West: The Steel Assassin, was published in 1999.
when is the first college football game played this year
College football - Wikipedia College football is American football played by teams of student athletes fielded by American universities, colleges, and military academies, or Canadian football played by teams of student athletes fielded by Canadian universities. It was through college football play that American football rules first gained popularity in the United States. Unlike most other sports in North America, no minor league farm organizations exist in American football. Therefore, college football is generally considered to be the second tier of American football in the United States: one step ahead of high school competition, and one step below professional competition. It is in college football where a player 's performance directly impacts his chances of playing professional football. The best collegiate players will typically declare for the professional draft after 3 to 4 years of collegiate competition, with the NFL holding its annual draft every spring, in which 256 players are selected. Those not selected can still attempt to land an NFL roster spot as an undrafted free agent. Even after the emergence of the professional National Football League (NFL), college football remained extremely popular throughout the U.S. Although the college game has a much larger margin for talent than its pro counterpart, the sheer number of fans following major colleges provides a financial equalizer for the game, with Division I programs -- the highest level -- playing in huge stadiums, six of which have seating capacity exceeding 100,000. In many cases, college stadiums employ bench - style seating, as opposed to individual seats with backs and arm rests (although many stadiums do have a small number of chairback seats in addition to the bench seating). This allows them to seat more fans in a given amount of space than the typical professional stadium, which tends to have more features and comforts for fans. (Only three stadiums owned by U.S. colleges or universities -- Papa John 's Cardinal Stadium at the University of Louisville, Georgia State Stadium at Georgia State University, and FAU Stadium at Florida Atlantic University -- consist entirely of chairback seating.) College athletes, unlike players in the NFL, are not permitted by the NCAA to be paid salaries. Colleges are only allowed to provide non-monetary compensation such as athletic scholarships that provide for tuition, housing, and books.

Modern North American football has its origins in various games, all known as "football '', played at public schools in England in the mid-19th century. By the 1840s, students at Rugby School were playing a game in which players were able to pick up the ball and run with it, a sport later known as Rugby football. The game was taken to Canada by British soldiers stationed there and was soon being played at Canadian colleges. The first documented gridiron football match was a game played at University College, a college of the University of Toronto, on November 9, 1861. One of the participants in the game involving University of Toronto students was (Sir) William Mulock, later Chancellor of the school. A football club was formed at the university soon afterward, although its rules of play at this stage are unclear. In 1864, at Trinity College, also a college of the University of Toronto, F. Barlow Cumberland and Frederick A. Bethune devised rules based on rugby football.
Modern Canadian football is widely regarded as having originated with a game played in Montreal, in 1865, when British Army officers played local civilians. The game gradually gained a following, and the Montreal Football Club was formed in 1868, the first recorded non-university football club in Canada. Early games appear to have had much in common with the traditional "mob football '' played in England. The games remained largely unorganized until the 19th century, when intramural games of football began to be played on college campuses. Each school played its own variety of football. Princeton University students played a game called "ballown '' as early as 1820. A Harvard tradition known as "Bloody Monday '' began in 1827, which consisted of a mass ballgame between the freshman and sophomore classes. In 1860, both the town police and the college authorities agreed that Bloody Monday had to go. The Harvard students responded by going into mourning for a mock figure called "Football Fightum '', for whom they conducted funeral rites. The authorities held firm and it was a dozen years before football was once again played at Harvard. Dartmouth played its own version called "Old division football '', the rules of which were first published in 1871, though the game dates to at least the 1830s. All of these games, and others, shared certain commonalities. They remained largely "mob '' style games, with huge numbers of players attempting to advance the ball into a goal area, often by any means necessary. Rules were simple; violence and injury were common. The violence of these mob - style games led to widespread protests and a decision to abandon them. Yale, under pressure from the city of New Haven, banned the play of all forms of football in 1860. American football historian Parke H. Davis described the period between 1869 and 1875 as the ' Pioneer Period '; the years 1876 -- 93 he called the ' Period of the American Intercollegiate Football Association '; and the years 1894 -- 1933 he dubbed the ' Period of Rules Committees and Conferences '.

On November 6, 1869, Rutgers University faced Princeton University (then known as the College of New Jersey) in the first - ever game of intercollegiate football. It was played with a round ball and, like all early games, used a set of rules suggested by Rutgers captain William J. Leggett, based on the Football Association 's first set of rules, which were an early attempt by the former pupils of England 's public schools to unify the rules of their schools ' games and create a universal and standardized set of rules for the game of football; these rules bore little resemblance to the American game that would be developed in the following decades. It is still usually regarded as the first game of college football. The game was played at a Rutgers field. Two teams of 25 players attempted to score by kicking the ball into the opposing team 's goal. Throwing or carrying the ball was not allowed, but there was plenty of physical contact between players. The first team to reach six goals was declared the winner. Rutgers won by a score of six to four. A rematch was played at Princeton a week later under Princeton 's own set of rules (one notable difference was the awarding of a "free kick '' to any player that caught the ball on the fly, which was a feature adopted from the Football Association 's rules; the fair catch kick rule has survived through to the modern American game). Princeton won that game by a score of 8 - 0.
Columbia joined the series in 1870, and by 1872 several schools were fielding intercollegiate teams, including Yale and Stevens Institute of Technology. Columbia University was the third school to field a team. The Lions traveled from New York City to New Brunswick on November 12, 1870 and were defeated by Rutgers 6 to 3. The game suffered from disorganization and the players kicked and battled each other as much as the ball. Later in 1870, Princeton and Rutgers played again, with Princeton defeating Rutgers 6 - 0. This game 's violence caused such an outcry that no games at all were played in 1871. Football came back in 1872, when Columbia played Yale for the first time. The Yale team was coached and captained by David Schley Schaff, who had learned to play football while attending Rugby school. Schaff himself was injured and unable to play the game, but Yale won the game 3 - 0 nonetheless. Later in 1872, Stevens Tech became the fifth school to field a team. Stevens lost to Columbia, but beat both New York University and City College of New York during the following year. By 1873, the college students playing football had made significant efforts to standardize their fledgling game. Teams had been scaled down from 25 players to 20. The only way to score was still to bat or kick the ball through the opposing team 's goal, and the game was played in two 45 - minute halves on fields 140 yards long and 70 yards wide. On October 20, 1873, representatives from Yale, Columbia, Princeton, and Rutgers met at the Fifth Avenue Hotel in New York City to codify the first set of intercollegiate football rules. Before this meeting, each school had its own set of rules and games were usually played using the home team 's own particular code. At this meeting, a list of rules, based more on the Football Association 's rules than the rules of the recently founded Rugby Football Union, was drawn up for intercollegiate football games.

Old "Football Fightum '' had been resurrected at Harvard in 1872, when Harvard resumed playing football. Harvard, however, preferred to play a rougher version of football called "the Boston Game '' in which the kicking of a round ball was the most prominent feature, though a player could run with the ball, pass it, or dribble it (known as "babying ''). The man with the ball could be tackled, although hitting, tripping, "hacking '' (shin - kicking) and other unnecessary roughness was prohibited. There was no limit to the number of players, but there were typically ten to fifteen per side. A player could carry the ball only when being pursued. As a result of this, Harvard refused to attend the rules conference organized by Rutgers, Princeton and Columbia at the Fifth Avenue Hotel in New York City on October 20, 1873 to agree on a set of rules and regulations that would allow them to play a form of football that was essentially Association football, and continued to play under its own code. While Harvard 's voluntary absence from the meeting made it hard for them to schedule games against other American universities, it agreed to a challenge to play the rugby team of McGill University, from Montreal, in a two - game series. It was agreed that two games would be played on Harvard 's Jarvis baseball field in Cambridge, Massachusetts on May 14 and 15, 1874: one to be played under Harvard rules, another under the stricter rugby regulations of McGill.
Jarvis Field was at the time a patch of land at the northern point of the Harvard campus, bordered by Everett and Jarvis Streets to the north and south, and Oxford Street and Massachusetts Avenue to the east and west. Harvard beat McGill in the "Boston Game '' on the Thursday and held McGill to a 0 - 0 tie on the Friday. The Harvard students took to the rugby rules and adopted them as their own. The games featured a round ball instead of a rugby - style oblong ball. This series of games represents an important milestone in the development of the modern game of American football. In October 1874, the Harvard team once again traveled to Montreal to play McGill in rugby, where they won by three tries. Inasmuch as Rugby football had been transplanted to Canada from England, the McGill team played under a set of rules which allowed a player to pick up the ball and run with it whenever he wished. Another rule, unique to McGill, was to count tries (the act of grounding the football past the opposing team 's goal line; it is important to note that there was no end zone during this time), as well as goals, in the scoring. In the Rugby rules of the time, a try only provided the attempt to kick a free goal from the field. If the kick was missed, the try did not score any points itself. Harvard quickly took a liking to the rugby game, and to its use of the try, which, until that time, was not used in American football. The try would later evolve into the score known as the touchdown. On June 4, 1875, Harvard faced Tufts University in the first game between two American colleges played under rules similar to the McGill / Harvard contest, which was won by Tufts. The rules included each side fielding 11 men at any given time, the ball was advanced by kicking or carrying it, and tackles of the ball carrier stopped play. Further elated by the excitement of McGill 's version of football, Harvard challenged its closest rival, Yale, and the Bulldogs accepted. The two teams agreed to play under a set of rules called the "Concessionary Rules '', which involved Harvard conceding something to Yale 's soccer and Yale conceding a great deal to Harvard 's rugby. They decided to play with 15 players on each team. On November 13, 1875, Yale and Harvard played each other for the first time ever, where Harvard won 4 - 0. At the first The Game (as the annual contest between Harvard and Yale came to be named) the future "father of American football '' Walter Camp was among the 2,000 spectators in attendance. Camp, who would enroll at Yale the next year, was torn between an admiration for Harvard 's style of play and the misery of the Yale defeat, and became determined to avenge it. Spectators from Princeton also carried the game back home, where it quickly became the most popular version of football.

On November 23, 1876, representatives from Harvard, Yale, Princeton, and Columbia met at the Massasoit House in Springfield, Massachusetts to standardize a new code of rules based on the rugby game first introduced to Harvard by McGill University in 1874. Three of the schools -- Harvard, Columbia, and Princeton -- formed the Intercollegiate Football Association as a result of the meeting. Yale initially refused to join this association because of a disagreement over the number of players to be allowed per team (relenting in 1879), and Rutgers were not invited to the meeting.
The rules that they agreed upon were essentially those of rugby union at the time, with the exception that points be awarded for scoring a try, not just the conversion afterwards (extra point). Incidentally, rugby was to make a similar change to its scoring system 10 years later.

Walter Camp is widely considered to be the most important figure in the development of American football. As a youth, he excelled in sports like track, baseball, and association football, and after enrolling at Yale in 1876, he earned varsity honors in every sport the school offered. Following the introduction of rugby - style rules to American football, Camp became a fixture at the Massasoit House conventions where rules were debated and changed. Dissatisfied with what seemed to him to be a disorganized mob, he proposed his first rule change at the first meeting he attended in 1878: a reduction from fifteen players to eleven. The motion was rejected at that time but passed in 1880. The effect was to open up the game and emphasize speed over strength. Camp 's most famous change, the establishment of the line of scrimmage and the snap from center to quarterback, was also passed in 1880. Originally, the snap was executed with the foot of the center. Later changes made it possible to snap the ball with the hands, either through the air or by a direct hand - to - hand pass. Rugby league followed Camp 's example, and in 1906 introduced the play - the - ball rule, which greatly resembled Camp 's early scrimmage and center - snap rules. In 1966, rugby league introduced a four - tackle rule (changed in 1972 to a six - tackle rule) based on Camp 's early down - and - distance rules. Camp 's new scrimmage rules revolutionized the game, though not always as intended. Princeton, in particular, used scrimmage play to slow the game, making incremental progress towards the end zone during each down. Rather than increase scoring, which had been Camp 's original intent, the rule was exploited to maintain control of the ball for the entire game, resulting in slow, unexciting contests. At the 1882 rules meeting, Camp proposed that a team be required to advance the ball a minimum of five yards within three downs. These down - and - distance rules, combined with the establishment of the line of scrimmage, transformed the game from a variation of rugby football into the distinct sport of American football. Camp was central to several more significant rule changes that came to define American football. In 1881, the field was reduced in size to its modern dimensions of 120 by 53 1⁄3 yards (109.7 by 48.8 meters). Several times in 1883, Camp tinkered with the scoring rules, finally arriving at four points for a touchdown, two points for kicks after touchdowns, two points for safeties, and five for field goals (a comparison of these values with their modern counterparts appears below). Camp 's innovations in the area of point scoring influenced rugby union 's move to point scoring in 1890. In 1887, game time was set at two halves of 45 minutes each. Also in 1887, two paid officials -- a referee and an umpire -- were mandated for each game. A year later, the rules were changed to allow tackling below the waist, and in 1889, the officials were given whistles and stopwatches. After leaving Yale in 1882, Camp was employed by the New Haven Clock Company until his death in 1925. Though no longer a player, he remained a fixture at annual rules meetings for most of his life, and he personally selected an annual All - American team every year from 1889 through 1924.
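To make Camp 's 1883 point values concrete, here is a worked comparison for a hypothetical game line (the example itself is illustrative, not drawn from the historical record; the modern values of 6, 1, and 3 points for a touchdown, kick after touchdown, and field goal are common knowledge rather than stated in this article). A team scoring two touchdowns, one kick after touchdown, and one field goal would post

2 \times 4 + 1 \times 2 + 1 \times 5 = 15 \quad \text{(1883 values)}

2 \times 6 + 1 \times 1 + 1 \times 3 = 16 \quad \text{(modern values)}

Note the different weightings: under Camp 's 1883 scale the field goal (5) outweighed the touchdown (4), the reverse of the modern relationship.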
The Walter Camp Football Foundation continues to select All - American teams in his honor.

College football expanded greatly during the last two decades of the 19th century. Several major rivalries date from this time period. November 1890 was an active time in the sport. In Baldwin City, Kansas, on November 22, 1890, college football was first played in the state of Kansas. Baker beat Kansas 22 -- 9. On the 27th, Vanderbilt played Nashville (Peabody) at Athletic Park and won 40 -- 0. It was the first time organized football was played in the state of Tennessee. The 29th also saw the first instance of the Army -- Navy Game. Navy won 24 -- 0.

Rutgers was first to extend the reach of the game. An intercollegiate game was first played in the state of New York when Rutgers played Columbia on November 2, 1872. It was also the first scoreless tie in the history of the fledgling sport. Yale football started the same year, with its first match against Columbia, the nearest college to play football. It took place at Hamilton Park in New Haven and was the first game in New England. The game was essentially soccer with 20 - man sides, played on a field 400 by 250 feet. Yale won 3 - 0, Tommy Sherman scoring the first goal and Lew Irwin the other two. After the first game against Harvard, Tufts took its squad to Bates College in Lewiston, Maine for the first football game played in Maine. This occurred on November 6, 1875. Penn 's Athletic Association was looking to pick "a twenty '' to play a game of football against Columbia. This "twenty '' never played Columbia, but did play twice against Princeton. Princeton won both games 6 to 0. The first of these happened on November 11, 1876 in Philadelphia and was the first intercollegiate game in the state of Pennsylvania. Brown entered the intercollegiate game in 1878. The first game where one team scored over 100 points happened on October 25, 1884 when Yale routed Dartmouth 113 -- 0. It was also the first time one team scored over 100 points while the opposing team was shut out. The next week, Princeton outscored Lafayette 140 to 0. The first intercollegiate game in the state of Vermont happened on November 6, 1886 between Dartmouth and Vermont at Burlington, Vermont. Dartmouth won 91 to 0. The first nighttime football game was played in Mansfield, Pennsylvania on September 28, 1892 between Mansfield State Normal and Wyoming Seminary and ended at halftime in a 0 -- 0 tie. The Army - Navy game of 1893 saw the first documented use of a football helmet by a player in a game. Joseph M. Reeves had a crude leather helmet made by a shoemaker in Annapolis and wore it in the game after being warned by his doctor that he risked death if he continued to play football after suffering an earlier kick to the head.

In 1879, the University of Michigan became the first school west of Pennsylvania to establish a college football team. On May 30, 1879, Michigan beat Racine College 1 -- 0 in a game played in Chicago. The Chicago Daily Tribune called it "the first rugby - football game to be played west of the Alleghenies. '' Other Midwestern schools soon followed suit, including the University of Chicago, Northwestern University, and the University of Minnesota. The first western team to travel east was the 1881 Michigan team, which played at Harvard, Yale and Princeton. The nation 's first college football league, the Intercollegiate Conference of Faculty Representatives (also known as the Western Conference), a precursor to the Big Ten Conference, was founded in 1895.
Led by coach Fielding H. Yost, Michigan became the first "western '' national power. From 1901 to 1905, Michigan had a 56 - game undefeated streak that included a 1902 trip to play in the first college football bowl game, which later became the Rose Bowl Game. During this streak, Michigan scored 2,831 points while allowing only 40. Organized intercollegiate football was first played in the state of Minnesota on September 30, 1882, when Hamline was convinced to play Minnesota. Minnesota won 2 to 0. It was the first game west of the Mississippi River. November 30, 1905, saw Chicago defeat Michigan 2 to 0. Dubbed "The First Greatest Game of the Century, '' it broke Michigan 's 56 - game unbeaten streak and marked the end of the "Point - a-Minute '' years.

Organized intercollegiate football was first played in the state of Virginia and the South on November 2, 1873 in Lexington between Washington and Lee and VMI. Washington and Lee won 4 -- 2. Some industrious students of the two schools organized a game for October 23, 1869 -- but it was rained out. Students of the University of Virginia were playing pickup games of the kicking - style of football as early as 1870, and some accounts even claim it organized a game against Washington and Lee College in 1871, just two years after Rutgers and Princeton 's historic first game in 1869; but no record has been found of the score of this contest. Due to the scantness of records of the prior matches, some claim the scoreless tie between the Virginia Cavaliers and Pantops Academy on November 13, 1887 as the first organized game in Virginia. On April 9, 1880 at Stoll Field, Transylvania University (then called Kentucky University) beat Centre College by the score of 13 3⁄4 -- 0 in what is often considered the first recorded game played in the South. The first game of "scientific football '' in the South was the first instance of the Victory Bell rivalry between North Carolina and Duke (then known as Trinity College) held on Thanksgiving Day, 1888, at the North Carolina State Fairgrounds in Raleigh, North Carolina. On October 18, 1888, the Wake Forest Demon Deacons defeated the North Carolina Tar Heels 6 to 4 in the first intercollegiate game in the state of North Carolina. On December 14, 1889, Wofford defeated Furman 5 to 1 in the first intercollegiate game in the state of South Carolina. The game featured no uniforms, no positions, and the rules were formulated before the game. January 30, 1892 saw the first football game played in the Deep South when the Georgia Bulldogs defeated Mercer 50 -- 0 at Herty Field.

The beginnings of the contemporary Southeastern Conference and Atlantic Coast Conference date to 1894. The Southern Intercollegiate Athletic Association (SIAA) was founded on December 21, 1894, by Dr. William Dudley, a chemistry professor at Vanderbilt. The original members were Alabama, Auburn, Georgia, Georgia Tech, North Carolina, Sewanee, and Vanderbilt.
Clemson, Cumberland, Kentucky, LSU, Mercer, Mississippi, Mississippi A&M (Mississippi State), Southwestern Presbyterian University, Tennessee, Texas, Tulane, and the University of Nashville joined the following year in 1895 as invited charter members. The conference was originally formed for "the development and purification of college athletics throughout the South ''. It is thought that the first forward pass in football occurred on October 26, 1895 in a game between Georgia and North Carolina when, out of desperation, the ball was thrown by the North Carolina back Joel Whitaker instead of punted, and George Stephens caught it. On November 9, 1895, John Heisman executed a hidden ball trick utilizing quarterback Reynolds Tichenor to get Auburn 's only touchdown in a 6 to 9 loss to Vanderbilt. It was the first game in the South decided by a field goal. Heisman later used the trick against Pop Warner 's Georgia team. Warner picked up the trick and later used it at Cornell against Penn State in 1897. He then used it in 1903 at Carlisle against Harvard and garnered national attention. The 1899 Sewanee Tigers are one of the all - time great teams of the early sport. The team went 12 -- 0, outscoring opponents 322 to 10. Known as the "Iron Men '', with just 13 men they had a six - day road trip with five shutout wins over Texas A&M, Texas, Tulane, LSU, and Ole Miss. It is recalled memorably with the phrase "... and on the seventh day they rested. '' Grantland Rice called them "the most durable football team I ever saw. ''

Organized intercollegiate football was first played in the state of Florida in 1901. A 7 - game series between intramural teams from Stetson and Forbes occurred in 1894. The first intercollegiate game between official varsity teams was played on November 22, 1901. Stetson beat Florida Agricultural College at Lake City, one of the four forerunners of the University of Florida, 6 - 0, in a game played as part of the Jacksonville Fair. On September 27, 1902, Georgetown beat Navy 4 to 0. It is claimed by Georgetown authorities as the game with the first ever "roving center '' or linebacker when Percy Given stood up, in contrast to the usual tale of Germany Schulz. The first linebacker in the South is often considered to be Frank Juhan. On Thanksgiving Day 1903, a game was scheduled in Montgomery, Alabama between the best teams from each region of the Southern Intercollegiate Athletic Association for an "SIAA championship game '', pitting Cumberland against Heisman 's Clemson. The game ended in an 11 -- 11 tie, causing many teams to claim the title. Heisman pressed hardest for Cumberland to get the claim of champion. It was his last game as Clemson head coach. 1904 saw big coaching hires in the South: Mike Donahue at Auburn, John Heisman at Georgia Tech, and Dan McGugin at Vanderbilt were all hired that year. Both Donahue and McGugin had just come from the North that year, Donahue from Yale and McGugin from Michigan, and were among the initial inductees of the College Football Hall of Fame. The undefeated 1904 Vanderbilt team scored an average of 52.7 points per game, the most in college football that season, and allowed just four points.

The first college football game in Oklahoma Territory occurred on November 7, 1895 when the ' Oklahoma City Terrors ' defeated the Oklahoma Sooners 34 to 0. The Terrors were a mix of Methodist college students and high schoolers. The Sooners did not manage a single first down. By next season, Oklahoma coach John A.
Harts had left to prospect for gold in the Arctic. Organized football was first played in the territory on November 29, 1894 between the Oklahoma City Terrors and Oklahoma City High School. The high school won 24 to 0.

The University of Southern California first fielded an American football team in 1888, playing its first game on November 14 of that year against the Alliance Athletic Club, in which USC gained a 16 -- 0 victory. Frank Suffel and Henry H. Goddard were playing coaches for the first team, which was put together by quarterback Arthur Carroll, who in turn volunteered to make the pants for the team and later became a tailor. USC faced its first collegiate opponent the following year in fall 1889, playing St. Vincent 's College to a 40 -- 0 victory. In 1893, USC joined the Intercollegiate Football Association of Southern California (the forerunner of the SCIAC), which was composed of USC, Occidental College, Throop Polytechnic Institute (Caltech), and Chaffey College. Pomona College was invited to enter, but declined to do so. An invitation was also extended to Los Angeles High School.

In 1891, the first Stanford football team was hastily organized and played a four - game season beginning in January 1892 with no official head coach. Following the season, Stanford captain John Whittemore wrote to Yale coach Walter Camp asking him to recommend a coach for Stanford. To Whittemore 's surprise, Camp agreed to coach the team himself, on the condition that he finish the season at Yale first. As a result of Camp 's late arrival, Stanford played just three official games, against San Francisco 's Olympic Club and rival California. The team also played exhibition games against two Los Angeles area teams that Stanford does not include in official results. Camp returned to the East Coast following the season, then returned to coach Stanford in 1894 and 1895. On December 25, 1894, Amos Alonzo Stagg 's Chicago Maroons agreed to play Camp 's Stanford football team in San Francisco in the first postseason intersectional contest, foreshadowing the modern bowl game. Future president Herbert Hoover was Stanford 's student financial manager. Chicago won 24 to 4. Stanford won a rematch in Los Angeles on December 29 by 12 to 0.

The Big Game between Stanford and California is the oldest college football rivalry in the West. The first game was played on San Francisco 's Haight Street Grounds on March 19, 1892, with Stanford winning 14 -- 10. The term "Big Game '' was first used in 1900, when it was played on Thanksgiving Day in San Francisco. During that game, a large group of men and boys, who were observing from the roof of the nearby S.F. and Pacific Glass Works, fell into the fiery interior of the building when the roof collapsed, resulting in 13 dead and 78 injured. On December 4, 1900, the last victim of the disaster (Fred Lilly) died, bringing the death toll to 22; and, to this day, the "Thanksgiving Day Disaster '' remains the deadliest accident to kill spectators at a U.S. sporting event.

The University of Oregon began playing American football in 1894 and played its first game on March 24, 1894, defeating Albany College 44 -- 3 under head coach Cal Young. Cal Young left after that first game and J.A. Church took over the coaching position in the fall for the rest of the season. Oregon finished the season with two additional losses and a tie, but went undefeated the following season, winning all four of its games under head coach Percy Benson.
In 1899, the Oregon football team left the state for the first time, playing the California Golden Bears in Berkeley, California. American football at Oregon State University started in 1893, shortly after athletics were initially authorized at the college. Athletics were banned at the school in May 1892, but when the strict school president, Benjamin Arnold, died, President John Bloss reversed the ban. Bloss 's son William started the first team, on which he served as both coach and quarterback. The team 's first game was an easy 63 - 0 defeat of the home team, Albany College.

In May 1900, Yost was hired as the football coach at Stanford University, and, after traveling home to West Virginia, he arrived in Palo Alto, California, on August 21, 1900. Yost led the 1900 Stanford team to a 7 -- 2 -- 1 record, outscoring opponents 154 to 20. The next year, in 1901, Yost was hired by Charles A. Baird as the head football coach for the Michigan Wolverines football team. On January 1, 1902, Yost 's dominating 1901 Michigan Wolverines football team played a 3 -- 1 -- 2 team from Stanford University in the inaugural "Tournament East - West football game '', now known as the Rose Bowl Game, and won by a score of 49 -- 0 after Stanford captain Ralph Fisher requested to quit with eight minutes remaining.

The 1905 season marked the first meeting between Stanford and USC. Consequently, Stanford is USC 's oldest existing rival. The Big Game between Stanford and Cal on November 11, 1905 was the first played at Stanford Field, with Stanford winning 12 -- 5. In 1906, citing concerns about the violence in American football, universities on the West Coast, led by California and Stanford, replaced the sport with rugby union. At the time, the future of American football was very much in doubt and these schools believed that rugby union would eventually be adopted nationwide. Other schools that followed suit and also made the switch included Nevada, St. Mary 's, Santa Clara, and USC (in 1911). However, due to the perception that West Coast football was inferior to the game played on the East Coast anyway, East Coast and Midwest teams shrugged off the loss of the teams and continued playing American football. With no nationwide movement, the available pool of rugby teams to play remained small. The schools scheduled games against local club teams and reached out to rugby union powers in Australia, New Zealand, and especially, due to its proximity, Canada. The annual Big Game between Stanford and California continued as rugby, with the winner invited by the British Columbia Rugby Union to a tournament in Vancouver over the Christmas holidays, the winner of which received the Cooper Keith Trophy. During 12 seasons of playing rugby union, Stanford was remarkably successful: the team had three undefeated seasons, three one - loss seasons, and an overall record of 94 wins, 20 losses, and 3 ties for a winning percentage of .816 (computed as shown below). However, after a few years, the school began to feel the isolation of its newly adopted sport, which was not spreading as many had hoped. Students and alumni began to clamor for a return to American football to allow wider intercollegiate competition. The pressure at rival California was stronger (especially as the school had not been as successful in the Big Game as they had hoped), and in 1915 California returned to American football.
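The .816 winning percentage cited above is consistent with the usual convention of counting each tie as half a win (an assumption on our part; the article does not state the formula used):

\frac{94 + \tfrac{1}{2} \times 3}{94 + 20 + 3} = \frac{95.5}{117} \approx .816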
As reasons for the change back to American football, California cited the rule changes that had been made to the American game, the overwhelming desire of students and supporters to play American football, interest in playing other East Coast and Midwest schools, and a patriotic desire to play an "American '' game. California 's return to American football increased the pressure on Stanford to also change back in order to maintain the rivalry. Stanford played its 1915, 1916, and 1917 "Big Games '' as rugby union against Santa Clara, while California 's football "Big Game '' in those years was against Washington, but both schools desired to restore the old traditions. The onset of American involvement in World War I gave Stanford an out: in 1918, the Stanford campus was designated as the Students ' Army Training Corps headquarters for all of California, Nevada, and Utah, and the commanding officer, Sam M. Parker, decreed that American football was the appropriate athletic activity to train soldiers, so rugby union was dropped.

The University of Colorado began playing American football in 1890. Colorado found much success in its early years, winning eight Colorado Football Association Championships (1894 -- 97, 1901 -- 08). The following was taken from the Silver & Gold newspaper of December 16, 1898. It was a recollection of the birth of Colorado football written by one of CU 's original gridders, John C. Nixon, also the school 's second captain. It appears here in its original form: At the beginning of the first semester in the fall of ' 90 the boys rooming at the dormitory on the campus of the U. of C. being afflicted with a super-abundance of penned up energy, or perhaps having recently drifted from under the parental wing and delighting in their newly found freedom, decided among other wild schemes, to form an athletic association. Messrs Carney, Whittaker, Layton and others, who at that time constituted a majority of the male population of the University, called a meeting of the campus boys in the old medical building. Nixon was elected president and Holden secretary of the association. It was voted that the officers constitute a committee to provide uniform suits in which to play what was called "association football ''. Suits of flannel were ultimately procured and paid for assessments on the members of the association and generous contributions from members of the faculty. (...) The Athletic Association should now invigorate its base - ball and place it at par with its football team; and it certainly has the material with which to do it. The U of C should henceforth lead the state and possibly the west in athletic sports. (...) The style of football playing has altered considerably; by the old rules, all men in front of the runner with the ball, were offside, consequently we could not send backs through and break the line ahead of the ball as is done at present. The notorious V was then in vogue, which gave a heavy team too much advantage. The mass plays being now barred, skill on the football field is more in demand than mere weight and strength.

In 1909, the Rocky Mountain Athletic Conference was founded, featuring four members: Colorado, Colorado College, Colorado School of Mines, and Colorado Agricultural College. The University of Denver and the University of Utah joined the RMAC in 1910. For its first thirty years, the RMAC was considered a major conference equivalent to today 's Division I, before seven larger members left and formed the Mountain States Conference (also called the Skyline Conference).
College football increased in popularity through the remainder of the 19th and early 20th century. It also became increasingly violent. Between 1890 and 1905, 330 college athletes died as a direct result of injuries sustained on the football field. These deaths could be attributed to the mass formations and gang tackling that characterized the sport in its early years. The 1894 Harvard - Yale game, known as the "Hampden Park Blood Bath '', resulted in crippling injuries for four players; the contest was suspended until 1897. The annual Army - Navy game was suspended from 1894 to 1898 for similar reasons. One of the major problems was the popularity of mass - formations like the flying wedge, in which a large number of offensive players charged as a unit against a similarly arranged defense. The resultant collisions often led to serious injuries and sometimes even death. Georgia fullback Richard Von Albade Gammon notably died on the field from concussions received against Virginia in 1897, causing Georgia, Georgia Tech, and Mercer to suspend their football programs.

The situation came to a head in 1905, when there were 19 fatalities nationwide. President Theodore Roosevelt reportedly threatened to shut down the game if drastic changes were not made. However, the threat by Roosevelt to eliminate football is disputed by sports historians. What is absolutely certain is that on October 9, 1905, Roosevelt held a meeting of football representatives from Harvard, Yale, and Princeton. Though he lectured on eliminating and reducing injuries, he never threatened to ban football. He also lacked the authority to abolish football and was, in fact, a fan of the sport and wanted to preserve it. The President 's sons were also playing football at the college and secondary levels at the time. Meanwhile, John H. Outland held an experimental game in Wichita, Kansas that reduced the number of scrimmage plays to earn a first down from four to three in an attempt to reduce injuries. The Los Angeles Times reported an increase in punts and considered the game much safer than regular play, but said that the new rule was not "conducive to the sport ''. Later in 1905, President Roosevelt organized a meeting among thirteen school leaders at the White House to find solutions to make the sport safer for the athletes. Because the college officials could not agree upon a change in rules, it was decided over the course of several subsequent meetings that an external governing body should be responsible. Finally, on December 28, 1905, 62 schools met in New York City to discuss rule changes to make the game safer. As a result of this meeting, the Intercollegiate Athletic Association of the United States (IAAUS) was formed in 1906. The IAAUS was the original rule - making body of college football, but would go on to sponsor championships in other sports. The IAAUS would get its current name of National Collegiate Athletic Association (NCAA) in 1910, and still sets rules governing the sport.

The rules committee considered widening the playing field to "open up '' the game, but Harvard Stadium (the first large permanent football stadium) had recently been built at great expense; it would be rendered useless by a wider field. The rules committee legalized the forward pass instead. Though it was underutilized for years, this proved to be one of the most important rule changes in the establishment of the modern game.
Another rule change banned "mass momentum '' plays (many of which, like the infamous "flying wedge '', were sometimes literally deadly). As a result of the 1905 -- 1906 reforms, mass formation plays became illegal and forward passes legal. Bradbury Robinson, playing for visionary coach Eddie Cochems at Saint Louis University, threw the first legal pass in a September 5, 1906, game against Carroll College at Waukesha. Other important changes, formally adopted in 1910, were the requirements that at least seven offensive players be on the line of scrimmage at the time of the snap, that there be no pushing or pulling, and that interlocking interference (arms linked or hands on belts and uniforms) was not allowed. These changes greatly reduced the potential for collision injuries. Several coaches emerged who took advantage of these sweeping changes. Amos Alonzo Stagg introduced such innovations as the huddle, the tackling dummy, and the pre-snap shift. Other coaches, such as Pop Warner and Knute Rockne, introduced new strategies that still remain part of the game.

Besides these coaching innovations, several rules changes during the first third of the 20th century had a profound impact on the game, mostly in opening up the passing game. In 1914, the first roughing - the - passer penalty was implemented. In 1918, the rules on eligible receivers were loosened to allow eligible players to catch the ball anywhere on the field -- previously, strict rules were in place allowing passes to only certain areas of the field. Scoring rules also changed during this time: field goals were lowered to three points in 1909 and touchdowns raised to six points in 1912. Star players that emerged in the early 20th century include Jim Thorpe, Red Grange, and Bronko Nagurski; these three made the transition to the fledgling NFL and helped turn it into a successful league. Sportswriter Grantland Rice helped popularize the sport with his poetic descriptions of games and colorful nicknames for the game 's biggest players, including Notre Dame 's "Four Horsemen '' backfield and Fordham University 's linemen, known as the "Seven Blocks of Granite ''.

In 1907, at Champaign, Illinois, Chicago and Illinois played in the first game to have a halftime show featuring a marching band. Chicago won 42 -- 6. On November 25, 1911, Kansas and Missouri played the first homecoming football game. The game was "broadcast '' play - by - play over telegraph to at least 1,000 fans in Lawrence, Kansas. It ended in a 3 -- 3 tie. The game between West Virginia and Pittsburgh on October 8, 1921, saw the first live radio broadcast of a college football game when Harold W. Arlin announced that year 's Backyard Brawl played at Forbes Field on KDKA. Pitt won 21 -- 13. On October 28, 1922, Princeton and Chicago played the first game to be nationally broadcast on radio. Princeton won 21 -- 18 in a hotly contested game which had Princeton dubbed the "Team of Destiny. ''

One publication claims "The first scouting done in the South was in 1905, when Dan McGugin and Captain Innis Brown, of Vanderbilt went to Atlanta to see Sewanee play Georgia Tech. '' Fuzzy Woodruff claims Davidson was the first in the South to throw a legal forward pass in 1906. The following season saw Vanderbilt execute a double pass play to set up the touchdown that beat Sewanee in a meeting of unbeatens for the SIAA championship. Grantland Rice cited this event as the greatest thrill he ever witnessed in his years of watching sports.
Vanderbilt coach Dan McGugin, in Spalding 's Football Guide 's summation of the season in the SIAA, wrote "The standing. First, Vanderbilt; second, Sewanee, a mighty good second; '' and that Aubrey Lanier "came near winning the Vanderbilt game by his brilliant dashes after receiving punts. '' Bob Blake threw the final pass to center Stein Stone, who caught it near the goal amongst defenders. Honus Craig then ran in the winning touchdown.

Utilizing the "jump shift '' offense, John Heisman 's Georgia Tech Golden Tornado won 222 to 0 over Cumberland on October 7, 1916, at Grant Field in the most lopsided victory in college football history. Tech went on a 33 - game winning streak during this period. The 1917 team was the first national champion from the South, led by a powerful backfield. It also had the first two players from the Deep South selected first - team All - American in Walker Carpenter and Everett Strupper. Pop Warner 's Pittsburgh Panthers were also undefeated, but declined a challenge by Heisman to a game. When Heisman left Tech after 1919, his shift was still employed by protege William Alexander.

In 1906, Vanderbilt defeated Carlisle 4 to 0, the result of a Bob Blake field goal. In 1907, Vanderbilt fought Navy to a 6 to 6 tie. In 1910, Vanderbilt held defending national champion Yale to a scoreless tie. Helping Georgia Tech 's claim to a title in 1917, the Auburn Tigers held the undefeated, Chic Harley - led Big Ten champion Ohio State to a scoreless tie the week before Georgia Tech beat the Tigers 68 to 7. The next season, with many players gone due to World War I, a game was finally scheduled at Forbes Field with Pittsburgh. The Panthers, led by freshman Tom Davies, defeated Georgia Tech 32 to 0. Tech center Bum Day was the first player on a Southern team ever selected first - team All - American by Walter Camp. 1917 saw the rise of another Southern team in Centre of Danville, Kentucky. In 1921, Bo McMillin - led Centre upset defending national champion Harvard 6 to 0 in what is widely considered one of the greatest upsets in college football history. The next year Vanderbilt fought Michigan to a scoreless tie at the inaugural game at Dudley Field (now Vanderbilt Stadium), the first stadium in the South made exclusively for college football. Michigan coach Fielding Yost and Vanderbilt coach Dan McGugin were brothers - in - law, and the latter was the protege of the former. The game featured the season 's two best defenses and included a goal line stand by Vanderbilt to preserve the tie. Its result was "a great surprise to the sporting world. '' Commodore fans celebrated by throwing some 3,000 seat cushions onto the field. The game features prominently in Vanderbilt 's history. That same year, Alabama upset Penn 9 to 7.

Vanderbilt 's line coach then was Wallace Wade, who in 1925 coached Alabama to the South 's first Rose Bowl victory. This game is commonly referred to as "the game that changed the south. '' Wade followed up the next season with an undefeated record and Rose Bowl tie. Georgia 's 1927 "dream and wonder team '' defeated Yale for the first time. Georgia Tech, led by Heisman protege William Alexander, gave the dream and wonder team its only loss, and the next year was national and Rose Bowl champion. That Rose Bowl included Roy Riegels ' wrong - way run. On October 12, 1929, Yale lost to Georgia in Sanford Stadium in its first trip to the South. Wade 's Alabama again won a national championship and Rose Bowl in 1930.
Glenn "Pop '' Warner coached at several schools throughout his career, including the University of Georgia, Cornell University, University of Pittsburgh, Stanford University, and Temple University. One of his most famous stints was at the Carlisle Indian Industrial School, where he coached Jim Thorpe, who went on to become the first president of the National Football League, an Olympic Gold Medalist, and is widely considered one of the best overall athletes in history. Warner wrote one of the first important books of football strategy, Football for Coaches and Players, published in 1927. Though the shift was invented by Stagg, Warner 's single wing and double wing formations greatly improved upon it; for almost 40 years, these were among the most important formations in football. As part of his single and double wing formations, Warner was one of the first coaches to effectively utilize the forward pass. Among his other innovations are modern blocking schemes, the three - point stance, and the reverse play. The youth football league, Pop Warner Little Scholars, was named in his honor. Knute Rockne rose to prominence in 1913 as an end for the University of Notre Dame, then a largely unknown Midwestern Catholic school. When Army scheduled Notre Dame as a warm - up game, they thought little of the small school. Rockne and quarterback Gus Dorais made innovative use of the forward pass, still at that point a relatively unused weapon, to defeat Army 35 -- 13 and helped establish the school as a national power. Rockne returned to coach the team in 1918, and devised the powerful Notre Dame Box offense, based on Warner 's single wing. He is credited with being the first major coach to emphasize offense over defense. Rockne is also credited with popularizing and perfecting the forward pass, a seldom used play at the time. The 1924 team featured the Four Horsemen backfield. In 1927, his complex shifts led directly to a rule change whereby all offensive players had to stop for a full second before the ball could be snapped. Rather than simply a regional team, Rockne 's "Fighting Irish '' became famous for barnstorming and played any team at any location. It was during Rockne 's tenure that the annual Notre Dame - University of Southern California rivalry began. He led his team to an impressive 105 -- 12 -- 5 record before his premature death in a plane crash in 1931. He was so famous at that point that his funeral was broadcast nationally on radio. In the early 1930s, the college game continued to grow, particularly in the South, bolstered by fierce rivalries such as the "South 's Oldest Rivalry '', between Virginia and North Carolina and the "Deep South 's Oldest Rivalry '', between Georgia and Auburn. Although before the mid-1920s most national powers came from the Northeast or the Midwest, the trend changed when several teams from the South and the West Coast achieved national success. Wallace William Wade 's 1925 Alabama team won the 1926 Rose Bowl after receiving its first national title and William Alexander 's 1928 Georgia Tech team defeated California in the 1929 Rose Bowl. College football quickly became the most popular spectator sport in the South. Several major modern college football conferences rose to prominence during this time period. The Southwest Athletic Conference had been founded in 1915. Consisting mostly of schools from Texas, the conference saw back - to - back national champions with Texas Christian University (TCU) in 1938 and Texas A&M in 1939. 
The Pacific Coast Conference (PCC), a precursor to the Pac - 12 Conference (Pac - 12), had its own back - to - back champion in the University of Southern California, which was awarded the title in 1931 and 1932. The Southeastern Conference (SEC) formed in 1932 and consisted mostly of schools in the Deep South. As in previous decades, the Big Ten continued to dominate in the 1930s and 1940s, with Minnesota winning 5 titles between 1934 and 1941, and Michigan (1933, 1947, and 1948) and Ohio State (1942) also winning titles. As it grew beyond its regional affiliations in the 1930s, college football garnered increased national attention. Four new bowl games were created: the Orange Bowl, Sugar Bowl, and Sun Bowl in 1935, and the Cotton Bowl in 1937. In lieu of an actual national championship, these bowl games, along with the earlier Rose Bowl, provided a way to match up teams from distant regions of the country that did not otherwise play. In 1936, the Associated Press began its weekly poll of prominent sports writers, ranking all of the nation 's college football teams. Since there was no national championship game, the final version of the AP poll was used to determine who was crowned the National Champion of college football. The 1930s saw growth in the passing game. Though some coaches, such as General Robert Neyland at Tennessee, continued to eschew its use, several rules changes to the game had a profound effect on teams ' ability to throw the ball. In 1934, the rules committee removed two major penalties -- a loss of five yards for a second incomplete pass in any series of downs and a loss of possession for an incomplete pass in the end zone -- and shrank the circumference of the ball, making it easier to grip and throw. Players who became famous for taking advantage of the easier passing game included Alabama end Don Hutson and TCU passer "Slingin '' Sammy Baugh. In 1935, New York City 's Downtown Athletic Club awarded the first Heisman Trophy to University of Chicago halfback Jay Berwanger, who was also the first ever NFL Draft pick in 1936. The trophy was designed by sculptor Frank Eliscu and modeled after New York University player Ed Smith. The trophy recognizes the nation 's "most outstanding '' college football player and has become one of the most coveted awards in all of American sports. During World War II, college football players enlisted in the armed forces, some playing in Europe during the war. As most of these players had eligibility left on their college careers, some of them returned to college at West Point, bringing Army back - to - back national titles in 1944 and 1945 under coach Red Blaik. Doc Blanchard (known as "Mr. Inside '') and Glenn Davis (known as "Mr. Outside '') both won the Heisman Trophy, in 1945 and 1946 respectively. On the coaching staff of those 1944 -- 1946 Army teams was future Pro Football Hall of Fame coach Vince Lombardi. The 1950s saw the rise of yet more dynasties and power programs. Oklahoma, under coach Bud Wilkinson, won three national titles (1950, 1955, 1956) and all ten Big Eight Conference championships in the decade while building a record 47 - game winning streak. Woody Hayes led Ohio State to two national titles, in 1954 and 1957, and won three Big Ten titles. The Michigan State Spartans were known as the "football factory '' during the 1950s, when coaches Clarence Munn and Duffy Daugherty led the Spartans to two national titles and two Big Ten titles after joining the Big Ten athletically in 1953.
Wilkinson and Hayes, along with Robert Neyland of Tennessee, oversaw a revival of the running game in the 1950s. Passing numbers dropped from an average of 18.9 attempts in 1951 to 13.6 attempts in 1955, while teams averaged just shy of 50 running plays per game. Nine out of ten Heisman Trophy winners in the 1950s were runners. Notre Dame, one of the biggest passing teams of the decade, saw a substantial decline in success; the 1950s were the only decade between 1920 and 1990 in which the team did not win at least a share of the national title. Paul Hornung, Notre Dame quarterback, did, however, win the Heisman in 1956, becoming the only player from a losing team ever to do so. Following the enormous success of the 1958 NFL Championship Game, college football no longer enjoyed the same popularity as the NFL, at least on a national level. While both games benefited from the advent of television, since the late 1950s, the NFL has become a nationally popular sport while college football has maintained strong regional ties. As professional football became a national television phenomenon, college football did as well. In the 1950s, Notre Dame, which had a large national following, formed its own network to broadcast its games, but by and large the sport still retained a mostly regional following. In 1952, the NCAA claimed all television broadcasting rights for the games of its member institutions, and it alone negotiated television rights. This situation continued until 1984, when several schools brought a suit under the Sherman Antitrust Act; the Supreme Court ruled against the NCAA, and schools are now free to negotiate their own television deals. ABC Sports began broadcasting a national Game of the Week in 1966, bringing key matchups and rivalries to a national audience for the first time. New formations and play sets continued to be developed. Emory Bellard, an assistant coach under Darrell Royal at the University of Texas, developed a three - back option style offense known as the wishbone. The wishbone is a run - heavy offense that depends on the quarterback making last - second decisions about when and to whom to hand or pitch the ball. Royal went on to teach the offense to other coaches, including Bear Bryant at Alabama, Chuck Fairbanks at Oklahoma and Pepper Rodgers at UCLA, all of whom adapted and developed it to their own tastes. The strategic opposite of the wishbone is the spread offense, developed by professional and college coaches throughout the 1960s and 1970s. Though some schools play a run - based version of the spread, its most common use is as a passing offense designed to "spread '' the field both horizontally and vertically. Some teams have managed to adapt with the times to keep winning consistently. In the rankings of the most victorious programs, Michigan, Texas, and Notre Dame are ranked first, second, and third in total wins. In 1940, for the highest level of college football, there were only five bowl games (Rose, Orange, Sugar, Sun, and Cotton). By 1950, three more had joined that number and in 1970, there were still only eight major college bowl games. The number grew to eleven in 1976. At the birth of cable television and cable sports networks like ESPN, there were fifteen bowls in 1980. With more national venues and increased available revenue, the bowls saw an explosive growth throughout the 1980s and 1990s. In the thirty years from 1950 to 1980, seven bowl games were added to the schedule. From 1980 to 2008, an additional 20 bowl games were added to the schedule.
Some have criticized this growth, claiming that the increased number of games has diluted the significance of playing in a bowl game. Yet others have countered that the increased number of games has increased exposure and revenue for a greater number of schools, and see it as a positive development. With the growth of bowl games, it became difficult to determine a national champion in a fair and equitable manner. As conferences became contractually bound to certain bowl games (a situation known as a tie - in), match - ups that guaranteed a consensus national champion became increasingly rare. In 1992, seven conferences and independent Notre Dame formed the Bowl Coalition, which attempted to arrange an annual No. 1 versus No. 2 matchup based on the final AP poll standings. The Coalition lasted for three years; however, several scheduling issues prevented much success; tie - ins still took precedence in several cases. For example, the Big Eight and SEC champions could never meet, since they were contractually bound to different bowl games. The coalition also excluded the Rose Bowl, arguably the most prestigious game in the nation, and two major conferences -- the Pac - 10 and Big Ten -- meaning that it had limited success. In 1995, the Coalition was replaced by the Bowl Alliance, which reduced the number of bowl games to host a national championship game to three -- the Fiesta, Sugar, and Orange Bowls -- and the participating conferences to five -- the ACC, SEC, Southwest, Big Eight, and Big East. It was agreed that the No. 1 and No. 2 ranked teams gave up their prior bowl tie - ins and were guaranteed to meet in the national championship game, which rotated between the three participating bowls. The system still did not include the Big Ten, Pac - 10, or the Rose Bowl, and thus still lacked the legitimacy of a true national championship. However, one positive side effect is that if there were three teams at the end of the season vying for a national title, but one of them was a Pac - 10 / Big Ten team bound to the Rose Bowl, then there would be no difficulty in deciding which teams to place in the Bowl Alliance "national championship '' bowl; if the Pac - 10 / Big Ten team won the Rose Bowl and finished with the same record as whichever team won the other bowl game, they could have a share of the national title. This happened in the final year of the Bowl Alliance, with Michigan winning the 1998 Rose Bowl and Nebraska winning the 1998 Orange Bowl. Without the Pac - 10 / Big Ten team bound to a bowl game, it would be difficult to decide which two teams should play for the national title. In 1998, a new system was put into place called the Bowl Championship Series. For the first time, it included all major conferences (ACC, Big East, Big 12, Big Ten, Pac - 10, and SEC) and four major bowl games (Rose, Orange, Sugar and Fiesta). The champions of these six conferences, along with two "at - large '' selections, were invited to play in the four bowl games. Each year, one of the four bowl games served as a national championship game. Also, a complex system of human polls, computer rankings, and strength of schedule calculations was instituted to rank schools. Based on this ranking system, the No. 1 and No. 2 teams met each year in the national championship game. Traditional tie - ins were maintained for schools and bowls not part of the national championship. For example, in years when not a part of the national championship, the Rose Bowl still hosted the Big Ten and Pac - 10 champions. 
The system continued to change, as the formula for ranking teams was tweaked from year to year. At - large teams could be chosen from any of the Division I conferences, though only one selection -- Utah in 2005 -- came from a non-BCS affiliated conference. Starting with the 2006 season, a fifth game -- simply called the BCS National Championship Game -- was added to the schedule, to be played at the site of one of the four BCS bowl games on a rotating basis, one week after the regular bowl game. This opened up the BCS to two additional at - large teams. Also, rules were changed to add the champions of five additional conferences (Conference USA (C - USA), the Mid-American Conference (MAC), the Mountain West Conference (MW), the Sun Belt Conference and the Western Athletic Conference (WAC)), provided that said champion ranked in the top twelve in the final BCS rankings, or was within the top 16 of the BCS rankings and ranked higher than the champion of at least one of the "BCS conferences '' (also known as "AQ '' conferences, for Automatic Qualifying). Several times since this rule change was implemented, schools from non-AQ conferences have played in BCS bowl games. In 2009, Boise State played TCU in the Fiesta Bowl, the first time two schools from non-BCS conferences played each other in a BCS bowl game. The last team from the non-AQ ranks to reach a BCS bowl game in the BCS era was Northern Illinois in 2012, which played in (and lost) the 2013 Orange Bowl. The longtime resistance to a playoff system at the FBS level finally ended with the creation of the College Football Playoff (CFP) beginning with the 2014 season. The CFP is a Plus - One system, a concept that became popular as a BCS alternative following controversies in 2003 and 2004. The CFP is a four - team tournament whose participants are chosen and seeded by a 13 - member selection committee. The semifinals are hosted by two of a group of six traditional bowl games often called the "New Year 's Six '', with semifinal hosting rotating annually among three pairs of games in the following order: Rose / Sugar, Orange / Cotton, and Fiesta / Peach. The two semifinal winners then advance to the College Football Playoff National Championship, whose host is determined by open bidding several years in advance. The establishment of the CFP followed a tumultuous period of conference realignment in Division I. The WAC, after seeing all but two of its football members leave, dropped football after the 2012 season. The Big East split into two leagues in 2013; the schools that did not play FBS football reorganized as a new non-football Big East Conference, while the FBS member schools that remained in the original structure joined with several new members and became the American Athletic Conference. The American retained the Big East 's automatic BCS bowl bid for the 2013 season, but lost this status in the CFP era. The 10 FBS conferences are formally and popularly divided into two groups: Although rules for the high school, college, and NFL games are generally consistent, there are several minor differences. The NCAA Football Rules Committee determines the playing rules for Division I (both Bowl and Championship Subdivisions), II, and III games (the National Association of Intercollegiate Athletics (NAIA) is a separate organization, but uses the NCAA rules). College teams mostly play other similarly sized schools through the NCAA 's divisional system. 
Division I generally consists of the major collegiate athletic powers with larger budgets, more elaborate facilities, and (with the exception of a few conferences such as the Pioneer Football League) more athletic scholarships. Division II primarily consists of smaller public and private institutions that offer fewer scholarships than those in Division I. Division III institutions also field teams, but do not offer any scholarships. Football teams in Division I are further divided into the Bowl Subdivision (consisting of the largest programs) and the Championship Subdivision. The Bowl Subdivision has historically not used an organized tournament to determine its champion, and instead teams compete in post-season bowl games. That changed with the debut of the four - team College Football Playoff at the end of the 2014 season. Teams in each of these four divisions are further divided into various regional conferences. Several organizations also operate college football programs outside the jurisdiction of the NCAA. A college that fields a team in the NCAA is not restricted from fielding teams in club or sprint football, and several colleges field two teams, a varsity (NCAA) squad and a club or sprint squad (no schools, as of 2015, field both club and sprint teams at the same time). Starting in the 2014 season, four Division I FBS teams are selected at the end of the regular season to compete in a playoff for the FBS national championship. The inaugural champion was Ohio State University. The College Football Playoff replaced the Bowl Championship Series, which had been used as the selection method to determine the national championship game participants since the 1998 season. At the Division I FCS level, the teams participate in a 24 - team playoff (most recently expanded from 20 teams in 2013) to determine the national championship. Under the current playoff structure, the top eight teams are all seeded, and receive a bye week in the first round. The highest seed receives automatic home field advantage. Starting in 2013, non-seeded teams can only host a playoff game if both teams involved are unseeded; in such a matchup, the schools must bid for the right to host the game. Selection for the playoffs is determined by a selection committee, although usually a team must have a 7 - 4 record to even be considered. Losses to an FBS team count against a team 's playoff eligibility, while wins against a Division II opponent do not count towards playoff consideration. Thus, only Division I wins (whether FBS, FCS, or FCS non-scholarship) are considered for playoff selection. The Division I National Championship game is held in Frisco, Texas. Division II and Division III of the NCAA also hold their own respective playoffs, crowning national champions at the end of the season. The National Association of Intercollegiate Athletics also holds a playoff. Unlike other college football divisions and most other sports -- collegiate or professional -- the Football Bowl Subdivision, formerly known as Division I-A college football, has historically not employed a playoff system to determine a champion. Instead, it has a series of postseason "bowl games ''. The annual National Champion in the Football Bowl Subdivision is instead traditionally determined by a vote of sports writers and other non-players.
This system has been challenged often, beginning with an NCAA committee proposal in 1979 to have a four - team playoff following the bowl games. However, little headway was made in instituting a playoff tournament until 2014, given the entrenched vested economic interests in the various bowls. Although the NCAA publishes lists of claimed FBS - level national champions in its official publications, it has never recognized an official FBS national championship; this policy continues even after the establishment of the College Football Playoff (which is not directly run by the NCAA) in 2014. As a result, the official Division I National Champion is the winner of the Football Championship Subdivision, as it is the highest level of football with an NCAA - administered championship tournament. The first bowl game was the 1902 Rose Bowl, played between Michigan and Stanford; Michigan won 49 - 0, in a game that ended early, with 8 minutes still on the clock, after Stanford requested an end to play and Michigan agreed. The game was so lopsided that it was not held again until 1916, when the Tournament of Roses decided to reattempt the postseason game. The term "bowl '' originates from the shape of the Rose Bowl stadium in Pasadena, California, which was built in 1923 and resembled the Yale Bowl, built in 1915. This is where the name came into use, as it became known as the Rose Bowl Game. Other games came along and used the term "bowl '', whether the stadium was shaped like a bowl or not. At the Division I FBS level, teams must earn the right to be bowl eligible by winning at least 6 games during the season (teams that play 13 games in a season, which is allowed for Hawaii and any of its home opponents, must win 7 games). They are then invited to a bowl game based on their conference ranking and the tie - ins that the conference has to each bowl game. For the 2009 season, there were 34 bowl games, so 68 of the 120 Division I FBS teams were invited to play in a bowl. These games are played from mid-December to early January, and most of the later bowl games are typically considered more prestigious. After the Bowl Championship Series, additional all - star bowl games round out the post-season schedule through the beginning of February. Partly as a compromise between bowl game and playoff supporters, the Bowl Championship Series (BCS) was created in 1998 in order to provide a definitive National Championship game for college football. The series included the four most prominent bowl games (Rose Bowl, Orange Bowl, Sugar Bowl, Fiesta Bowl), while the National Championship game rotated each year between one of these venues. The BCS system was slightly adjusted in 2006, with a fifth game added to the series, called the National Championship Game. This allowed the four other BCS bowls to use their normal selection process to select the teams in their games while the top two teams in the BCS rankings would play in the new National Championship Game. The BCS used a complicated, and often controversial, computer ranking system to rank all Division I FBS teams, and the top two teams at the end of the season played for the National Championship. This ranking system, which factored in newspaper polls, online polls, coaches ' polls, strength of schedule, and various other factors of a team 's season, led to much dispute over whether the two best teams in the country were being selected to play in the National Championship Game.
The BCS ended after the 2013 season and, since the 2014 season, the FBS national champion has been determined by a four - team tournament known as the College Football Playoff (CFP). A selection committee of college football experts decides the participating teams. Six major bowl games (the Rose, Sugar, Cotton, Orange, Peach, and Fiesta) rotate on a three - year cycle as semifinal games, with the winners advancing to the College Football Playoff National Championship. This arrangement is contractually locked in until the 2026 season. College football is a controversial institution within American higher education, where the amount of money involved -- what people will pay for the entertainment provided -- is a corrupting factor that universities are usually ill - equipped to deal with. According to William E. Kirwan, chancellor of the University of Maryland System and co-director of the Knight Commission on Intercollegiate Athletics, "We 've reached a point where big - time intercollegiate athletics is undermining the integrity of our institutions, diverting presidents and institutions from their main purpose. '' Football coaches often make more than the presidents of the universities that employ them. Athletes are alleged to receive preferential treatment both in academics and when they run afoul of the law. Although in theory football is an extra-curricular activity engaged in as a sideline by students, it is widely believed to turn a substantial profit, from which the athletes receive no direct benefit. There has been serious discussion about making student - athletes university employees to allow them to be paid. In reality, however, the majority of major collegiate football programs operated at a financial loss in 2014. Canadian football, which parallels American football, is played by collegiate teams in Canada under the auspices of U Sports. (Unlike in the United States, no junior colleges play football in Canada, and the sanctioning body for junior college athletics in Canada, CCAA, does not sanction the sport.) However, amateur football outside of colleges is played in Canada, such as in the Canadian Junior Football League. Organized competition in American football also exists at the collegiate level in Mexico (ONEFA), the UK (British Universities American Football League), Japan (Japan American Football Association, Koshien Bowl), and South Korea (Korea American Football Association).
where is most of the work of a cell carried out
Cell biology - Wikipedia Cell biology (also cytology or cytobiology, from the Greek κυτος, kytos, "vessel '') is a branch of biology that studies the different structures and functions of the cell and focuses mainly on the idea of the cell as the basic unit of life. Cell biology explains the structure and organization of the organelles cells contain, their physiological properties, metabolic processes, signaling pathways, life cycle, and interactions with their environment. This is done both on a microscopic and molecular level, as it encompasses prokaryotic cells and eukaryotic cells. Knowing the components of cells and how cells work is fundamental to all biological sciences; it is also essential for biomedical research into cancer and other diseases. Research in cell biology is closely related to genetics, biochemistry, molecular biology, immunology, and developmental biology. The study of the cell is done on a molecular level; most of the processes within the cell involve a mixture of small organic molecules, inorganic ions, hormones, and water. Approximately 75 -- 85 % of the cell 's volume is water, which is an indispensable solvent as a result of its polarity and structure. These molecules within the cell, which operate as substrates, provide a suitable environment for the cell to carry out metabolic reactions and signalling. Cell shape varies among the different types of organisms, which are classified into two categories: eukaryotes and prokaryotes. In the case of eukaryotic cells -- which are made up of animal, plant, fungi, and protozoa cells -- the shapes are generally round and spherical, while for prokaryotic cells -- which are composed of bacteria and archaea -- the shapes are spherical (cocci), rods (bacillus), curved (vibrio), and spirals (spirochetes). Cell biology focuses more on the study of eukaryotic cells and their signalling pathways than on prokaryotes, which are covered under microbiology. The main constituents of the general molecular composition of the cell include proteins and lipids, which are either free - flowing or membrane - bound, along with different internal compartments known as organelles. This environment of the cell is made up of hydrophilic and hydrophobic regions which allows for the exchange of the above - mentioned molecules and ions. The hydrophilic regions of the cell are mainly on the inside and outside of the cell, while the hydrophobic regions are within the phospholipid bilayer of the cell membrane. The cell membrane consists of lipids and proteins, which accounts for its hydrophobicity as a result of being non-polar substances. Therefore, in order for these molecules to participate in reactions within the cell, they need to be able to cross this membrane layer to get into the cell. They accomplish this process of gaining access to the cell via osmotic pressure, diffusion, concentration gradients, and membrane channels. Inside of the cell are extensive internal sub-cellular membrane - bounded compartments called organelles. The growth process of the cell does not refer to the size of the cell, but instead to the number of cells present in the organism at a given time. Cell growth pertains to the increase in the number of cells present in an organism as it grows and develops; as the organism gets larger, so too does the number of cells present. Cells are the foundation of all organisms; they are the fundamental unit of life.
The growth and development of the cell are essential for the maintenance of the host and the survival of the organism. For this process the cell goes through the steps of the cell cycle and development, which involve cell growth, DNA replication, cell division, regeneration, specialization, and cell death. The cell cycle is divided into four distinct phases: G1, S, G2, and M. The G phases -- the cell growth phases -- make up approximately 95 % of the cycle. The proliferation of cells is instigated by progenitors; the cells then differentiate to become specialized, and specialized cells of the same type aggregate to form tissues, then organs and ultimately systems. The G phases along with the S phase -- DNA replication, damage and repair -- are considered to be the interphase portion of the cycle, while the M phase (mitosis and cytokinesis) is the cell division portion of the cycle. The cell cycle is regulated by a series of signalling factors and complexes such as CDKs, kinases, and p53, to name a few. When the cell has completed its growth process, if it is found to be damaged or altered, it undergoes cell death, either by apoptosis or necrosis, to eliminate the threat it poses to the organism 's survival. Cells may be observed under the microscope, using several different techniques; these include optical microscopy, transmission electron microscopy, scanning electron microscopy, fluorescence microscopy, correlative light - electron microscopy, and confocal microscopy. There are several different methods used in the study of cells, including the purification of cells and their parts.
who plays quarks mother on deep space nine
Cecily Adams - Wikipedia Cecily April Adams (February 6, 1958 -- March 3, 2004) was an American actress, casting director, and lyricist. Adams was born in Jamaica, Queens, New York, the daughter of comic actor Don Adams and singer Adelaide Efantis. Her siblings included her brother Sean and her sisters Carolyn Steele, Christine Adams, Cathy Metchik, Paramount TV executive Stacey Adams, and Beige Adams. She attended Beverly Hills High School, where she participated in acting, an activity she continued at the University of California at Irvine. Adams studied improvisational comedy at the Groundlings and was a member of the Acme Comedy Theater in Los Angeles. She was also an acting coach. Adams is known for portraying the recurring character of Ishka (also known as "Moogie ''), mother of the Ferengi brothers Rom and Quark, in four of the character 's five appearances in the television series Star Trek: Deep Space Nine, replacing Andrea Martin in the role. Despite playing his mother, Adams was in fact nine years younger than Armin Shimerman, who played Quark. She appeared in guest roles on a variety of television series including Just Shoot Me!, Murphy Brown, and Party of Five, and with her father in his television series Check It Out! and the television movie Get Smart Again. Adams played a lead role in the 1991 independent feature film Little Secrets. Adams was also a lyricist, and with her collaborator David Burke wrote pop songs as well as commercial jingles and television theme songs. Adams worked in casting for TV series such as Third Rock From the Sun and Eerie, Indiana, and features including American Heart (1992) and Home Room (2002). Until her death, she served as casting director for That ' 70s Show. Adams married actor and writer Jim Beaver in 1989; their daughter Madeline was born in 2001. Adams, though a non-smoker, died of lung cancer on March 3, 2004, at the age of 46, in Los Angeles, California. Her husband 's memoir, Life 's That Way, details her last few months. She was cremated and her ashes scattered at Fern Canyon in Prairie Creek Redwoods State Park, California, and at Franklin Canyon Park in Beverly Hills, California.
before mount everest what was the highest mountain in the world
List of past presumed highest mountains - Wikipedia The following is a list of mountains that have been presumed, at one time, to be the highest mountain in the world. How widely held these presumptions were is unclear; before the age of exploration, no geographer could make any plausible assessment.
how much of the indian population speaks english
List of countries by English - speaking population - Wikipedia The following is a list of English - speaking population by country, including information on both native speakers and second - language speakers. Some numbers have been calculated by Wikipedia editors by mixing data from different sources; figures not attributed to a source and a date should be treated with caution. Also note that most sources report the number of people who say that they can speak English, without verification; the actual number of English speakers could therefore be higher or lower, because some people over - or underestimate their English skills.
who was the first chief executive officer of niti ayog
NITI Aayog - Wikipedia The NITI Aayog (Hindi for Policy Commission), also National Institution for Transforming India, is a policy think tank of the Government of India, established with the aim to achieve Sustainable Development Goals and to enhance cooperative federalism by fostering the involvement of State Governments of India in the economic policy - making process using a bottom - up approach. Its initiatives include "15 year road map '', "7 - year vision, strategy and action plan '', AMRUT, Digital India, Atal Innovation Mission, Medical Education Reform, Agriculture reforms (Model Land Leasing Law, Reforms of the Agricultural Produce Marketing Committee Act, Agricultural Marketing and Farmer Friendly Reforms Index for ranking states), Indices Measuring States ' Performance in Health, Education and Water Management, Sub-Group of Chief Ministers on Rationalization of Centrally Sponsored Schemes, Sub-Group of Chief Ministers on Swachh Bharat Abhiyan, Sub-Group of Chief Ministers on Skill Development, Task Forces on Agriculture and Elimination of Poverty, and Transforming India Lecture Series. It was established in 2015 by the NDA government to replace the Planning Commission, which had followed a top - down model. The Prime Minister is the ex officio chairman. The permanent members of the governing council are all the state Chief Ministers, along with the Chief Ministers of Delhi and Puducherry, the Lieutenant Governor of Andaman and Nicobar, and a vice chairman nominated by the Prime Minister. In addition, temporary members are selected from leading universities and research institutions. These members include a chief executive officer, four ex officio members and two part - time members. The Union Government of India announced the formation of NITI Aayog on 1 January 2015, and the first meeting was held on 8 February 2015. On 29 May 2014, the Independent Evaluation Office submitted an assessment report to Prime Minister Narendra Modi with the recommendation to replace the Planning Commission with a "control commission. '' On 13 August 2014, the Union Cabinet scrapped the Planning Commission, to be replaced with a diluted version of the National Advisory Council (NAC) of India. On 1 January 2015 a Cabinet resolution was passed to replace the Planning Commission with the newly formed NITI Aayog (National Institution for Transforming India). The first meeting of NITI Aayog was chaired by Narendra Modi on 8 February 2015. Finance Minister Arun Jaitley made the following observation on the necessity of creating NITI Aayog: "The 65 year - old Planning Commission had become a redundant organisation. It was relevant in a command economy structure, but not any longer. India is a diversified country and its states are in various phases of economic development along with their own strengths and weaknesses. In this context, a ' one size fits all ' approach to economic planning is obsolete. It can not make India competitive in today 's global economy. '' On the advice of Prime Minister Narendra Modi, NITI Aayog has started a new initiative called NITI Lectures: Transforming India. The aim of this initiative is to invite globally reputed policy makers, experts, and administrators to India to share their knowledge, expertise, and experience in policy making and good governance with Indian counterparts. The initiative is a series of lectures, the first of which was delivered by the Deputy Prime Minister of Singapore, Tharman Shanmugaratnam.
He delivered a lecture on the subject "India and the Global Economy '' at Vigyan Bhavan, New Delhi. The Prime Minister spoke about the idea behind this lecture series and stated that his vision for India is rapid transformation, not gradual evolution. On 31 August 2017, NITI Aayog developed a State Statistics Handbook that consolidates key statistics across sectors for every Indian State / UT. While the State data on crucial indicators is currently fragmented across different sources, this handbook provides a one - stop database of important State statistics.
the responsibility of a roman male during the republic was to become a citizen-soldier
History of citizenship - wikipedia History of citizenship describes the changing relation between an individual and the state, commonly known as citizenship. Citizenship is generally identified not as an aspect of Eastern civilization but of Western civilization. There is a general view that citizenship in ancient times was a simpler relation than modern forms of citizenship, although this view has been challenged. While there is disagreement about when the relation of citizenship began, many thinkers point to the early city - states of ancient Greece, possibly as a reaction to the fear of slavery, although others see it as primarily a modern phenomenon dating back only a few hundred years. In Roman times, citizenship began to take on more of the character of a relationship based on law, with less political participation than in ancient Greece but a widening sphere of who was considered to be a citizen. In the Middle Ages in Europe, citizenship was primarily identified with commercial and secular life in the growing cities, and it came to be seen as membership in emerging nation - states. In modern democracies, citizenship has contrasting senses, including a liberal - individualist view emphasizing needs and entitlements and legal protections for essentially passive political beings, and a civic - republican view emphasizing political participation and seeing citizenship as an active relation with specific privileges and obligations. While citizenship has varied considerably throughout history, there are some common elements of citizenship over time. Citizenship bonds extend beyond basic kinship ties to unite people of different genetic backgrounds, that is, citizenship is more than a clan or extended kinship network. It generally describes the relation between a person and an overall political entity such as a city - state or nation and signifies membership in that body. It is often based on, or a function of, some form of military service or expectation of future military service. It is generally characterized by some form of political participation, although the extent of such participation can vary considerably from minimal duties such as voting to active service in government. And citizenship, throughout history, has often been seen as an ideal state, closely allied with freedom, an important status with legal aspects including rights, and it has sometimes been seen as a bundle of rights or a right to have rights. Last, citizenship almost always has had an element of exclusion, in the sense that citizenship derives meaning, in part, by excluding non-citizens from basic rights and privileges. While a general definition of citizenship is membership in a political society or group, citizenship as a concept is difficult to define. Thinkers as far back as Aristotle realized that there was no agreed - upon definition of citizenship. And modern thinkers, as well, agree that the history of citizenship is complex with no single definition predominating. It is hard to isolate what citizenship means without reference to other terms such as nationalism, civil society, and democracy. According to one view, citizenship as a subject of study is undergoing transformation, with increased interest while the meaning of the term continues to shift. There is agreement citizenship is culture - specific: it is a function of each political culture. 
Further, how citizenship is seen and understood depends on the viewpoint of the person making the determination, such that a person from an upper class background will have a different notion of citizenship than a person from the lower class. The relation of citizenship has not been a fixed or static relation, but has constantly changed within each society; according to one view, citizenship might "really have worked '' only at select periods, such as when the Athenian politician Solon made reforms in the early Athenian state. The history of citizenship has sometimes been presented as a stark contrast between ancient citizenship and post-medieval times. One view is that citizenship should be studied as a long and direct progression throughout Western civilization, beginning from Ancient Greece or perhaps earlier, extending to the present; for example, thinker Feliks Gross examined citizenship as the "history of the continuation of a single institution. '' Other views question whether citizenship can be examined as a linear process, growing over time, usually for the better, and see the linear progression approach as an oversimplification possibly leading to incorrect conclusions. According to this view, citizenship should not be considered as a "progressive realisation of the core meanings that are definitionally built into citizenship. '' Another caveat, offered by some thinkers, is to avoid judging citizenship from one era in terms of the standards of another era; according to this view, citizenship should be understood by examining it within the context of a city - state or nation, and trying to understand it as people from these societies understood it. The rise of citizenship has been studied as an aspect of the development of law. One view is that the beginning of citizenship dates back to the ancient Israelites. These people developed an understanding of themselves as a distinct and unique people -- different from the Egyptians or Babylonians. They had a written history, common language and one - deity - only religion sometimes described as ethical monotheism. While most peoples developed a loose identity tied to a specific geographic location, the Jewish people kept their common identity despite being physically moved to different lands, such as when they were held captive as slaves in ancient Egypt or Babylon. The Jewish Covenant has been described as a binding agreement not just with a few people or tribal leaders, but between the whole nation of Israel, including men, women and children, with the Jewish deity Yahweh. Jews, similar to other tribal groups, did not see themselves as citizens per se but they formed a strong attachment to their own group, such that people of different ethnicities were considered as part of an "outgroup ''. This is in contrast to the modern understanding of citizenship as a way to accept people of different races and ethnicities under the umbrella of being citizens of a nation. There is more widespread agreement that the first real instances of citizenship began in ancient Greece. And while there were precursors of the relation in societies before then, it emerged in readily discernible form in the Greek city - states which began to dot the shores of the Aegean Sea, the Black Sea, the Adriatic Sea, and elsewhere around the Mediterranean perhaps around the 8th century BCE.
The modern day distinction sometimes termed consent versus descent distinction -- that is, citizenship by choice versus birthright citizenship, has been traced back to ancient Greece. And thinkers such as J.G.A. Pocock have suggested that the modern - day ideal of citizenship was first articulated by the ancient Athenians and Romans, although he suggested that the "transmission '' of the sense of citizenship over two millennia was essentially a myth enshrouding western civilization. One writer suggests that despite the long history of China, there never was a political entity within China similar to the Greek polis. To the ancients, citizenship was a bond between a person and the city - state. Before Greek times, a person was generally connected to a tribe or kin - group such as an extended family, but citizenship added a layer to these ties -- a non-kinship bond between the person and the state. Historian Geoffrey Hosking in his 2005 Modern Scholar lecture course suggested that citizenship in ancient Greece arose from an appreciation for the importance of freedom. Hosking explained: It can be argued that this growth of slavery was what made Greeks particularly conscious of the value of freedom. After all, any Greek farmer might fall into debt and therefore might become a slave, at almost any time... When the Greeks fought together, they fought in order to avoid being enslaved by warfare, to avoid being defeated by those who might take them into slavery. And they also arranged their political institutions so as to remain free men. The Greek sense of the polis, in which citizenship and the rule of law prevailed, was an important strategic advantage for the Greeks during their wars with Persia. The polis was grounded in nomos, the rule of law, which meant that no man -- no matter who he might be -- was master, and all men were subject to the same rules. Any leader who set himself above the law was reckoned to be a tyrannos -- a tyrant. It was also grounded in the notion of citizenship -- the idea that every man born from the blood of the community has a share in power and responsibility. This notion that... the proper way for us to live is as citizens in communities under the rule of law... is an idea originated by the Greeks and bequeathed by them as their greatest contribution to the rest of mankind and history. It meant that Greeks were willing to live, fight, and die for their poleis... Greeks could see the benefits of having slaves, since their labor permitted slaveowners to have substantial free time, enabling participation in public life. While Greeks were spread out in many separate city - states, they had many things in common in addition to shared ideas about citizenship: the Mediterranean trading world, kinship ties, the common Greek language, a shared hostility to the so - called non-Greek - speaking or barbarian peoples, belief in the prescience of the oracle at Delphi, and later on the early Olympic Games which involved generally peaceful athletic competitions between city - states. City - states often feuded with each other; one view was that regular wars were necessary to perpetuate citizenship, since the seized goods and slaves helped make the city - state rich, and that a long peaceful period meant ruin for citizenship. An important aspect of polis citizenship was exclusivity. Polis meant both the political assembly as well as the entire society. Inequality of status was widely accepted. Citizens had a higher status than non-citizens, such as women, slaves or barbarians. 
For example, women were believed to be irrational and incapable of political participation, although a few writers, most notably Plato, disagreed. Methods used to determine whether someone could be a citizen or not could be based on wealth, identified by the amount of taxes one paid, or political participation, or heritage if both parents were citizens of the polis. The first form of citizenship was based on the way people lived in the ancient Greek times, in small - scale organic communities of the polis. Citizenship was not seen as a separate activity from the private life of the individual person, in the sense that there was not a distinction between public and private life. The obligations of citizenship were deeply connected into one 's everyday life in the polis. The Greek sense of citizenship may have arisen from military necessity, since a key military formation demanded cohesion and commitment by each particular soldier. The phalanx formation had hoplite soldiers ranked shoulder - to - shoulder in a "compact mass '' with each soldier 's shield guarding the soldier to his left. If a single fighter failed to keep his position, then the entire formation could fall apart. Individual soldiers were generally protected provided that the entire mass stayed together. This technique called for large numbers of soldiers, sometimes involving most of the adult male population of a city - state, who supplied weapons at their own expense. The idea of citizenship, then, was that if each man had a say in whether the entire city - state should fight an adversary, and if each man was bound to the will of the group, then battlefield loyalty was much more likely. Political participation was thus linked with military effectiveness. In addition, the Greek city - states were the first instances in which judicial functions were separated from legislative functions in the law courts. Selected citizens served as jurors, and they were often paid a modest sum for their service. Greeks often despised tyrannical governments. In a tyrannical arrangement, there was no possibility of citizenship since political life was totally engineered to benefit the ruler. Several thinkers suggest that ancient Sparta, not Athens, was the originator of the concept of citizenship. Spartan citizenship was based on the principle of equality among a ruling military elite called Spartiates. They were "full Spartan citizens '' -- men who graduated from a rigorous regimen of military training and at age 30 received a land allotment called a kleros, although they had to keep paying dues to pay for food and drink as was required to maintain citizenship. In the Spartan approach to phalanx warfare, virtues such as courage and loyalty were particularly emphasized relative to other Greek city - states. Each Spartan citizen owned at least a minimum portion of the public land which was sufficient to provide food for a family, although the size of these plots varied. The Spartan citizens relied on the labor of captured slaves called helots to do the everyday drudgework of farming and maintenance, while the Spartan men underwent a rigorous military regimen, and in a sense it was the labor of the helots which permitted Spartans to engage in extensive military training and citizenship. Citizenship was viewed as incompatible with manual labor. Citizens ate meals together in a "communal mess ''. They were "frugally fed, ferociously disciplined, and kept in constant training through martial games and communal exercises, '' according to Hosking. 
As young men, they served in the military. It was seen as virtuous to participate in government when men grew older. Participation was required; failure to appear could entail a loss of citizenship. But the philosopher Aristotle viewed the Spartan model of citizenship as "artificial and strained '', according to one account. While Spartans were expected to learn music and poetry, serious study was discouraged. Historian Ian Worthington described a "Spartan mirage '' in the sense that the mystique about military invincibility tended to obscure weaknesses within the Spartan system, particularly their dependence on helots. In contrast with Athenian women, Spartan women could own property, and owned at one point up to 40 % of the land according to Aristotle, and they had greater independence and rights, although their main task was not to rule the homes or participate in governance but rather to produce strong and healthy babies. Aristotle, according to J.G.A. Pocock, suggested that ancient Greeks thought that being a citizen was a natural state. It was an elitist notion, according to Peter Riesenberg, in which small scale communities had generally similar ideas of how people should behave in society and what constituted appropriate conduct. Geoffrey Hosking described a possible Athenian logic leading to participatory democracy: If you 've got a lot of soldiers of rather modest means, and you want them to enthusiastically participate in war, then you 've got to have a political and economic system which does n't allow too many of them to fall into debt, because debt ultimately means slavery, and slaves can not fight in the army. And it needs a political system which gives them a say on matters that concern their lives. As a consequence, the original Athenian aristocratic constitution gradually became more inappropriate, and gave way to a more inclusive arrangement. In the early 6th century BCE, the reformer Solon canceled all existing land debts, and enabled free Athenian males to participate in the assembly or ecclesia. In addition, he encouraged foreign craftsmen, particularly skilled in pottery, to move to Athens and offered citizenship by naturalization as an incentive. Solon expected that aristocratic Athenians would continue running affairs but nevertheless citizens had a "political voice in the Assembly. '' Subsequent reformers moved Athens even more towards direct democracy. The Greek reformer Cleisthenes in 508 BC re-engineered Athenian society from organizations based on family - style groupings, or phratries, to larger mixed structures which combined people from different types of geographic areas -- coastal areas and cities, hinterlands, and plains -- into the same group. Cleisthenes abolished the tribes by "redistributing their identity so radically '' so they ceased to exist. The result was that farmers, sailors and sheepherders came together in the same political unit, in effect lessening kinship ties as a basis for citizenship. In this sense, Athenian citizenship extended beyond basic bonds such as ties of family, descent, religion, race, or tribal membership, and reached towards the idea of a civic multiethnic state built on democratic principles. Cleisthenes took democracy to the masses in a way that Solon did n't... Cleisthenes gave these same people the opportunity to participate in a political system in which all citizens -- noble and non-noble -- were in theory equal, and regardless of where they lived in Attica, could take part in some form of state administration. 
According to Feliks Gross, such an arrangement can succeed if people from different backgrounds can form constructive associations. The Athenian practice of ostracism, in which citizens could vote anonymously for a fellow citizen to be expelled from Athens for up to ten years, was seen as a way to pre-emptively remove a possible threat to the state, without having to go through legal proceedings. It was intended to promote internal harmony. Athenian citizenship was based on obligations of citizens towards the community rather than rights given to its members. This was not a problem because people had a strong affinity with the polis; their personal destiny and the destiny of the entire community were strongly linked. Also, citizens of the polis saw obligations to the community as an opportunity to be virtuous. It was a source of honour and respect. According to one view, the citizenry was "its own master ''. The people were sovereign; there was no sovereignty outside of the people themselves. In Athens, citizens were both ruler and ruled. Further, important political and judicial offices were rotated to widen participation and prevent corruption, and all citizens had the right to speak and vote in the political assembly. Pocock explained: ... what makes the citizen the highest order of being is his capacity to rule, and it follows that rule over one 's equal is possible only where one 's equal rules over one. Therefore the citizen rules and is ruled; citizens join each other in making decisions where each decider respects the authority of the others, and all join in obeying the decisions (now known as "laws '') they have made. The Athenian conception was that "laws that should govern everybody, '' in the sense of equality under the law or the Greek term isonomia. Citizens had certain rights and duties: the rights included the chance to speak and vote in the common assembly, to stand for public office, to serve as jurors, to be protected by the law, to own land, and to participate in public worship; duties included an obligation to obey the law, and to serve in the armed forces which could be "costly '' in terms of buying or making expensive war equipment or in risking one 's own life, according to Hosking. This balance of participation, obligations and rights constituted the essence of citizenship, together with the feeling that there was a common interest which imposed its obligations on everyone. Hosking noticed that citizenship was "relatively narrowly distributed '' and excluded all women, all minors, all slaves, all immigrants, and most colonials, that is, citizens who left their city to start another usually lost their rights from their city - state of origin. Many historians felt this exclusiveness was a weakness in Athenian society, according to Hosking, but he noted that there were perhaps 50,000 Athenian citizens overall, and that at most, a tenth of these ever took part in an actual assembly at any one time. Hosking argued that if citizenship had been spread more widely, it would have hurt solidarity. Pocock expresses a similar sentiment and noted that citizenship requires a certain distance from the day - to - day drudgery of daily living. Greek males solved this problem to some extent with the subjugation of women as well as the institution of slavery which freed their schedules so they could participate in the assembly. Pocock asked: for citizenship to happen, was it necessary to prevent free people from becoming "too much involved in the world of things ''? 
Or could citizenship be extended to working class persons, and if so, what would this mean for the nature of citizenship itself? The philosopher Plato envisioned a warrior class similar to the Spartan conception in that these persons did not engage in farming, business, or handicrafts; their main duty was to prepare for war: to train and exercise, constantly. Like the Spartan practice, Plato 's idealized community was one of citizens who kept common meals to build common bonds. Citizenship status, in Plato 's ideal view, was inherited. There were four separate classes, and there were penalties for failing to vote. A key part of citizenship was obeying the law, being "deferent to the social and political system '', and having internal self - control. Writing a generation after Plato, and in contrast with his teacher, Aristotle did not like Sparta 's commune - oriented approach. He felt that Sparta 's land allocation system, as well as the communal meals, led to a world in which rich and poor were polarized. He recognized differences in citizenship patterns based on age: the young were "underdeveloped '' citizens, while the elderly were "superannuated '' citizens. And he noted that it was hard to classify the citizenship status of some persons, such as resident aliens who still had access to courts, or citizens who had lost their citizenship franchise. Still, Aristotle 's conception of citizenship was that it was a legally guaranteed role in creating and running government. It reflected the division of labor, which he believed was a good thing; citizenship, in his view, was a commanding role in society with citizens ruling over non-citizens. At the same time, there could not be a permanent barrier between the rulers and the ruled, according to Aristotle 's conception, and if there was such a barrier, citizenship could not exist. Aristotle 's sense of citizenship depended on a "rigorous separation of public from private, of polis from oikos, of persons and actions from things '' which allowed people to interact politically with equals. To be truly human, one had to be an active citizen in the community: To take no part in the running of the community 's affairs is to be either a beast or a god! In Aristotle 's view, "man is a political animal ''. Isolated men were not truly free, in his view. A beast was animal - like, without self - control over passions and unable to coordinate with other beasts, and therefore could not be a citizen. And a god was so powerful and immortal that he or she did not need help from others. In Aristotle 's conception, citizenship was generally possible in a small city - state, since it required direct participation in public affairs with people knowing "one another 's characters ''. What mattered, according to Pocock 's interpretation of Aristotle, was that citizens had the freedom to take part in political discussions if they chose to do so. And citizenship was not merely a means to being free, but was freedom itself, a valued escape from the home - world of the oikos to the political world of the polis. It meant active sharing in civic life, meaning that all men rule and are ruled in turn. And citizens were those who shared in deliberative and judicial office, and in that sense attained the status of citizenship. What citizens do should benefit not just a segment of society, but be in the interest of everybody. Unlike Plato, Aristotle believed that women were incapable of citizenship, since it did not suit their natures.
In Aristotle 's conception, humans are destined "by nature '' to live in a political association and take short turns at ruling, inclusively, participating in making legislative, judicial and executive decisions. But Aristotle 's sense of "inclusiveness '' was limited to adult Greek males born in the polity: women, children, slaves, and foreigners (that is, resident aliens) were generally excluded from political participation. Roman citizenship was similar to the Greek model but differed in substantive ways. Geoffrey Hosking argued that Greek ideas of citizenship in the city - state, such as the principles of equality under the law, civic participation in government, and notions that "no one citizen should have too much power for too long '', were carried forth into the Roman world. But unlike the Greek city - states, which enslaved captured peoples following a war, Rome offered relatively generous terms to its captives, including chances for captives to have a "second category of Roman citizenship ''. Conquered peoples could not vote in the Roman assembly but had full protections of the law, could make economic contracts, and could marry Roman citizens. They blended together with Romans in a culture sometimes described as Romanitas -- ceremonies, public baths, games, and a common culture helped unite diverse groups within the empire. One view was that the Greek sense of citizenship was an "emancipation from the world of things '' in which citizens essentially acted upon other citizens; material things were left back in the private domestic world of the oikos. But the Roman sensibility took into account to a greater extent that citizens could act upon material things as well as upon other citizens, in the sense of buying or selling property, possessions, titles, and goods. Accordingly, citizens often encountered other citizens on the basis of commerce, which often required regulation. This introduced a new level of complexity regarding the concept of citizenship. Pocock explained: The person was defined and represented through his actions upon things; in the course of time, the term property came to mean, first, the defining characteristic of a human or other being; second, the relation which a person had with a thing; and third, the thing defined as the possession of some person. A further departure from the Greek model was that the Roman government pitted the upper - class patrician interests against the lower - order working groups known as the plebeian class in a dynamic arrangement, sometimes described as a "tense tug - of - war '' between the dignity of the great man and the liberty of the small man. Through worker discontent, the plebs threatened to set up a rival city to Rome, and through negotiation around 494 BCE won the right to have their interests represented in government by officers known as tribunes. The Roman Republic, according to Hosking, tried to find a balance between the upper and lower classes. And writers such as Burchell have argued that citizenship meant different things depending on what social class one belonged to: for upper - class men, citizenship was an active chance to influence public life; for lower - class men, it was about a respect for "private rights '' or ius privatum. Pocock explained that a citizen came to be understood as a person "free to act by law, free to ask and expect the law 's protection, a citizen of such and such a legal community, of such and such a legal standing in that community ''.
An example was Saint Paul demanding fair treatment after his arrest by claiming to be a Roman citizen. Many thinkers, including Pocock, suggested that the Roman conception of citizenship placed greater emphasis than the Greek one on citizenship as a legal relationship with the state, described as the "legal and political shield of a free person ''. And citizenship was believed to have had a "cosmopolitan character ''. Citizenship meant having rights to possessions, immunities, and expectations, which were "available in many kinds and degrees, available or unavailable to many kinds of person for many kinds of reason ''. Citizens could "sue and be sued in certain courts ''. And the law itself was a kind of bond uniting people, in the sense of it being the result of past decisions by the assembly, such that citizenship came to mean "membership in a community of shared or common law ''. According to Pocock, the Roman emphasis on law changed the nature of citizenship: it became more impersonal, universal, multiform, having different degrees and applications. It included many different types of citizenship: sometimes municipal citizenship, sometimes empire - wide citizenship. Law continued to advance as a subject under the Romans, who developed it into a kind of science known as jurisprudence. Law helped protect citizens: The college of priests agreed to have basic laws inscribed upon twelve stone tablets displayed in the forum for everyone to see... Inscribing these things on stone tablets was very important because it meant, first of all, that law was stable and permanent; the same for everyone, and it could not be altered at the whim of powerful people. And secondly, it was publicly known; it was not secret; it could be consulted by anybody at any time. Specialists in law found ways to adapt the fixed laws, and to have the common law, or ius gentium, work in harmony with natural law, or ius naturale, which are rules common to all things. Property was protected by law, and served as a protection of individuals against the power of the state. In addition, unlike the Greek model, where laws were mostly made in the assembly, Roman law was often determined in places other than official government bodies. Rules could originate through court rulings, by looking to past court rulings, and by sovereign decrees, and the effect was that the assembly 's power became increasingly marginalized. In the Roman Empire, polis citizenship expanded from small scale communities to the entire empire. In the early years of the Roman Republic, citizenship was a prized relationship which was not widely extended. Romans realized that granting citizenship to people from all over the empire legitimized Roman rule over conquered areas. As the centuries went by, citizenship was no longer a status of political agency; it had been reduced to a judicial safeguard and the expression of rule and law. The Roman conception of citizenship was relatively more complex and nuanced than the earlier Athenian conception, and it usually did not involve political participation. There was a "multiplicity of roles '' for citizens to play, and this sometimes led to "contradictory obligations ''. Roman citizenship was not a single black - and - white category of citizen versus non-citizen; rather, there were more gradations and relationships possible. Women were respected to a greater extent, with a secure status as what Hosking terms "subsidiary citizens ''.
But the citizenship rules generally had the effect of building loyalty throughout the empire among highly diverse populations. The Roman statesman Cicero, while encouraging political participation, saw that too much civic activism could have consequences that were possibly dangerous and disruptive. David Burchell argued that in Cicero 's time there were too many citizens pushing to "enhance their dignitas '', and the result of a "political stage '' with too many actors all wanting to play a leading role was discord. The problem of extreme inequality of landed wealth led to a decline in the citizen - soldier arrangement, and was one of many causes leading to the dissolution of the Republic and rule by dictators. The Roman Empire gradually expanded the inclusiveness of persons considered as "citizens '', while the economic power of persons declined, and fewer men wanted to serve in the military. The granting of citizenship to wide swaths of non-Roman groups diluted its meaning, according to one account. When the Western Roman Empire fell in 476 AD, the western part, run from Rome, was sacked, while the eastern empire, headquartered at Constantinople, endured. Some thinkers suggest that as a result of historical circumstances, western Europe evolved with two competing sources of authority -- religious and secular -- and that the ensuing separation of church and state was a "major step '' in bringing forth the modern sense of citizenship. In the eastern half, which survived, religious and secular authority were merged in the one emperor. The eastern Roman emperor Justinian, who ruled the eastern empire from 527 to 565, thought that citizenship meant people living with honor, not causing harm, and giving "each their due '' in relations with fellow citizens. In the feudal system, there were relationships characterized as reciprocal, with bonds between lords and vassals going both ways: vassals promised loyalty and subsistence, while lords promised protection. The basis of the feudal arrangement was control over land. The loyalty of a person was not to a law, or to a constitution, or to an abstract concept such as a nation, but to a person, namely the next level up, such as a knight, lord, or king. One view is that feudalism 's reciprocal obligation system gave rise to the idea of the individual and the citizen. According to a related view, the Magna Carta, while a sort of "feudal document '', marked a transition away from feudalism, since the document was not a personal unspoken bond between nobles and the king, but rather was more like a contract between two parties, written in formal language, describing how different parties were supposed to behave towards each other. The Magna Carta posited that the liberty, security and freedom of individuals were "inviolable ''. Gradually the personal ties linking vassals with lords were replaced with contractual and more impersonal relationships. The early days of medieval communes were marked by intensive citizenship, according to one view. Sometimes there was terrific religious activism, spurred by fanatics and religious zealotry, and as a result of the discord and religious violence, Europeans learned to prefer the "dutiful passive citizen '' to the "self - directed religious zealot '', according to another view. According to historian Andrew C. Fix, unlike the rest of Europe, 14th - century Italy was urbanized to a much greater extent, with more people living in towns such as Milan, Rome, Genoa, Pisa, Florence, Venice and Naples.
Trade in spices with the Middle East, and new industries such as wool and clothing, led to greater prosperity, which in turn permitted greater education and study of the liberal arts, particularly among urbanized youth. A philosophy of studia humanitatis, later called humanism, emerged, with an emphasis away from the church and towards secularism; thinkers reflected on the study of ancient Rome and ancient Greece, including their ideas of citizenship and politics. Competition among the cities helped spur thinking. Fix suggested that of the northern Italian cities, it was Florence which most closely resembled a republic, although most Italian cities were "complex oligarchies ruled by groups of rich citizens called patricians, the commercial elite ''. While the city had thrown off control by the Holy Roman Empire in the twelfth century, it came close to being conquered by Milan in 1402, but was spared when the capable leader of Milan died unexpectedly during a siege. Florence 's city leaders wondered how to protect their city in the future, and figured that civic education was crucial so that citizens and leaders could cope with future unexpected crises. Politics, previously "shunned as unspiritual '', came to be viewed as a "worthy and honorable vocation '', and it was expected that most sectors of the public, from the richer commercial classes and patricians to workers and the lower classes, should participate in public life. A new sense of citizenship began to emerge, based on an "often turbulent internal political life in the towns '', according to Fix, with competition among guilds and "much political debate and confrontation ''. During the Renaissance and the growth of Europe, the medieval political scholar Walter Ullmann suggested, the essence of the transition was from people being subjects of a monarch or lord to being citizens of a city, and later of a nation. A distinguishing characteristic of a city was having its own law, courts, and independent administration. And being a citizen often meant being subject to the city 's law in addition to helping to choose officials. Cities were defensive entities, and their citizens were persons who were "economically competent to bear arms, to equip and train themselves ''. According to one theorist, the requirement that individual citizen - soldiers provide their own equipment for fighting helped to explain why Western cities evolved the concept of citizenship, while Eastern ones generally did not. And city dwellers who had fought alongside nobles in battles were no longer content with having a subordinate social status, but demanded a greater role in the form of citizenship. In addition to city administration as a way of participating in political decision - making, membership in guilds was an indirect form of citizenship in that it helped their members succeed financially; guilds exerted considerable political influence in the growing towns. During the European Middle Ages, citizenship was usually associated with cities. Nobles in the aristocracy enjoyed privileges of a higher order than commoners. The rise of citizenship was linked to the rise of republicanism, according to one account, since if a republic belongs to its citizens, then kings have less power. In the emerging nation - states, the territory of the nation was its land, and citizenship was an idealized concept.
Increasingly, citizenship bound a person not to a lord or count, but to the state, on the basis of more abstract terms such as rights and duties. Citizenship was increasingly seen as a result of birth, that is, a birthright. But nations often welcomed foreigners with vital skills and capabilities, and came to accept these new people under a process of naturalization. The increasing frequency of naturalization helped people see citizenship as a relationship which was freely chosen. Citizens were people who voluntarily chose allegiance to the state, who accepted the legal status of citizenship with its rights and responsibilities, who obeyed its laws, and who were loyal to the state. The early modern period saw significant social change in Great Britain in terms of the position of individuals in society and the growing power of Parliament in relation to the monarch. In the 17th century, there was renewed interest in Magna Carta. Passage of the Petition of Right in 1628 and the Habeas Corpus Act in 1679 established certain liberties for subjects. The idea of a political party took form with groups debating rights to political representation during the Putney Debates of 1647. After the English Civil Wars (1642 -- 1651) and the Glorious Revolution of 1688, the Bill of Rights was enacted in 1689, which codified certain rights and liberties. The Bill set out the requirement for regular elections, rules for freedom of speech in Parliament, and limits on the power of the monarch, ensuring that, unlike much of Europe at the time, royal absolutism would not prevail. Across Europe, the Age of Enlightenment in the 18th century spread new ideas about liberty, reason and politics across the continent and beyond. British colonists across the Atlantic had grown up in a system in which local government was democratic, marked by participation by affluent men, but after the French and Indian War, colonists came to resent an increase in taxes imposed by Britain to offset expenses. What was particularly irksome to colonists was their lack of representation in the British Parliament, and the phrase no taxation without representation became a common grievance. The struggle between rebelling colonists and British troops was a time when citizenship "worked '', according to one view. American and subsequent French declarations of rights were instrumental in linking the notion of fundamental rights to popular sovereignty, in the sense that governments drew their legitimacy and authority from the consent of the governed. The Framers designed the United States Constitution to accommodate a rapidly growing republic by opting for representative democracy as opposed to direct democracy, but this arrangement challenged the idea of citizenship in the sense that citizens were, in effect, choosing other persons to represent them and take their place in government. The revolutionary spirit created a sense of "broadening inclusion ''. The Constitution specified a three - part structure of government with a federal government and state governments, but it did not define citizenship. The Bill of Rights protected the rights of individuals from intrusion by the federal government, although it had little impact on judgements by the courts for the first 130 years after ratification.
The term citizen was not defined by the Constitution until the Fourteenth Amendment was added in 1868, which defined American citizenship as "All persons born or naturalized in the United States '' not possessing foreign allegiance. The American Revolution demonstrated that it was plausible for Enlightenment ideas about how a government should be organized to actually be put into practice. The French Revolution marked major changes and has been widely seen as a watershed event in modern politics. Up until then, the main ties between people under the Ancien Régime were hierarchical, such that each person owed loyalty to the next person further up the chain of command; for example, serfs were loyal to local vassals, who in turn were loyal to nobles, who in turn were loyal to the king, who in turn was presumed to be loyal to God. Clergy and aristocracy had special privileges, including preferential treatment in law courts and exemption from taxes; this last privilege had the effect of placing the burden of paying for national expenses on the peasantry. One scholar who examined pre-Revolutionary France described powerful groups which stifled citizenship, including provincial estates, guilds, military governors, courts with judges who owned their offices, independent church officials, proud nobles, financiers and tax farmers. They blocked citizenship indirectly, since they kept a small elite governing group in power and kept regular people away from participating in political decision - making. These arrangements changed substantially during and after the French Revolution. Louis XVI mismanaged funds, vacillated, and was blamed for inaction during a famine, causing the French people to see the interest of the king and the national interest as opposed. During the early stages of the uprising, the abolition of aristocratic privilege took place at a pivotal meeting on August 4, 1789, in which an aristocrat, the Vicomte de Noailles, proclaimed before the National Assembly that he would renounce all special privileges and would henceforward be known only as the "Citizen of Noailles ''. Other aristocrats joined him, which helped to dismantle the Ancien Régime 's seignorial rights during "one night of heated oratory '', according to one historian. Later that month, the Assembly 's Declaration of the Rights of Man and Citizen linked the concept of rights with citizenship and asserted that the rights of man were "natural, inalienable, and sacred '', that all men were "born free and equal, and that the aim of all political association is maintenance of their rights '', according to historian Robert Bucholz. However, the document said nothing about the rights of women, although the activist Olympe de Gouges issued a proclamation two years later arguing that women were born with equal rights to men. People began to identify a new loyalty to the nation as a whole, as citizens, and the idea of popular sovereignty, earlier espoused by the thinker Rousseau, took hold, along with strong feelings of nationalism. Louis XVI and his wife were guillotined. Citizenship became more inclusive and democratic, aligned with rights and national membership. The king 's government was replaced with an administrative hierarchy at all levels, from a national legislature down to the local commune, such that power ran both up and down the chain of command. Loyalty became a cornerstone in the concept of citizenship, according to Peter Riesenberg.
One analyst suggested that in the French Revolution, two often polar - opposite versions of citizenship merged: (1) the abstract idea of citizenship as equality before the law, caused by the centralizing and rationalizing policies of absolute monarchs, and (2) the idea of citizenship as a privileged status reserved for rule - makers, brought forth defensively by an aristocratic elite guarding its exclusiveness. According to one view, that of the German philosopher Max Stirner, the Revolution emancipated the citizen but not the individual, since individuals were not the agents of change; only the collective force of all individuals was, and in Stirner 's sense the "agent of change '' was effectively the nation. The British thinker T.H. Marshall saw in the 18th century a "serious growth '' of civil rights, with major growth in the legal aspects of citizenship, often defended through courts of law. These civil rights extended citizenship 's legal dimensions: they included the right to free speech, the right to a fair trial, and generally equal access to the legal system. Marshall saw the 18th century as signifying civil rights, which were a precursor to political rights such as suffrage and, later, in the 20th century, to social rights such as welfare. After 1750, states such as Britain and France invested in massive armies and navies which were so expensive to maintain that the option of hiring mercenary soldiers became less attractive. Rulers recruited troops from the public, and taxed the public to pay for them, but one account suggested that the military buildup had a side - effect of undermining the military 's autonomous political power. Another view corroborates the idea that military conscription spurred the development of a broader role for citizens. A phenomenon known as the public sphere arose, according to philosopher Jürgen Habermas, as a space between authority and private life in which citizens could meet informally, exchange views on public matters, criticize government choices and suggest reforms. It happened in physical spaces such as public squares, coffeehouses, museums and restaurants, as well as in media such as newspapers, journals, and dramatic performances. It served as a counterweight to government, a check on its power, since a bad ruling could be criticized by the public in venues such as newspaper editorials. According to Schudson, the public sphere was a "playing field for citizenship ''. In the late 19th century, thinking about citizenship began to influence China. Discussion started of ideas (such as legal limits, definitions of monarchy and the state, parliaments and elections, an active press, public opinion) and of concepts (such as civic virtue, national unity, and social progress). John Stuart Mill in his work On Liberty (1859) believed that there should be no distinctions between men and women, and that both were capable of citizenship. The British sociologist Thomas Humphrey Marshall suggested that the changing patterns of citizenship were as follows: first, a civil relation in the sense of having equality before the law, followed by political citizenship in the sense of having the power to vote, and later a social citizenship in the sense of having the state support individual persons along the lines of a welfare state. Marshall argued in the middle of the 20th century that modern citizenship encompassed all three dimensions: civil, political, and social.
He wrote that citizenship required a vital sense of community in the sense of a feeling of loyalty to a common civilization. Thinkers such as Marc Steinberg saw citizenship emerge from a class struggle interrelated with the principle of nationalism. People who were native - born or naturalised members of the state won a greater share of rights out of "a continuing series of transactions between persons and agents of a given state in which each has enforceable rights and obligations '', according to Steinberg. This give - and - take led to a common acceptance of the powers of both the citizen and the state. He argued that: The contingent and uneven development of a bundle of rights understood as citizenship in the early nineteenth century was heavily indebted to class conflict played out in struggles over state policy on trade and labor. Nationalism emerged, and many thinkers suggest that notions of citizenship rights grew out of this spirit of each person identifying strongly with the nation of their birth. A modern type of citizenship is one which lets people participate in a number of different ways. Citizenship is not a "be-all end - all '' relation, but only one of many types of relationships which a person might have. It has been seen as an "equalizing principle '' in the sense that most other people have the same status. One theory sees different types of citizenship emanating out from concentric circles -- from the town, to the state, to the world -- and holds that citizenship can be studied by looking at which types of relations people value at any one time. The idea that participating in lawmaking is an essential aspect of citizenship continues to be expressed by different thinkers. For example, the British journalist and pamphleteer William Cobbett said that the "greatest right '', which he called the "right of rights '', was having a share in the "making of the laws '', and then submitting the laws to the "good of the whole ''. The idea of citizenship, and western senses of government, began to emerge in Asia in the 19th and 20th centuries. In Meiji Japan, popular social forces exerted influence against traditional types of authority, and out of a period of negotiations and concessions by the state came a time of "expanding democracy '', according to one account. Numerous cause - and - effect relations worked to bring about a Japanese version of citizenship: expanding military activity led to an enlarged state and territory, which furthered direct rule, including the power of the military and the Japanese emperor, but this indirectly led to popular resistance, struggle, bargaining, and consequently an expanded role for citizens in early 20th - century Japan. The concept of citizenship is hard to isolate, since it relates to many other contextual aspects of society such as the family, military service, the individual, freedom, religion, ideas of right and wrong, ethnicity, and patterns for how a person should behave in society. According to the British politician Douglas Hurd, citizenship is essentially doing good to others. When there are many different ethnic and religious groups within a nation, citizenship may be the only real bond which unites everybody as equals without discrimination -- it is a "broad bond '', as one writer described it. Citizenship links "a person with the state '' and gives people a universal identity -- as a legal member of a nation -- besides their identity based on ties of ethnicity or an ethnic self.
But clearly there are wide differences between ancient conceptions of citizenship and modern ones. While the modern one still respects the idea of participation in the political process, it is usually done through "elaborate systems of political representation at a distance '', such as representative democracy, and carried out under the "shadow of a permanent professional administrative apparatus ''. Unlike the ancient patterns, modern citizenship is much more passive; action is delegated to others; citizenship is often a constraint on acting, not an impetus to act. Nevertheless, citizens are aware of their obligations to authorities, and they are aware that these bonds limit "their personal political autonomy in a quite profound manner ''. But some disagree that the contrast between ancient and modern versions of citizenship is that sharp; one theorist suggested that the supposedly "modern '' aspects of so - called passive citizenship, such as tolerance, respect for others, and simply "minding one 's own business '', were present in ancient times too. Citizenship can be seen as both a status and an ideal. Sometimes mentioning the idea of citizenship implies a host of theories as well as the possibility of social reform, according to one view. It invokes a model of what a person should do in relation to the state, and suggests education or punishment for those who stray from the model. Several thinkers see the modern notion of individualism as sometimes consistent with citizenship and at other times opposed to it. Accordingly, the modern individual and the modern citizen seem to be the same, but too much individualism can have the effect of leading to a "crisis of citizenship ''. Another thinker agreed that individualism can corrupt citizenship. A third sees citizenship as a substantial dilemma between the individual and society, and between the individual and the state, raising questions such as whether the focus of a person 's efforts should be on the collective good or on the individual good. In a Marxist view, the individual and the citizen were both "essentially necessary '' to each other in that neither could exist without the other, but both aspects within a person were essentially antagonistic to each other. Habermas suggested in his book The Structural Transformation of the Public Sphere that while citizenship widened to include more people, the public sphere shrank and became commercialized, devoid of serious debate, with media coverage of political campaigns having less focus on issues and more focus on sound bites and political scandals; in the process, citizenship became more common but meant less. Political participation declined for most people. Other thinkers echo that citizenship is a vortex for competing ideas and currents, sometimes working against each other, sometimes working in harmony. For example, sociologist T.H. Marshall suggested that citizenship embodied a contradiction between the "formal political equality of the franchise '' and the "persistence of extensive social and economic inequality ''. In Marshall 's sense, citizenship was a way to straddle both issues. A wealthy person and a poor person were both equal in the sense of being citizens, but remained separated by economic inequality. Marshall saw citizenship as the basis for awarding social rights, and he made a case that extending such rights would not jeopardize the structure of social classes or end inequality.
He saw capitalism as a dynamic system with constant clashes between citizenship and social class, and held that how these clashes played out determined how a society 's political and social life would manifest itself. Citizenship was not always about including everybody; it was also a powerful force for excluding persons at the margins of society, such as outcasts, illegal immigrants and others. In this sense, citizenship was not only about getting rights and entitlements, but was also a struggle to "reject claims of entitlement by those initially residing outside the core, and subsequently, of migrant and immigrant labour ''. But one thinker described democratic citizenship as generally inclusive, writing that democratic citizenship "... extends human, political and civil rights to all inhabitants, regardless of race, religion, ethnicity, or culture. In a civic state, which is based on the concept of such citizenship, even foreigners are protected by the rule of law ''. Citizenship in the modern sense is often seen as having two widely divergent strains marked by tension between them. The liberal - individualist conception of citizenship, or sometimes merely the liberal conception, has a concern that the individual 's status may be undermined by government. The perspective suggests a language of "needs '' and "entitlements '' necessary for human dignity, and is based on reason and the pursuit of self - interest, or more accurately enlightened self - interest. The conception suggests a focus on the manufacture of material things as well as on man 's economic vitality, with society seen as a "market - based association of competitive individuals ''. From this view, citizens are sovereign, morally autonomous beings with duties to pay taxes, obey the law, engage in business transactions, and defend the nation if it comes under attack, but they are essentially passive politically. This conception of citizenship has sometimes been termed conservative in the sense that passive citizens want to conserve their private interests, and that private people have a right to be left alone. This formulation of citizenship was expressed somewhat in the philosophy of John Rawls, who believed that every person in a society has an "equal right to a fully adequate scheme of equal basic rights and liberties '' and that society has an obligation to try to benefit the "least advantaged members of society ''. But this sense of citizenship has been criticized; according to one view, it can lead to a "culture of subjects '' with a "degeneration of public spirit '', since economic man, or homo economicus, is too focused on material pursuits to engage in civic activity and be a true citizen. A competing vision is that democratic citizenship may be founded on a "culture of participation ''. This orientation has sometimes been termed the civic - republican or classical conception of citizenship, since it focuses on the importance of people practicing citizenship actively and finding places to do this. Unlike the liberal - individualist conception, the civic - republican conception emphasizes man 's political nature, and sees citizenship as an active, not passive, activity. A general problem with this conception, according to critics, is that if this model is implemented, it may bring about other issues such as the free rider problem, in which some people neglect basic citizenship duties and consequently get a free ride supported by the citizenship efforts of others.
This view emphasizes the democratic participation inherent in citizenship, which can "channel legitimate frustrations and grievances '' and bring people together to focus on matters of common concern, leading to a politics of empowerment, according to theorist Dora Kostakopoulou. Like the liberal - individualist conception, it is concerned about government running roughshod over individuals, but unlike the liberal - individualist conception, it is relatively more concerned that government will interfere with popular places to practice citizenship in the public sphere than that it will take away or lessen particular citizenship rights. This sense of citizenship has been described as "active and public citizenship '', and has sometimes been called a "revolutionary idea ''. According to one view, most people today live as citizens according to the liberal - individualist conception but wish they lived more according to the civic - republican ideal. The subject of citizenship, including political discussions about what exactly the term describes, can be a battleground for ideological debates. In Canada, citizenship and related issues such as civic education are "hotly contested ''. There continues to be sentiment within the academic community that trying to define one "unitary theory of citizenship '' which would describe citizenship in every society, or even in any one society, would be a meaningless exercise. Citizenship has been described as "multi-layered belongings '' -- different attachments, different bonds and allegiances. This is the view of Hebert & Wilkinson, who suggest there is not one single perspective on citizenship but "multiple citizenship '' relations, since each person belongs to many different groups which define him or her. Sociologist Michael Schudson examined changing patterns of citizenship in US history and suggested there were four basic periods in its development. Schudson chronicled changing patterns in which citizenship expanded to include formerly disenfranchised groups such as women and minorities while parties declined. Interest groups influenced legislators directly via lobbying. Politics retreated to being a peripheral concern for citizens, who were often described as "self - absorbed ''. In 21st - century America, citizenship is generally considered to be a legal marker recognizing that a person is an American. Duty is generally not part of citizenship. Citizens generally do not see themselves as having a duty to provide assistance to one another, although officeholders are seen as having a duty to the public. Rather, citizenship is a bundle of rights which includes being able to get assistance from the federal government. A similar pattern marks the idea of citizenship in many western - style nations. Most Americans do not think much about citizenship except perhaps when applying for a passport and traveling internationally. Feliks Gross sees 20th - century America as an "efficient, pluralistic and civic system that extended equal rights to all citizens, irrespective of race, ethnicity and religion ''. According to Gross, the US can be considered a "model of a modern civic and democratic state '', although discrimination and prejudice still survive. The exception, of course, is that persons living within the borders of America illegally see citizenship as a major issue. Nevertheless, one of the constants is that scholars and thinkers continue to agree that the concept of citizenship is hard to define and lacks a precise meaning.
where does the tradition of stockings come from
Christmas stocking - Wikipedia A Christmas stocking is an empty sock or sock - shaped bag that is hung on Christmas Eve so that Santa Claus (or Father Christmas) can fill it with small toys, candy, fruit, coins or other small gifts when he arrives. These small items are often referred to as stocking stuffers or stocking fillers. In some Christmas stories, the contents of the Christmas stocking are the only toys the child receives at Christmas from Santa Claus; in other stories (and in tradition), some presents are also wrapped up in wrapping paper and placed under the Christmas tree. According to tradition in Western culture, a child who behaves badly during the year will receive only a piece or pile of coal; however, coal is rarely if ever actually left in a stocking, as it is considered cruel. Some people even put their Christmas stocking by their bedposts so Santa Claus can fill it by the bed while they sleep. While there are no written records of the origin of the Christmas stocking, there are popular legends that attempt to tell the history of this Christmas tradition. One such legend has several variations, but the following is a good example: Saint Nicholas, hearing of a poor man who could not afford dowries for his three daughters, secretly dropped bags of gold down the family 's chimney at night, where they landed in the girls ' stockings that had been hung by the fire to dry. This led to the custom of children hanging stockings or putting out shoes, eagerly awaiting gifts from Saint Nicholas. Sometimes the story is told with gold balls instead of bags of gold. That is why three gold balls, sometimes represented as oranges, are one of the symbols for St. Nicholas. And so, St. Nicholas is a gift - giver. This is also the origin of three gold balls being used as a symbol for pawnbrokers. According to tradition, children originally simply used one of their everyday socks, but eventually special Christmas stockings were created for this purpose. A different explanation derives the Christmas stocking custom from the Germanic / Scandinavian figure Odin. According to Phyllis Siefker, children would place their boots, filled with carrots, straw, or sugar, near the chimney for Odin 's flying horse, Sleipnir, to eat. Odin would reward those children for their kindness by replacing Sleipnir 's food with gifts or candy. This practice, she claims, survived in Germany, Belgium and the Netherlands after the adoption of Christianity and became associated with Saint Nicholas as a result of the process of Christianization. This claim is disputed, though, as there are no records of stocking - filling practices related to Odin until after the merging of St. Nicholas with Odin. St. Nicholas had an earlier merging with the Grandmother cult in Bari, Italy, where the grandmother would put gifts in stockings. This merged St. Nicholas would later travel north and merge with the Odin cults. Today, stores carry a large variety of styles and sizes of Christmas stockings, and Christmas stockings are also a popular homemade craft. Many families create their own Christmas stockings with each family member 's name applied to the stocking so that Santa will know which stocking belongs to which family member. According to Guinness World Records, the largest recorded Christmas stocking measured 51 m 35 cm (168 ft 5.65 in) in length and 21 m 63 cm (70 ft 11.57 in) in width (heel to toe) and was produced by a volunteer emergency services organisation in Carrara, Tuscany, Italy, on 5 January 2011. To fulfill the Guinness guideline that the stocking contain presents, volunteers filled it with balloons containing sweets.
Prior to this, the record had been set in London on 14 December 2007 by volunteers of The Children 's Society, whose stocking measured 32.56 m long and 14.97 m wide.
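As an illustrative aside, the metric - to - imperial conversions quoted for these record stockings can be checked with a few lines of Python; this is a minimal sketch, and the helper name m_to_ft_in is our own, not from any source:

import math

def m_to_ft_in(meters: float) -> tuple:
    """Convert metres to (feet, inches), rounded as in the article above."""
    total_inches = meters / 0.0254        # one inch is defined as exactly 0.0254 m
    feet, inches = divmod(total_inches, 12)
    return int(feet), round(inches, 2)

print(m_to_ft_in(51.35))  # -> (168, 5.65), i.e. 168 ft 5.65 in
print(m_to_ft_in(21.63))  # -> (70, 11.57), i.e. 70 ft 11.57 in

Both results match the figures quoted for the Carrara stocking.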
who is known as the father of astronomy
List of people considered father or mother of a scientific field - Wikipedia The following is a list of people who are considered a "father '' or "mother '' (or "founding father '' or "founding mother '') of a scientific field. Such people are generally regarded as having made the first significant contributions to and / or delineation of that field; they may also be seen as "a '' rather than "the '' father or mother of the field. Debate over who merits the title can be perennial. As regards science itself, the title has been bestowed on the ancient Greek philosophers Thales -- who attempted to explain natural phenomena without recourse to mythology -- and Democritus, the atomist. Only fragments of the list 's table survive in this extract, among them: Henrietta Leavitt (mother) and Edwin Hubble (father); Jean - François Champollion (Egyptology); Noam Chomsky (modern linguistics); and Francis Bacon (the scientific method), who developed the Baconian method in his Novum Organum (1620). A note on David Ricardo also survives: Carey (the passage to be looked up later) therefore denounces him as the father of communism. "Mr. Ricardo 's system is one of discords... its whole tends to the production of hostility among classes and nations... His book is the true manual of the demagogue, who seeks power by means of agrarianism, war, and plunder. '' (H.C. Carey, The Past, the Present, and the Future, Philadelphia, 1848, pp. 74 -- 75.)
when was fathers day first celebrated in the uk
Father 's Day - Wikipedia Father 's Day is a celebration honoring fathers and celebrating fatherhood, paternal bonds, and the influence of fathers in society. In Catholic Europe, it has been celebrated on March 19 (St. Joseph 's Day) since the Middle Ages. This celebration was brought by the Spanish and Portuguese to Latin America, where March 19 is often still used for it, though many countries in Europe and the Americas have adopted the U.S. date, which is the third Sunday of June. It is celebrated on various days in many parts of the world, most commonly in the months of March, April and June. It complements similar celebrations honoring family members, such as Mother 's Day, Siblings Day, and Grandparents ' Day. A customary day for the celebration of fatherhood in Catholic Europe is known to date back to at least the Middle Ages, and it is observed on 19 March, the feast day of Saint Joseph, who is referred to as the fatherly Nutritor Domini ("Nourisher of the Lord '') in Catholicism and "the putative father of Jesus '' in southern European tradition. This celebration was brought to the Americas by the Spanish and Portuguese, and in Latin America, Father 's Day is still celebrated on 19 March. The Catholic Church actively supported the custom of a celebration of fatherhood on St. Joseph 's Day from either the last years of the 14th century or from the early 15th century, apparently on the initiative of the Franciscans. In the Coptic Church, the celebration of fatherhood is also observed on St. Joseph 's Day, but the Copts observe this celebration on July 20. This Coptic celebration may date back to the fifth century. Father 's Day was not celebrated in the US, outside Catholic traditions, until the 20th century. As a civic celebration in the US, it was inaugurated in the early 20th century to complement Mother 's Day by celebrating fathers and male parenting. After Anna Jarvis ' successful promotion of Mother 's Day in Grafton, West Virginia, the first observance of a "Father 's Day '' was held on July 5, 1908, in Fairmont, West Virginia, in the Williams Memorial Methodist Episcopal Church South, now known as Central United Methodist Church. Grace Golden Clayton was mourning the loss of her father when, in December 1907, the Monongah Mining Disaster in nearby Monongah killed 361 men, 250 of them fathers, leaving around a thousand fatherless children. Clayton suggested that her pastor Robert Thomas Webb honor all those fathers. Clayton 's event did not have repercussions outside Fairmont for several reasons: the city was overwhelmed by other events, the celebration was never promoted outside the town itself, and no proclamation of it was made by the city council. Two events in particular overshadowed it: the celebration of Independence Day on July 4, 1908, with 12,000 attendees and several shows including a hot air balloon event, which took over the headlines in the following days, and the death of a 16 - year - old girl on July 4. The local church and council were overwhelmed and did not even think of promoting the event, and it was not celebrated again for many years. The original sermon was not reproduced by the press and was lost. Finally, Clayton was a quiet person who never promoted the event and never talked to others about it. In 1911, Jane Addams proposed that a citywide Father 's Day celebration be held in Chicago, but she was turned down. In 1912, there was a Father 's Day celebration in Vancouver, Washington, suggested by Methodist pastor J.J.
Berringer of the Irvington Methodist Church. They mistakenly believed that they had been the first to celebrate such a day. They followed a 1911 suggestion by the Portland Oregonian. Harry C. Meek, a member of Lions Clubs International, claimed that he had first come up with the idea for Father 's Day in 1915. Meek said that the third Sunday in June was chosen because it was his birthday. The Lions Club has named him the "Originator of Father 's Day ''. Meek made many efforts to promote Father 's Day and make it an official holiday. On June 19, 1910, a Father 's Day celebration was held at the YMCA in Spokane, Washington, by Sonora Smart Dodd. Her father, the Civil War veteran William Jackson Smart, was a single parent who raised his six children there. She was also a member of Old Centenary Presbyterian Church (now Knox Presbyterian Church), where she first proposed the idea. After hearing a sermon about Jarvis ' Mother 's Day in 1909 at Central Methodist Episcopal Church, she told her pastor that fathers should have a similar holiday to honor them. Although she initially suggested June 5, her father 's birthday, the pastors did not have enough time to prepare their sermons, and the celebration was deferred to the third Sunday in June. Several local clergymen accepted the idea, and on June 19, 1910, the first Father 's Day, "sermons honoring fathers were presented throughout the city ''. However, in the 1920s, Dodd stopped promoting the celebration because she was studying at the Art Institute of Chicago, and it faded into relative obscurity, even in Spokane. In the 1930s, Dodd returned to Spokane and started promoting the celebration again, raising awareness at a national level. She had the help of those trade groups that would benefit most from the holiday, for example the manufacturers of ties, tobacco pipes, and any traditional present for fathers. By 1938, she had the help of the Father 's Day Council, founded by the New York Associated Men 's Wear Retailers to consolidate and systematize the holiday 's commercial promotion. Americans resisted the holiday for its first few decades, viewing it as nothing more than an attempt by merchants to replicate the commercial success of Mother 's Day, and newspapers frequently featured cynical and sarcastic attacks and jokes. However, the merchants remained resilient and even incorporated these attacks into their advertisements. By the mid-1980s, the Father 's Day Council wrote, "(...) (Father 's Day) has become a Second Christmas for all the men 's gift - oriented industries. '' A bill to accord national recognition of the holiday was introduced in Congress in 1913. In 1916, President Woodrow Wilson went to Spokane to speak at a Father 's Day celebration; he wanted to make it an officially recognized federal holiday, but Congress resisted, fearing that it would become commercialized. US President Calvin Coolidge recommended in 1924 that the day be observed throughout the entire nation, but he stopped short of issuing a national proclamation. Two earlier attempts to formally recognize the holiday had been defeated by Congress. In 1957, Maine Senator Margaret Chase Smith wrote a Father 's Day proposal accusing Congress of ignoring fathers for 40 years while honoring mothers, thus "(singling) out just one of our two parents ''. In 1966, President Lyndon B. Johnson issued the first presidential proclamation honoring fathers, designating the third Sunday in June as Father 's Day.
Six years later, the day was made a permanent national holiday when President Richard Nixon signed it into law in 1972. In addition to Father 's Day, International Men 's Day is celebrated in many countries on November 19 in honor of men and boys who are not fathers. In the United States, Dodd used the "Fathers ' Day '' spelling on her original petition for the holiday, but the spelling "Father 's Day '' was already used in 1913 when a bill was introduced to the U.S. Congress as the first attempt to establish the holiday, and it was still spelled the same way when its creator was commended in 2008 by the U.S. Congress. The officially recognized date of Father 's Day varies from country to country. This section lists some significant examples, in order of date of observance; where a date 's country entries were not preserved in this extract, only the date is shown.
February 23 -- Russia (Defender of the Fatherland Day) *
St. Joseph 's Day, March 19 -- parts of Catholic Europe and Latin America (see above)
May 7 -- Kazakhstan
May 8 -- South Korea (Parents ' Day)
Second Sunday in May (May 14, 2017; May 13, 2018; May 12, 2019) -- Romania (Ziua Tatălui)
Third Sunday in May (May 21, 2017; May 20, 2018; May 19, 2019) -- Tonga
Ascension Day (May 25, 2017; May 10, 2018; May 30, 2019) -- Germany
First Sunday in June (Jun 4, 2017; Jun 3, 2018; Jun 2, 2019) -- Lithuania (Tėvo diena), Switzerland
June 5 -- Denmark (also Constitution Day)
Second Sunday in June (Jun 11, 2017; Jun 10, 2018; Jun 9, 2019) -- Austria, Belgium
Third Sunday in June (Jun 18, 2017; Jun 17, 2018; Jun 16, 2019; Jun 21, 2020) -- the United States, Canada, and many other countries
June 17 -- El Salvador
June 21 -- Egypt, Jordan, Kosovo, Lebanon, Syria, United Arab Emirates
June 22 -- Guernsey, Jersey
June 23 -- Nicaragua, Poland
Last Sunday in June (Jun 25, 2017; Jun 24, 2018; Jun 30, 2019) -- Haiti
Second Sunday in July (Jul 9, 2017; Jul 8, 2018; Jul 14, 2019) -- Uruguay
Last Sunday in July (Jul 30, 2017; Jul 29, 2018; Jul 28, 2019) -- Dominican Republic
August 8 -- China * *, Mongolia, Taiwan
Second Sunday in August (Aug 13, 2017; Aug 12, 2018; Aug 11, 2019) -- Brazil, Samoa
Last Monday in August (Aug 28, 2017; Aug 27, 2018; Aug 26, 2019) -- South Sudan
First Sunday in September (Sep 3, 2017; Sep 2, 2018; Sep 1, 2019) -- Australia
Second Sunday in September (Sep 10, 2017; Sep 9, 2018; Sep 8, 2019) -- Latvia
First Sunday in October (Oct 1, 2017; Oct 7, 2018; Oct 6, 2019) -- Luxembourg
Second Sunday in November (Nov 12, 2017; Nov 11, 2018; Nov 10, 2019) -- Estonia, Finland
November 12 -- Indonesia
December 5 -- Thailand (the birthday of King Bhumibol)
Bhadrapada Amavasya (Gokarna Aunsi), between August 30 and September 30 -- Nepal
13 Rajab, Ali Ibn Abi Talib 's birthday (April 21, 2016; April 10, 2017) -- Iran, Kuwait, Bahrain, Iraq, Oman, Qatar, Egypt, Yemen, Syria, Lebanon, Somalia, Sudan, Mauritania
* Officially, as the name suggests, the holiday celebrates people who are serving or were serving the Russian Armed Forces (both men and women). But congratulations are traditionally accepted nationwide by all fathers, other adult men and male children as well.
* * There is no official Father 's Day in the P.R. China. During the Republican period prior to 1949, Father 's Day on August 8 was first celebrated in Shanghai in 1945.
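Most of the movable dates above follow an "n-th weekday of a month '' rule. As an illustrative aside, a minimal Python sketch can reproduce the listed dates; the helper name nth_weekday is our own, not from any source:

import calendar
from datetime import date

def nth_weekday(year: int, month: int, weekday: int, n: int) -> date:
    """Return the date of the n-th given weekday of a month.
    weekday follows Python's convention: calendar.MONDAY = 0 ... calendar.SUNDAY = 6."""
    first = date(year, month, 1).weekday()   # weekday of the 1st of the month
    offset = (weekday - first) % 7           # days from the 1st to the first target weekday
    return date(year, month, 1 + offset + 7 * (n - 1))

# U.S.-style Father's Day: third Sunday in June
for year in (2017, 2018, 2019, 2020):
    print(year, nth_weekday(year, 6, calendar.SUNDAY, 3))

Running this prints June 18 (2017), June 17 (2018), June 16 (2019) and June 21 (2020), matching the dates in the list above.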
Schools in the Mendoza Province continued to celebrate Father's Day on August 24, and in 1982 the provincial governor passed a law declaring that Father's Day in the province be celebrated on that day. In 2004, a proposal to change the date to August 24 was presented to the Argentine Chamber of Deputies as a single, unified project.

In Aruba, Father's Day is celebrated on the third Sunday of June and is not a public holiday.

In Australia, Father's Day is celebrated on the first Sunday of September, which is the first Sunday of spring in Australia, and is not a public holiday. At school, children handcraft presents for their fathers, and consumer goods companies offer all sorts of special deals for fathers: socks, ties, electronics, suits, and men's healthcare products. Most families present fathers with gifts and cards and share a meal to show appreciation, much as on Mother's Day. YMCA Victoria continues the tradition of honouring the role fathers and father figures play in parenting through the annual awarding of Local Community Father of the Year in 32 municipalities in Victoria. The Father's Day Council of Victoria annually recognises fathers in the Father of the Year Award.

In Austria, Father's Day is celebrated on the second Sunday of June and is not a public holiday.

In Belgium, Father's Day is celebrated on the second Sunday of June and is not a public holiday.

In Brazil, Father's Day (Dia dos Pais in Portuguese) is celebrated three months after Mother's Day, on the second Sunday of August. The publicist Sylvio Bhering picked the day in honor of Saint Joachim, patron of fathers. While it is not an official holiday (see Public holidays in Brazil), it is widely observed and typically involves spending time with and giving gifts to one's father or father figure.

In Canada, Father's Day is celebrated on the third Sunday of June and is not a public holiday. It typically involves spending time with one's father or the father figures in one's life; small family gatherings and the giving of gifts may be part of the festivities.

In the People's Republic of China, there is no official Father's Day. Some people celebrate it on the third Sunday of June, following the tradition of the United States. Before the People's Republic, when the Republic of China governed from Nanjing, Father's Day was celebrated on August 8. The date was chosen because the eighth (bā) day of the eighth (bā) month makes two "eights" (八八, bā-bā), which sounds similar to the colloquial word for "daddy" (爸爸, bàba). It is still celebrated on this date in areas under the control of the Republic of China, including Taiwan.

In Costa Rica, the Unidad Social Cristiana party presented a bill to move the celebration of Father's Day from the third Sunday of June to March 19, the day of Saint Joseph. The change was intended to pay tribute to the saint, who gives his name to the country's capital, San José, and to allow heads of family to celebrate Father's Day at the same time as the Feast of Saint Joseph the Worker. The official date, however, remains the third Sunday of June.

In Croatia, according to the Roman Catholic tradition, fathers are celebrated on Saint Joseph's Day (Dan svetog Josipa), March 19. It is not a public holiday.

In Denmark, Father's Day is celebrated on June 5. It coincides with Constitution Day.

In Estonia, Father's Day (Isadepäev) is celebrated on the second Sunday of November.
It is an established flag day and a national holiday.

In Finland, Father's Day (Isänpäivä, Fars dag) is celebrated on the second Sunday of November. It is an established flag day.

In France, the lighter manufacturer Flaminaire first introduced the idea of Father's Day in 1949, for commercial reasons: its director, Marcel Quercia, wanted to sell the company's lighters in France. In 1950 the company introduced "la Fête des Pères", to take place every third Sunday of June (following the American example), with the slogan "Nos papas nous l'ont dit, pour la fête des pères, ils désirent tous un Flaminaire" ("Our fathers told us: for Father's Day, they all want a Flaminaire"). In 1952 the holiday was officially decreed, and a national Father's Day committee was set up to award a prize to the fathers who deserved it most (originally, candidates were nominated by the social services of each town hall or mayor's office). This complements "la Fête des Mères" (Mother's Day), which was made official in France in 1928 and added to the calendar under Vichy in 1941.

In Germany, Father's Day (Vatertag) is celebrated differently than in other parts of the world. It always falls on Ascension Day (the Thursday forty days after Easter), which is a federal holiday. Regionally, it is also called Men's Day (Männertag) or Gentlemen's Day (Herrentag). It is a tradition for groups of males (young and old, but usually excluding pre-teenage boys) to go on a hiking tour with one or more small hand-pulled wagons (Bollerwagen) carrying wine or beer bottles (according to the region) and traditional regional food (Hausmannskost). Many men use the holiday as an opportunity to get drunk; according to the Federal Statistical Office of Germany, alcohol-related traffic accidents triple on this day. The tradition of Father's Day is especially prevalent in eastern Germany. These traditions are probably rooted in Christian Ascension Day processions to the farmlands, which have been held since the 18th century: men would be seated in a wooden cart and carried to the village plaza, where the mayor would award a prize, usually a big piece of ham, to the father who had the most children. In the late 19th century the religious component was progressively lost, especially in urban areas such as Berlin, and groups of men organized walking excursions with beer and ham; by the 20th century, alcohol consumption had become a major part of the tradition. Many people take the following Friday off work, and some schools are closed on that Friday as well; many then use the resulting four-day weekend for a short vacation.

In Greece, Father's Day is observed on the third Sunday of June, as in many other countries, including the U.S. As elsewhere in Europe, the day has also become an occasion for demonstrations by divorced fathers; it was inaugurated in Greece by Professor Nicolas Spitalas (http://spitalas.blogspot.com), who also created the International Movement of Dads. His association SYGAPA (Men's and Fathers' Dignity, www.sos-sygapa.eu) claims 35,000 members. In Greece, as in other European countries, the day is known as the Feast of Fathers (compare the French Fête des Pères).

In Haiti, Father's Day (Fête des Pères) is celebrated on the last Sunday of June and is not a public holiday.
Fathers are recognized and celebrated on this day with cards, gifts, and a breakfast, lunch, brunch, or early Sunday dinner, whether the day is spent at the beach or in the mountains, on family time, or on favourite activities. Children exclaim "bonne fête papa", while everyone wishes all fathers "bonne Fête des Pères" ("Happy Father's Day").

In Hong Kong, Father's Day is celebrated on the third Sunday of June and is not a public holiday.

In Hungary, Father's Day is celebrated on the third Sunday of June and is not a public holiday.

Father's Day (Telugu: ఫాదర్స్ డే) is not celebrated throughout India; it is observed on the third Sunday of June, mostly in westernized urban centers, and is not a public holiday. The day is usually celebrated only in the bigger cities, such as Hyderabad, Chennai, Mumbai, New Delhi, Kanpur, Bengaluru, and Kolkata. After the day was first observed in the United States in 1908 and gradually gained popularity there, Indian metropolitan cities followed suit much later by recognising the event. In India, the day is usually celebrated with children giving gifts such as greeting cards, electronic gadgets, shirts, coffee mugs, or books to their fathers.

In Indonesia, Father's Day is celebrated on November 12 and is not a public holiday. Father's Day in Indonesia was first declared in 2006, at Solo City Hall, at a gathering attended by hundreds of people from various community groups, including members of an inter-religious communication community. Because of its recent declaration, there is not much hype around the celebration compared with the celebration of Mother's Day on December 22.

In Ireland, Father's Day is celebrated on the third Sunday of June and is not a public holiday.

In Israel, Father's Day is usually celebrated on May 1, together with Workers' Day or Labour Day.

In Italy, according to the Roman Catholic tradition, Father's Day is celebrated on Saint Joseph's Day, commonly called the Feast of Saint Joseph (Festa di San Giuseppe), March 19. It was a public holiday until 1977.

In Japan, Father's Day is celebrated on the third Sunday of June and is not a public holiday.

Kazakhstan continues the Soviet Union's tradition of celebrating Defender of the Fatherland Day instead of Father's Day, as do Russia and other former Soviet countries. Usually called "Man's Day", it is considered the equivalent of Father's Day and is still celebrated on February 23.

In Kenya, Father's Day is celebrated on the third Sunday of June and is not a public holiday.

In South Korea, Parents' Day is celebrated on May 8 and is not a public holiday.

In Latvia, Father's Day (Tēvu diena) is celebrated on the second Sunday of September and is not a public holiday. The day was not always celebrated in Latvia, because of the influence of the USSR with its own set of holidays. It was officially marked in the Latvian calendar for the first time in 2008, when it was celebrated on September 14 (the second Sunday of September), to promote the idea that a father should be satisfied with and proud of his family and children, and that he deserves gratitude and loving words from his family in return for his continuous selfless care. Because the day is new to the country, it does not yet have established traditions of its own; people in Latvia borrow ideas from other countries' Father's Day traditions to congratulate their fathers.

In Lithuania, Father's Day (Tėvo diena) is celebrated on the first Sunday of June and is a public holiday.
In Macau, Father's Day (Dia do Pai) is celebrated on the third Sunday of June and is not a public holiday.

In Malaysia, Father's Day falls on the third Sunday of June.

Malta has followed the international trend and celebrates Father's Day on the third Sunday of June. As with Mother's Day, the introduction of Father's Day celebrations in Malta was encouraged by Frans H. Said (Uncle Frans of the children's radio programmes). The first mention of Father's Day was in June 1977, and the day is now part of the local events calendar (The Times of Malta, 11 June 2017; Il-Mument, a Maltese newspaper, 18 June 2017).

In Mexico, Father's Day is celebrated on the third Sunday of June and is not a public holiday.

The Mongolian Men's Association has celebrated Father's Day on August 8 since 2005.

The Newar population (natives of the Kathmandu Valley) in Nepal honors fathers on the day of Kusa Aunsi, which occurs in late August or early September, depending on the year, since it is set by the lunar calendar. The Western-inspired celebration of Father's Day that was imported into the country is always celebrated on the same day as Gokarna Aunsi, and the rest of the population has also begun to celebrate the Gokarna Aunsi day. It is commonly known as Abu ya Khwa Swoyegu in Nepal Bhasa, or Buwaako mukh herne din (बुवाको मुख हेर्ने दिन) in Nepali (literally "day for looking at father's face"). On the new moon day (Amavasya), it is traditional to pay respect to one's deceased father: Hindus go to the Shiva temple of Gokarneswor Mahadev in Gokarna, a suburb of Kathmandu, while Buddhists go to the Jan Bahal (Seto Machhendranath, or White Tara) temple in Kathmandu. Traditionally, in the Kathmandu Valley, the south-western corner is reserved for women and women-related rituals and the north-eastern corner for men and men-related rituals. The worship place for Mata Tirtha Aunsi ("Mother Pilgrimage New Moon") is located in Mata Tirtha, in the south-western half of the valley, while the worship place for Gokarna Aunsi is located in the north-eastern half. This division is reflected in many aspects of life in the Kathmandu Valley.

In the Netherlands, Father's Day (Vaderdag) is celebrated on the third Sunday of June and is not a public holiday. Traditionally, as on Mother's Day, fathers get breakfast in bed made by their children, and families gather for dinner, usually at the grandparents' house. In recent years, families have also started dining out, and as on Mother's Day, it is one of the busiest days for restaurants. At school, children handcraft presents for their fathers, and consumer goods companies offer all sorts of special deals for fathers: socks, ties, electronics, suits, and men's healthcare products.

In New Zealand, Father's Day is celebrated on the first Sunday of September and is not a public holiday. Father's Day seems to have been first observed at St Matthew's Church, Auckland, on 14 July 1929, and it first appeared in commercial advertising the following year. By 1931 other churches had adopted the day. In 1935 much of Australia moved to mark the day at the beginning of September, and New Zealand followed, with a Wellington advertisement in 1937, a Christchurch Salvation Army service in 1938, and observance in Auckland from 1939.

In Norway, Father's Day (Farsdag) is celebrated on the second Sunday of November. It is not a public holiday.

In Pakistan, Father's Day is celebrated on the third Sunday of June.
The Rutgers WPF launched a campaign titled "Greening Pakistan: Promoting Responsible Fatherhood" on Father's Day (Sunday, June 18, 2017) across Pakistan to promote active fatherhood and responsibility for the care and upbringing of children. Father's Day is not a public holiday in Pakistan.

In Peru, Father's Day is celebrated on the third Sunday of June and is not a public holiday. People usually give a present to their fathers and spend time with them, mostly over a family meal.

In the Philippines, Father's Day is officially celebrated every first Monday of December, but it is not a public holiday. It is more widely observed by the public on the third Sunday of June, perhaps due to American influence.

In Poland, Father's Day (Dzień Ojca) is celebrated on June 23 and is not a public holiday.

In Portugal, Father's Day (Dia do Pai) is celebrated on March 19 (see the Roman Catholic tradition below). It is not a bank holiday.

In the Roman Catholic tradition, fathers are celebrated on Saint Joseph's Day, commonly called the Feast of Saint Joseph, March 19, though in certain countries Father's Day has become a secular celebration. It is also common for Catholics to honor their "spiritual father", their parish priest, on Father's Day.

The law instituting the Father's Day celebration in Romania was passed on September 29, 2009, and established that Father's Day be celebrated annually on the second Sunday of May. It was celebrated for the first time on May 9, 2010; in 2018 it fell on May 13. The next dates of the celebration are May 12, 2019; May 10, 2020; May 9, 2021; May 8, 2022; May 14, 2023; May 12, 2024; and May 11, 2025.

Russia continues the Soviet Union's tradition of celebrating Defender of the Fatherland Day instead of Father's Day. Usually called "Man's Day", it is considered the Russian equivalent of Father's Day.

In Samoa, Father's Day is celebrated on the second Sunday in August and is a recognised national holiday on the following Monday.

In the Seychelles, Father's Day is celebrated on June 16 and is not a public holiday.

In Singapore, Father's Day is celebrated on the third Sunday of June but is not a public holiday.

In Slovakia, Father's Day (Deň otcov) is celebrated on the third Sunday of June. It is not a public holiday.

In South Africa, Father's Day is celebrated on the third Sunday of June. It is not a public holiday.

In South Sudan, Father's Day is celebrated on the last Monday of August. President Salva Kiir Mayardit proclaimed it before August 27, 2012, when it was celebrated for the first time; it had not been celebrated in 2011, the year of the country's independence.

In Spain, Father's Day (El Día del Padre) is observed on the feast day of Saint Joseph, March 19. It is a public holiday in some regions of Spain.

In Sri Lanka, Father's Day (Sinhala: Piyawarunge dhinaya, පියවරුන්ගේ දිනය; Tamil: Thanthaiyar Thinam, தந்தையர் தினம்) is observed on the third Sunday of June. It is not a public holiday, but many schools hold special events to honor fathers.

In Sudan, Father's Day (عيد الأب) is celebrated on June 21.

In Sweden, Father's Day (Fars dag) is celebrated on the second Sunday of November but is not a public holiday.

In Taiwan, Father's Day is not an official holiday but is widely observed on August 8, the eighth day of the eighth month of the year.
In Mandarin Chinese, the pronunciation of the number eight is bā, which is very similar to that of the character 爸 (bà), meaning "Pa" or "dad". The eighth day of the eighth month (bā-bā) is thus a pun on "dad" (爸爸, bàba), and the Taiwanese therefore sometimes refer to August 8 as "Bābā Holiday", a pun on "Dad's Holiday" (爸爸節), also known by the more formal name "Father's Day" (父親節).

In Thailand, the birthday of the king is set as Father's Day; December 5 is the birthday of the late King Bhumibol Adulyadej (Rama IX). Traditionally, Thais celebrate by giving their father or grandfather a canna flower (ดอกพุทธรักษา, Dok Buddha Ruksa), which is considered a masculine flower, although this is not as commonly practiced today. Thai people wear yellow on this day to show respect for the late king, because yellow is the color of Monday, the day on which King Bhumibol Adulyadej was born. Thais flood Sanam Luang, a massive park in front of the palace, to watch the king give his annual speech, and often stay until the evening, when there is a national ceremony in which Thais light candles and show respect to the king by declaring their faith. This ceremony happens in almost every village in Thailand, and even overseas at Thai organizations. The celebration first gained nationwide popularity in the 1980s as part of a campaign by Prime Minister Prem Tinsulanonda to promote Thailand's royal family. Mother's Day is celebrated on the birthday of Queen Sirikit, August 12.

In Trinidad and Tobago, Father's Day is celebrated on the third Sunday of June and is not a public holiday.

In Turkey, Father's Day is celebrated on the third Sunday of June and is not a public holiday.

In the United Arab Emirates, Father's Day is celebrated on June 21, generally coinciding with Midsummer's Day.

In the United Kingdom, Father's Day is celebrated on the third Sunday of June. The day does not have a long tradition; The English Year (2006) states that it entered British popular culture "sometime after the Second World War, not without opposition".

In the US, Father's Day is celebrated on the third Sunday of June. Typically, families gather to celebrate the father figures in their lives. In recent years, retailers have adapted to the holiday by promoting greeting cards and gifts such as electronics and tools. Schools (if in session) and other children's programs commonly have activities to make Father's Day gifts. The U.S. Open golf tournament is scheduled to finish on Father's Day, as was the 2016 NBA Finals.

In Ukraine, Father's Day is celebrated on the third Sunday of June.

In Venezuela, Father's Day is celebrated on the third Sunday of June and is not a public holiday. Traditionally, as on Mother's Day, families gather for lunch, usually at the grandparents' house. In recent years, families have also started lunching out, and as on Mother's Day, it is one of the busiest days for restaurants. At school, children handcraft presents for their fathers, and consumer goods companies offer all sorts of special deals for fathers: electronics, suits, and men's healthcare products.