|Column||Type||Range / values|
|text||string||length 5.43k – 47.1k|
|id||string||length 47 – 47|
|dump||string||7 classes|
|url||string||length 16 – 815|
|file_path||string||length 125 – 142|
|language||string||1 class|
|language_score||float64||0.65 – 1|
|token_count||int64||4.1k – 8.19k|
|score||float64||2.52 – 4.88|
|int_score||int64||3 – 5|
|— City —|
|City of Killeen|
|Motto: "Where Freedom Grows"|
|• City Council||Mayor Scott Cosper|
|• City Manager||Glenn Morrison|
|• Total||54.2 sq mi (140.5 km2)|
|• Land||53.6 sq mi (138.8 km2)|
|• Water||0.7 sq mi (1.7 km2)|
|Elevation||890 ft (270 m)|
|Population (2013 est.)|
|• Density||2,513/sq mi (970.3/km2)|
|Time zone||Central (CST) (UTC-6)|
|• Summer (DST)||CDT (UTC-5)|
|GNIS feature ID||1360642|

Killeen is a city in Bell County, Texas, United States. According to the 2010 United States Census, the city's population was 127,921, making it the 21st most populous city in Texas. It is the "principal city" of the Killeen–Temple–Fort Hood Metropolitan Statistical Area. Killeen is directly adjacent to the main cantonment of Fort Hood, and as such its economy heavily depends on the post and the soldiers (and their families) stationed there.

In 1881, the Gulf, Colorado and Santa Fe Railway extended its tracks through central Texas, buying 360 acres (1.46 km2) a few miles southwest of a small farming community known as Palo Alto, which had existed since about 1872. The railroad platted a 70-block town on its land and named it after Frank P. Killeen, the assistant general manager of the railroad. By the next year the town included a railroad depot, a saloon, several stores, and a school. Many of the residents of the surrounding smaller communities in the area moved to Killeen, and by 1884 the town had grown to include about 350 people, served by five general stores, two gristmills, two cotton gins, two saloons, a lumberyard, a blacksmith shop, and a hotel. Killeen expanded as it became an important shipping point for cotton, wool, and grain in western Bell and eastern Coryell counties. About 780 people lived in Killeen by 1900. Around 1905, local politicians and businessmen convinced the Texas legislature to build bridges over Cowhouse Creek and other streams, doubling Killeen's trade area. A public water system began operation in 1914, by which time the population had increased to 1,300 residents.

Until the 1940s Killeen remained a relatively small and isolated farm trade center, but this changed drastically after 1942, when Camp Hood (re-commissioned as Fort Hood in 1950) was created as a military training post to meet the demands of the Second World War. Laborers, construction workers, contractors, soldiers, and their families moved into the area by the thousands, and Killeen became a military boomtown. The opening of Camp Hood also radically altered the nature of the local economy, since the sprawling new military post covered almost half of Killeen's farming trade area. The loss of more than 300 farms and ranches led to the demise of Killeen's cotton gins and other farm-related businesses. New businesses were started to provide services for the military camp.

Killeen suffered a recession when Camp Hood was all but abandoned after the end of the Second World War, but when Fort Hood was established as a permanent army post in 1950, the city boomed again. Its population increased from about 1,300 in 1949 to 7,045 in 1950, and between 1950 and 1951 about 100 new commercial buildings were constructed in Killeen. By 1955, Killeen had an estimated 21,076 residents and 224 businesses. Troop cutbacks and transfers in the mid-1950s led to another recession in Killeen which lasted until 1959, when various divisions were returned to Fort Hood. (Elvis Presley lived in Killeen for a time during his stint in the army.)
The town continued to grow through the 1960s, especially after the Vietnam War led to increased activity at Fort Hood. By 1970 Killeen had developed into a city of 35,507 inhabitants and had added a municipal airport, a new municipal library, and a junior college (Central Texas College). By 1980, when the census counted 49,307 people in Killeen, it was the largest city in Bell County. By 1990 its population had increased to 63,535, and 265,301 people lived in the Killeen/Temple metropolitan area.

In addition to shaping local economic development after 1950, the military presence at Fort Hood also changed the city's racial, religious, and ethnic composition. No blacks lived in the city in 1950, for example, but by the early 1950s the town had added Marlboro Heights, an all-black subdivision, and in 1956 the city school board voted to integrate the local high school. The city's first resident Catholic priest was assigned to the St. Joseph's parish in 1954, and around the same time, new Presbyterian and Episcopal churches were built. By the 1980s the city had a heterogeneous population including whites, blacks, Mexican Americans, Koreans, and a number of other foreign nationals.

The year 1991 was a roller-coaster year for Killeen. After the Iraqi invasion of Kuwait in the late summer of 1990, the city prepared for war, sending thousands of troops from the Second Armored Division and the First Cavalry Division to the Middle East. On October 16, 1991, George Hennard murdered 23 people and then committed suicide in the Luby's in Killeen (see Luby's shooting). In December 1991, one of Killeen's high school football teams, the Killeen Kangaroos, won the 5-A Division I state football championship by defeating Sugar Land Dulles 14–10 in the Astrodome.

By 2000, the census listed Killeen's population as 86,911, and by 2010 it was over 127,000, making it one of the fastest-growing areas in the nation. A large number of military personnel from Killeen have served in the wars in Iraq and Afghanistan. As of April 2008, over 400 of its soldiers had died in the two wars. On November 5, 2009, only a few miles from the site of the Luby's tragedy, a gunman opened fire on people at the Fort Hood military base with a handgun, killing 13 and wounding 32. The gunman, Nidal Malik Hasan, sustained four gunshot wounds in a brief shootout with a civilian police officer, leaving him paralyzed from the waist down, before he was taken into custody; he was later sentenced to death (see 2009 Fort Hood shooting). In 2011, Killeen received media attention for a new television series called Surprise Homecoming, hosted by Billy Ray Cyrus, about military families with loved ones returning home from overseas. On April 2, 2014, a second shooting spree occurred at several locations at Fort Hood. Four people were killed, including the gunman, Ivan Lopez, who committed suicide, while sixteen additional people were injured (see 2014 Fort Hood shooting).

Killeen is located in western Bell County at (31.105591, -97.726586). It is bordered to the north by Fort Hood and to the east by Harker Heights. Killeen is 16 miles (26 km) west of Belton, the county seat and nearest access to Interstate 35. According to the United States Census Bureau, the city has a total area of 54.2 square miles (140.5 km2), of which 53.6 square miles (138.8 km2) is land and 0.66 square miles (1.7 km2), or 1.24%, is water.
|Climate data for Killeen, Texas (table truncated in source; surviving first-column values: record high 88 °F, average high 58 °F, average low 34 °F, record low 5 °F, precipitation 1.66 in)|

As of the census of 2010, there were 127,921 people, 48,052 households, and 33,276 families residing in the city. The population density was 2,458.9 people per square mile (949.3/km²). There were 53,913 housing units at an average density of 999.9 per square mile (386.0/km²). The racial makeup of the city was 45.1% White, 34.1% Black, 0.8% Native American, 4% Asian, 1.4% Pacific Islander, 7.9% from other races, and 6.7% from two or more races. Hispanics or Latinos of any race were 22.9% of the population.

There were 48,052 households, out of which 40.1% had children under the age of 18 living with them, 47.1% were married couples living together, 17.2% had a female householder with no husband present, and 30.8% were non-families. 24.4% of all households were made up of individuals, and 3.6% had someone living alone who was 65 years of age or older. The average household size was 2.66 and the average family size was 3.17. In the city the population was spread out, with 33.2% under the age of 20, 38.7% from 20 to 39, 22.8% from 40 to 64, and 5.2% who were 65 years of age or older. The median age was 27 years.

The median income for a household in the city was $44,370, and the median income for a family was $36,674. The per capita income for the city was $20,095, compared to the national per capita of $39,997. About 11.2% of families and 16.4% of the population were below the poverty line, including 18.5% of those under age 18 and 8.6% of those age 65 or over.

According to the city's 2008 Comprehensive Annual Financial Report, the top employers in the city are:
|#||Employer||# of Employees|
|2||Killeen Independent School District||6,000|
|3||Central Texas College||1,360|
|5||Fort Hood Exchange||1,218|
|6||City of Killeen||1,100|
|7||First National Bank||1,000|
|8||Sallie Mae (Now Aegis Limited)||936|

Killeen Mall serves as the city's main shopping destination and one of two regional shopping malls in Bell County.

Arts and culture
Killeen is home to Vive Les Arts Theatre, a full-time arts organization which produces several Main Stage and Children's Theatre shows each year.

On November 8, 2011, five members of the Killeen City Council were recalled. As a consequence, the remaining members of the council were not able to achieve a quorum, and the City Council was in effect disbanded until at least three seats were filled. It was believed that this would not occur until May 2012.

According to the city's 2008 Comprehensive Annual Financial Report, the city's various funds had $133.4 million in revenues, $119.0 million in expenditures, $523.3 million in total assets, $219.9 million in total liabilities, and $90.4 million in cash and investments.

The structure of the management and coordination of city services is:
|City Manager||Glenn Morrison|
|Assistant City Manager||John Sutton|
|Building Official||Earl Abbott|
|City Attorney||Kathryn H. Davis|
|City Secretary||Paula Miller|
|Chief of Police||Dennis M. Baldwin|
|Director of Aviation||Vacant|
|Director of Community Development||Leslie Hinkle|
|Director of Convention & Visitor's Bureau||Connie Kuehl|
|Director of Finance|
|Director of Fleet|
|Director of General Services|
|Director of Human Resources||Debbie Maynor|
|Director of Information Technology||Vacant|
|Director of Library Services||Deanna Frazee|
|Director of Planning||Dr. Ray Shanaa|
|Director of Public Information||Hilary Shine|
|Director of Public Works||Scott Osburn|
|Director of Solid Waste and Drainage Services||Vacant|
|Director of Street Services||John Koester|
|Director of Utility Services||Robert White|
|Director of Volunteer Services||Will Brewster|
|Director of Water & Sewer||Robert White|
|Fire Chief||Jerry Gardner|

The Killeen Independent School District (KISD) is the largest school district between Round Rock and Dallas, encompassing Killeen, Harker Heights, Fort Hood, Nolanville, and rural west Bell County. KISD has thirty-two elementary schools (PK-5), eleven middle schools (6-8), four high schools (9-12), and five specialized campuses. KISD's four high schools and mascots are the Killeen High School Kangaroos (the original city-wide high school), the Ellison High School Eagles, the Harker Heights High School Knights, and the Shoemaker High School Grey Wolves. Memorial Christian Academy (K-12) and Creek View Academy (previously Destiny School), a K-9 charter school of Honors Academy, are in Killeen.

Colleges and universities
Central Texas College was established in 1965 to serve Bell, Burnet, Coryell, Hamilton, Lampasas, Llano, Mason, Mills and San Saba counties in addition to Fort Hood. CTC offers more than 40 associate degrees and certificates of completion. Texas A&M University-Central Texas opened on September 1, 1999, as a branch campus of nearby Tarleton State University. After the campus enrolled 1,000 full-time equivalent students, Tarleton State University-Central Texas became a separate institution within the Texas A&M University System. The university offers bachelor's and master's degrees.

Killeen's main newspaper is the Killeen Daily Herald, which has been publishing under different formats since 1890. The paper was one of four owned by the legendary Texas publisher Frank W. Mayborn, whose wife remains its editor and publisher. The Herald also publishes the Fort Hood Herald, an independent publication covering the Fort Hood area that is not authorized by Fort Hood Public Affairs, and the Cove Herald, a weekly paper for the residents of Copperas Cove. The official paper of Fort Hood is The Fort Hood Sentinel, an authorized publication for members of the U.S. Army that is editorially independent of the U.S. government and military.

Killeen is served by a small regional airfield known as Skylark Field (ILE) and the larger Killeen–Fort Hood Regional Airport (GRK). The Hill Country Transit District (The HOP) operates a public bus transit system within the city with eight routes, including connections to Temple, Copperas Cove, and Harker Heights. The HOP buses are easily identified by their teal and purple exteriors; the district recently purchased new buses in a green color scheme. Major highways that run through Killeen are U.S. Highway 190 (Central Texas Expressway or CenTex), Business Loop 190 (Veterans Memorial Boulevard), State Highway 195, and Spur 172 (leading into Fort Hood's main gate). Interstate 35 is accessible in Belton, 16 miles (26 km) east of the center of Killeen.

The Killeen Fire Department is led by the current Chief Jerry Gardner, who has been the Fire Chief since 2006, when he joined KFD after leading the Pasadena Fire Department in the Houston area for many years. Chief Gardner is assisted in his duties by three deputy chiefs: Steve Buchanan, Kenneth Hawthorne, and Brian Brank. In addition to the staff officers, the staff is supplemented and assisted by several secretaries and paid assistants.
The Killeen Fire Department is divided into three divisions: Training, Fire Prevention, and Operations. The latter is broken into three shifts: A, B, and C.

- The Training Division is led by the senior training lieutenant, Randy Pearson. He is assisted by junior lieutenant Mikkie Jordan. Together they are responsible for all of the training of on-duty personnel, as well as fire training academies for cadet trainees. The training division hosts two training academies per year for individuals who wish to become Texas Certified Fire Fighters. They also host a two-year program in conjunction with the Killeen Independent School District that allows high school juniors and seniors to become certified firefighters while graduating from high school. The Killeen Fire Department and Killeen Independent School District are the first in the state to have such a program. To date it has been a very successful program, resulting in the hiring of many local men and women directly out of high school.
- The training division is also responsible for community outreach programs:
- Child Safety Seat Class - The Killeen Fire Department holds classes regarding child safety seats every 1st and 3rd Thursday of the month. The class covers the value of proper child safety seat installation, and staff will also help install a privately purchased seat. On a limited basis, the Fire Department also has child safety seats available to low-income families.
- Child Immunization - The Killeen Fire Department hosts annual immunization drives. These are no-cost shot clinics aimed at both civilian and military families. They are hosted at the beginning of the school year, during the end of summer vacation. They are also hosted on a monthly basis, every second Saturday (except for August), from 10:00 a.m. to 2:00 p.m. at the Killeen Fire Training Center. Again, these are at no cost to the individual and are aimed at providing a better standard of living for the citizens of central Texas.
- The Killeen Fire Department's Fire Prevention Division is currently helmed by Fire Marshal James Chism. Mr. Chism and his four inspectors are responsible for the inspection of all businesses within the city limits. They are also responsible for the investigation of all fires, both accidental and malicious. Their arson investigations have one of the highest conviction rates within Texas, sometimes doubling the rates of similar-sized municipalities. The Fire Prevention Division attained the rating of number one in fire prevention in the nation in the mid-1970s.
- The third division, Operations, is the largest and best known. It is responsible for the day-to-day operations of the fire department. The Operations Division is responsible for in excess of 12,000 ambulance calls and 6,000 fire calls annually. The Operations Division is led by Deputy Chief Steven Buchanan and is divided equally amongst three shifts, each rotating on duty for 24 hours followed by 48 hours off. The schedule is designed so that there is a full complement of personnel 24/7/365. Each shift is further divided into two battalions, which are led by battalion captains. Battalion 1 is headquartered at Central Fire Station and is led by BC Joel Secrist (A-shift), BC Leon Adamski (B-shift), and BC Cody Simmons (C-shift). Battalion 1 encompasses Fire Stations 1, Central, 3, and 4, which protect the older northern portion of the city.
Battalion 2 is headquartered at Fire Station #8 and is led by BC Bill Brooks (A-shift), BC Clay Brooks (B-shift), and BC Linda Brooks (C-shift). Battalion 2 encompasses Fire Stations 5, 6, 7, and 8, protecting the southern portion of the city in addition to providing protection to the extraterritorial jurisdiction in the rural area south of the city limits.

Currently the department provides emergency services from 8 fire stations strategically placed throughout the city. Nearly two hundred personnel staff 5 engine companies, 2 ladder companies, 7 ambulances, and one aircraft rescue firefighting unit. In addition to the line companies, the two battalion captains are assisted with EMS supervision by the EMS lieutenant assigned to each shift. KFD recently relocated Fire Station #1 to a new facility on Westcliff Road to provide improved responses in the northern areas of the city, and Fire Station #9 is currently being planned for the southwest area of town to improve protection for the growing population in that area.

In 2008, there were 885 violent crimes and 4,757 non-violent crimes reported in the city of Killeen as part of the FBI's Uniform Crime Reports (UCR) Program. Violent crimes are the aggregation of the UCR Part 1 crimes of murder, forcible rape, robbery, and aggravated assault. Non-violent crimes are the aggregation of the crimes of burglary, larceny-theft, and motor vehicle theft. Killeen's 2008 UCR Part 1 crimes break down as follows:
|Crime||Reported offenses||Killeen rate||Texas rate||U.S. rate|
|Larceny - Theft||2877||2482.2||2688.9||2200.1|
|Motor vehicle theft||169||145.8||351.1||330.5|
Rates are crimes per 100,000 population. The Killeen rates are calculated using the estimated 2008 population figure of 115,906 as provided by the Texas Department of Public Safety; for example, the larceny-theft rate is 2,877 ÷ 115,906 × 100,000 ≈ 2,482.2.

- Amerie, R&B singer; her father was stationed at Fort Hood and she lived in Killeen and attended Ellison High School at one time
- Tommie Harris, defensive tackle for the NFL's Chicago Bears; former star at Ellison High School in Killeen
- Othello Henderson, defensive back for the NFL's New Orleans Saints
- Burgess Meredith, actor; played the Penguin on the Batman television series that aired during the 1960s; also played Rocky's manager in the movie Rocky, which starred Sylvester Stallone
- Tia Mowry and Tamera Mowry, of Sister, Sister fame; their father was stationed at Fort Hood and they lived in Killeen for a short time
- Elvis Presley, stationed at Fort Hood and lived in Killeen for a short time
- Burt Reynolds, actor; lived in Killeen at one time
- Darrol Ray, NFL player; attended Killeen High School, where he played quarterback; drafted in the 2nd round by the New York Jets in 1980; owner of Ray's Smokehouse BBQ in Norman, Oklahoma
- Terry Ray, American and Canadian football player for the Atlanta Falcons, New England Patriots, Edmonton Eskimos, and Winnipeg Blue Bombers; attended Ellison High School
- Michael Cummings, quarterback for the Kansas Jayhawks; attended Killeen High School
- Cory Jefferson, power forward (#21) for the Brooklyn Nets; drafted 60th in the 2014 draft by the San Antonio Spurs and traded to the Brooklyn Nets; played at Baylor from 2009-2014 and attended Killeen High School

Twin towns — Sister cities
Osan, South Korea, has been Killeen's Sister City since 1995.
- ^ a b "Geographic Identifiers: 2010 Demographic Profile Data (G001): Killeen city, Texas". U.S. Census Bureau, American Factfinder. http://factfinder2.census.gov/bkmk/table/1.0/en/DEC/10_DP/G001/1600000US4839148. Retrieved April 10, 2014. - ^ Beale, Jonathan (2008-04-09). "Grief hangs over Texas army town". BBC News. http://news.bbc.co.uk/2/hi/americas/7336427.stm. Retrieved 2008-04-08. - ^ Herskovitz, Jon (April 2014). "Shooter at Fort Hood Army base in Texas, injuries reported – police". Reuters. http://in.reuters.com/article/2014/04/02/usa-texas-shooting-idINDEEA310JL20140402. Retrieved April 2, 2014. - ^ "Fort Hood shooter snapped over denial of request for leave, Army confirms". Fox News Channel. 7 April 2014. http://www.foxnews.com/us/2014/04/07/fort-hood-shooter-snapped-over-denial-request-for-leave-army-confirms/. Retrieved 12 April 2014. - ^ "US Gazetteer files: 2010, 2000, and 1990". United States Census Bureau. 2011-02-12. http://www.census.gov/geo/www/gazetteer/gazette.html. Retrieved 2011-04-23. - ^ "Monthly Averages for Killeen, TX". Weather.com. The Weather Channel. http://www.weather.com/weather/wxclimatology/monthly/USTX0692. Retrieved August 14, 2013. - ^ Most Expensive and Most Affordable Housing Markets - ^ a b c City of Killeen CAFR Retrieved 2009-07-17 - ^ Retrieved 2011-11-16 - ^ "Contact Us." Creek View Academy. Retrieved on September 6, 2011. "Address: 1001 E. Veterans Memorial Blvd. Ste. 301 Killeen, Texas 76541 " - ^ "Killeen Daily Herald". Killeen Daily Herald. http://www.kdhnews.com/docs/about.aspx. Retrieved August 2, 2012. - ^ "The HOP Urban Time Schedule". Hill Country Transit District. http://www.takethehop.com/UrbanTimes.htm. Retrieved March 16, 2014. - ^ [http://www.killeentexas.gov/index.php?section=168 - ^ a b Texas DPS Crime In Texas 2008, Retrieved 2010-08-27 - ^ Texas DPS Crime In Texas 2008, Retrieved 2010-08-27 - ^ FBI Uniform Crime Reports - 2008 Crime In The US, Retrieved 2010-08-27 - ^ "Osan, South Korea" - ^ "sister cities" - Bell County Historical Commission. Story of Bell County, Texas 2 vols. Austin: Eakin Press, 1988. - Duncan Gra'Delle, Killeen: Tale of Two Cities, 1882–1982. Killeen, Texas: 1984. |Fort Hood||Fort Hood| |Fort Hood, Copperas Cove||Harker Heights, Nolanville| |This page uses content from the English language Wikipedia. The original content was at Killeen, Texas. The list of authors can be seen in the page history. As with this Familypedia wiki, the content of Wikipedia is available under the Creative Commons License.|
<urn:uuid:7b08fbbb-4b4a-446d-b31d-1301aa2d4e1e>
CC-MAIN-2022-33
https://familypedia.fandom.com/wiki/Killeen,_Texas
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00099.warc.gz
en
0.927171
7,082
2.53125
3
Democracy in America
|Author||Alexis de Tocqueville|
|Original title||De la démocratie en Amérique|
|Publisher||Saunders and Otley (London)|

De la démocratie en Amérique (French pronunciation: [dəla demɔkʁasi ɑ̃n‿ameˈʁik]; published in two volumes, the first in 1835 and the second in 1840) is a classic French text by Alexis de Tocqueville. Its title translates as Of Democracy in America, but English translations are usually titled simply Democracy in America. In the book, Tocqueville examines the democratic revolution that he believed had been occurring over the past seven hundred years.

In 1831, Alexis de Tocqueville and Gustave de Beaumont were sent by the French government to study the American prison system. In his later letters Tocqueville indicates that he and Beaumont used their official business as a pretext to study American society instead. They arrived in New York City in May of that year and spent nine months traveling the United States, studying the prisons and collecting information on American society, including its religious, political, and economic character. The two also briefly visited Canada, spending a few days in the summer of 1831 in what was then Lower Canada (modern-day Quebec) and Upper Canada (modern-day Ontario).

After they returned to France in February 1832, Tocqueville and Beaumont submitted their report, Du système pénitentiaire aux États-Unis et de son application en France, in 1833. When the first edition was published, Beaumont, sympathetic to social justice, was working on another book, Marie, ou, L'esclavage aux Etats-Unis (two volumes, 1835), a social critique and novel describing the separation of races in a moral society and the conditions of slaves in the United States. Before finishing Democracy in America, Tocqueville believed that Beaumont's study of the United States would prove more comprehensive and penetrating.

He begins his book by describing the change in social conditions taking place. He observed that over the previous seven hundred years the social and economic conditions of men had become more equal. The aristocracy, Tocqueville believed, was gradually disappearing as the modern world experienced the beneficial effects of equality. Tocqueville traced the development of equality to a number of factors, such as granting all men permission to enter the clergy, widespread economic opportunity resulting from the growth of trade and commerce, the royal sale of titles of nobility as a monarchical fundraising tool, and the abolition of primogeniture.

Tocqueville described this revolution as a "providential fact" of an "irresistible revolution," leading some to criticize the determinism found in the book. However, based on Tocqueville's correspondence with friends and colleagues, Marvin Zetterbaum, Professor Emeritus at the University of California, Davis, concludes that the Frenchman never accepted democracy as determined or inevitable. He did, however, consider equality more just and therefore found himself among its partisans. Given the social state that was emerging, Tocqueville believed that a "new political science" would be needed.
According to Tocqueville:

[I]nstruct democracy, if possible to reanimate its beliefs, to purify its motives, to regulate its movements, to substitute little by little the science of affairs for its inexperience, and knowledge of its true instincts for its blind instincts; to adapt its government to time and place; to modify it according to circumstances and men: such is the first duty imposed on those who direct society in our day.

The remainder of the book can be interpreted as an attempt to accomplish this goal, thereby giving advice to those people who would experience this change in social states.

The Puritan Founding
Tocqueville begins his study of America by explaining the contribution of the Puritans. According to him, the Puritans established America's democratic social state of equality. They arrived equals in education and were all middle class. In addition, Tocqueville observes that they contributed a synthesis of religion and political liberty in America that was uncommon in Europe, particularly in France. He calls the Puritan Founding the "seed" of his entire work.

The Federal Constitution
Tocqueville believed that the Puritans established the principle of sovereignty of the people in the Fundamental Orders of Connecticut. The American Revolution then popularized this principle, followed by the Constitutional Convention of 1787, which developed institutions to manage popular will. While Tocqueville speaks highly of America's Constitution, he believes that the mores, or "habits of mind," of the American people play a more prominent role in the protection of freedom. Among the topics he treats are:

- Township democracy
- Mores, Laws, and Circumstances
- Tyranny of the Majority
- Religion and beliefs
- The Family [how Americans were in that century and their interactions]
- Individualism [later this influenced writers in the Renaissance era]
- Self-Interest Rightly Understood

Situation of women
Tocqueville was one of the first social critics to examine the situation of American women and to identify the concept of Separate Spheres. The section Influence of Democracy on Manners Properly So Called of the second volume is devoted to his observations of women's status in American society. He writes: "In no country has such constant care been taken as in America to trace two clearly distinct lines of action for the two sexes and to make them keep pace one with the other, but in two pathways that are always different."

He argues that the collapse of aristocracy lessened the patriarchal rule in the family, where fathers would control daughters' marriages, meaning that women had the option of remaining unmarried and retaining a higher degree of independence. Married women, by contrast, lost all independence "in the bonds of matrimony" as "in America paternal discipline [by the woman's father] is very relaxed and the conjugal tie very strict". Because of his own view that a woman could not act on a level equal to a man, he saw a woman as needing her father's support to retain independence in marriage.
Consistent with this limited view of the potential of women to act as equals to men, and having apparently not seen on his travels the nurturing roles that many men in the United States played, particularly in the Delaware Valley region, where the Society of Friends exerted considerable influence and there was a tradition of male and female equality, Tocqueville considered the separate spheres of women and men a positive development, stating: "As for myself, I do not hesitate to avow that although the women of the United States are confined within the narrow circle of domestic life, and their situation is in some respects one of extreme dependence, I have nowhere seen women occupying a loftier position; and if I were asked, (...) to what the singular prosperity and growing strength of that people ought mainly to be attributed, I should reply,—to the superiority of their women."

The primary focus of Democracy in America is an analysis of why republican representative democracy has succeeded in the United States while failing in so many other places. Tocqueville seeks to apply the functional aspects of democracy in the United States to what he sees as the failings of democracy in his native France.

Tocqueville speculates on the future of democracy in the United States, discussing possible threats to democracy and possible dangers of democracy. These include his belief that democracy has a tendency to degenerate into "soft despotism" as well as the risk of developing a tyranny of the majority. He observes that the strong role religion played in the United States was due to its separation from the government, a separation all parties found agreeable. He contrasts this to France, where there was what he perceived to be an unhealthy antagonism between democrats and the religious, which he relates to the connection between church and state. Tocqueville also outlines the possible excesses of passion for equality among men, foreshadowing the totalitarian states of the twentieth century.

Tocqueville observed that social mechanisms have paradoxes, as in what later became known as the Tocqueville effect: "social frustration increases as social conditions improve". He wrote that this growing hatred of social privilege, as social conditions improve, leads to the state concentrating more power to itself. Tocqueville's views on the United States took a darker turn after 1840, however, as made evident in Aurelian Craiutu's Tocqueville on America after 1840: Letters and Other Writings.

Democracy in America was published in two volumes, the first in 1835 and the other in 1840. It was immediately popular in both Europe and the United States, while also having a profound impact on the French population. By the twentieth century, it had become a classic work of political science, social science, and history. It is a commonly assigned reading for undergraduates of American universities majoring in the political or social sciences, and part of the introductory political theory syllabus at Cambridge, Oxford, Princeton and other institutions. In the introduction to his translation of the book, Harvard Professor Harvey C. Mansfield calls it "at once the best book ever written on democracy and the best book ever written on America." Tocqueville's work is often acclaimed for making a number of astute predictions.
He anticipates the potential acrimony over the abolition of slavery that would tear apart the United States and lead to the American Civil War, as well as the eventual superpower rivalry between the United States and Russia, which exploded after World War II and spawned the Cold War. Noting the rise of the industrial sector in the American economy, Tocqueville, some scholars have argued, also correctly predicted that an industrial aristocracy would rise from the ownership of labor. He warned that "...friends of democracy must keep an anxious eye peeled in this direction at all times", observing that the route of industry was the gate by which a newfound wealthy class might potentially dominate, although he himself believed that an industrial aristocracy would differ from the formal aristocracy of the past. Furthermore, he foresaw the alienation and isolation that many have come to experience in modern life.

On the other hand, Tocqueville proved shortsighted in claiming that a democracy's equality of conditions stifles literary development. Although he spent several chapters lamenting the state of the arts in America, he failed to envision the literary renaissance that would shortly arrive in the form of such major writers as Edgar Allan Poe, Henry David Thoreau, Ralph Waldo Emerson, Herman Melville, Nathaniel Hawthorne and Walt Whitman. Equally, in dismissing the country's interest in science as limited to pedestrian applications for streamlining the production of material goods, he failed to imagine America's burgeoning appetite for pure scientific research and discovery.

According to Tocqueville, democracy had some unfavorable consequences: the tyranny of the majority over thought, a preoccupation with material goods, and isolated individuals. Democracy in America predicted the violence of party spirit and the judgment of the wise subordinated to the prejudices of the ignorant.

Translated versions of Democracy in America and effects on meaning
- Henry Reeve: the first English translation, later revised by Francis Bowen. In 1945, it was reissued in a modern edition by Alfred A. Knopf, edited and with an extensive historical essay by Phillips Bradley. Tocqueville wrote to Reeve providing a critique of the translation: "Without wishing to do so and by following the instinct of your opinions, you have quite vividly colored what was contrary to Democracy and almost erased what could do harm to Aristocracy." This statement indicates, first, that Tocqueville believed Reeve's translation to be problematic, and second, that he believed that Reeve's political views induced him, albeit unconsciously, to distort the original book's meaning.
- George Lawrence, translated in 1966 with an introduction by J. P. Mayer
- Gerald Bevan, translated circa 2003
- Arthur Goldhammer (Library of America): this authoritative translation requires the reader to think more about the text instead of relying on "instant opinions" provided by previous translations. A speech the translator gave at Harvard University provides insight into the development of his translation and into possible inaccuracies of the original translation; for example, a more accurate rendering of the title would be "On Democracy in America", which Reeve changed.
Although Reeve's version is not a complete rewrite, Tocqueville's clarity depended on concrete word choices, and treating words as interchangeable does affect the meaning, especially for readers who do not take the effort to research the text or to read it in its original French.
- James T. Schleifer, edited by Eduardo Nolla and published by Liberty Fund in March 2010; a bilingual edition based on the authoritative edition of the original French-language text.

- Economic sociology
- Tocqueville, Democracy in America
- The Alexis de Tocqueville Tour: Exploring Democracy in America
- Manent, Pierre. Tocqueville and the Nature of Democracy (1996)
- Morton, F. L. "Sexual Equality and the Family in Tocqueville's Democracy in America," Canadian Journal of Political Science (1984) 17#2 pp. 309–324 in JSTOR
- Schleifer, James T. The Chicago Companion to Tocqueville's Democracy in America (U of Chicago Press, 2012)
- Schneck, Stephen. "New Readings of Tocqueville's America: Lessons for Democracy," Polity (1992) 25#2 pp. 283–298 in JSTOR
- Welch, Cheryl B., ed. Cambridge Companion to Tocqueville (2006)
- Zetterbaum, Marvin. Tocqueville and the Problem of Democracy (1967)
- Tocqueville, Democracy in America (Arthur Goldhammer, trans.; Olivier Zunz, ed.) (The Library of America, 2004) ISBN 1-931082-54-5
- Tocqueville, Democracy in America (George Lawrence, trans.; J. P. Mayer, ed.; New York: Perennial Classics, 2000)
- Tocqueville, Democracy in America (Harvey Mansfield and Delba Winthrop, trans., ed.; Chicago: University of Chicago Press, 2000)
- Jean-Louis Benoît, Tocqueville Moraliste, Paris, Honoré Champion, 2004.
- Arnaud Coutant, Tocqueville et la Constitution democratique, Paris, Mare et Martin, 2008.
- A. Coutant, Une Critique republicaine de la democratie liberale, Paris, Mare et Martin, 2007.
- Laurence Guellec, Tocqueville : l'apprentissage de la liberté, Michalon, 1996.
- Lucien Jaume, Tocqueville, les sources aristocratiques de la liberte, Bayard, 2008.
- Eric Keslassy, le liberalisme de Tocqueville à l'epreuve du paupérisme, L'Harmattan, 2000
- F. Melonio, Tocqueville et les Français, 1993.
- Original French text available at French Wikisource
- Democracy in America, complete, at The University of Adelaide Library
- 1831 Notes of Alexis de Tocqueville in Lower Canada
- Democracy in America, the full book text (Note: this is public domain)
- Democracy in America, Volume 1 at Project Gutenberg
- Democracy in America, Volume 2 at Project Gutenberg
- Booknotes, February 26, 1995 – Interview with Alan Ryan on the writing of the introduction to the 1994 version of Democracy in America
- Booknotes, December 17, 2000 – Interview with Harvey Mansfield on the 2000 University of Chicago Press edition of Democracy in America, which he and his wife, Delba Winthrop, translated and edited (audio plays upon load; page includes transcript)
- Audiobook: De la démocratie en Amérique, part one (7 h 31 min), in French
- Alexis de Tocqueville: Tyranny of the Majority, EDSITEment, from the National Endowment for the Humanities
<urn:uuid:531dee95-ccb1-4f42-bdf6-a0c29c888c31>
CC-MAIN-2022-33
https://www.infogalactic.com/info/Democracy_in_America
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571090.80/warc/CC-MAIN-20220809215803-20220810005803-00098.warc.gz
en
0.863183
4,994
3.015625
3
Sigmund Freud evidently believed that Friedrich Wilhelm Nietzsche had a more "penetrating self-knowledge" than any other human being has ever had. If Freud's analysis is accurate, perhaps it explains why Nietzsche's written output so closely reflects aspects of the human psyche; like our minds, his philosophy is powerful but occasionally difficult to untangle, given apparent contradictions and complications. From among the strands of his thought, however, emerge coherent patterns. For example, although the ideas involved are convoluted, Nietzsche repeatedly asserted that a morality rooted in notions of good and evil originated in weakness, is destructive, and is arbitrary. Instead of morality, Nietzsche affirmed a value system that was based on embracing the worldly, the paradoxical, and the "evil" of noble strength and joyous independence.

The "morality" which Nietzsche eschewed included ego-less, will-less, self-denying, and unselfish actions, where an individual denies itself for the betterment of the many. Nietzsche also described destructive morality as one that claims to be absolute and universal, is based on a desire for "the triumph of good and the annihilation of evil" [The Will To Power, sec 30], and includes a desire for a far-off, idealized perfection such as the Christian "God in Heaven", the Hindu "Brahma" or the Muslim "Allah", the scientific "Truth", or the Marxist "worker's paradise". Moral virtues listed by Nietzsche included "industriousness, obedience, chastity, piety, justness" [The Gay Science, sec 21]. He also contemptuously described the moralistic "careful avoidance of the ridiculous, the offensive, the presumptuous, the suppression of one's virtues as well as of one's strongest inclinations, [and the] self-adaptation, self-depreciation, [and] submission to orders of rank" [Daybreak, sec 26].

Although Nietzsche was brought up Christian, he extended his discussion of the morality of good and evil more broadly. He included all religion, as well as all philosophy (Schopenhauer was repeatedly used as an example of having a moralistic bent, as were Hegel, Plato, Augustine, some of Nietzsche's German contemporaries, and even Spinoza and The Buddha). As expressions of the moral, Nietzsche also listed science, socialism-communism, "the authority of reason, the social instinct (the herd), history with an imminent spirit and goal to it" [The Will to Power, sec 20], or any other expression of idealism or transcendentalism. "One has to eradicate, annihilate, wage war: everywhere the Christian-nihilistic[-moralistic] value standard still has to be pulled up and fought under every mask" [The Will to Power, sec 51], he proclaimed, recognizing morality's many visages.

Nietzsche put forth many ideas about how common moral systems develop, both in societies and in individual humans, but all of his ideas maintain that they are a manifestation of sickness and weakness. In this vein, much of his 1887 book On the Genealogy of Morals was devoted to an extended creation myth that explained what he saw as the twisted and unfortunate birth of morality. Nietzsche conceptualized some people as strong, honorable, and noble, figurative "birds of prey," but most people as unfortunately being weak, detestable, cowering, and "lamb"-like.
This split is still true today, he posited, but it was during prehistory, when societies and social orders first formed, that the former, the beautiful, aristocratic, warlike, and naturally powerful, first dominated the herd of the latter, with their common, cowardly, impotent, lowly, passive, and reactive ways. The strong bathed the weak in their contempt, inventing the word "bad" to describe their inferior ways. The ruling overlords held the "master morality," which contrasted the "good" (noble, strong, and utilitarian) with the "bad" (plebeian, weak, and ineffective); the herd, however, eventually produced a divergent morality.

The domination of the masters forced the weak to bottle up the energy of their naturally outflowing animal instincts, a self-violent backlog which eventually gave them a store of "bad conscience" and submerged vengefulness (Nietzsche's term for this was the French word "ressentiment"). The repressed people soon experienced a "denial of [themselves], of [their] nature, naturalness, and actuality" [On the Genealogy of Morals, essay 2, sec 22], a storehouse of venom that was soon directed by the tortured and treacherous herd not only against themselves but also against the masters, and against any and all life-affirming qualities everywhere.

According to Nietzsche, the commoners soon came up with the slave morality, a vision of themselves as "good" and of their weak powerlessness as meritorious: "morality guarded the underprivileged from insignificance by assigning each an infinite value, a metaphysical value" [The Will to Power, sec 55]. The weak further posited that their masters were not "bad" (as in ineffective) but instead "evil" (as in wicked or corrupt). The weak condemned the noble ones' joyous appetite for life as cruel, lustful, insatiable, selfish, and godless. "Morality consequently taught [humans] to hate and despise most profoundly what is the basic character trait of those who rule: their will to power" [The Will to Power, sec 55], meaning their love of life, their creative zest, their ability to thrive and find health, and their belief in their own basic right to exist. Common morality, then, is, according to Nietzsche, essentially and always an anti-life proposition.

Nietzsche believed that the morality of his modern era, especially Christian morality, was predominantly a manifestation of the resentful slave morality of the weak herd. Speaking of many of his fellow Germans, he observed that they

monopolize virtue, these weak, hopelessly sick people … they walk among us as embodied reproaches, as warnings to us – as if health, well-constitutedness, strength, pride, and a sense of power were in themselves vicious things for which one must pay someday. [On the Genealogy of Morals, essay 3, sec 14]

The slave morality, that morality of good and evil, then, grows out of weakness and unhealth, and it resents strength and wellness. In some books written before On the Genealogy of Morals, Nietzsche explained a different theory of the origin of morality. Here, he explained, morality in a given society began with utilitarianism, an awareness of good and bad, useful and not useful (which, broadly speaking, the author later called "master morality").
Unfortunately, “out of fear or reverence for those who demanded and recommended them, or out of habit because one has seen them done all around one from childhood on, or from benevolence because their performance everywhere produced joy and concurring faces, or from vanity because they were commended” [The Wanderer and his Shadow, sec 40], the simple and realistic utilitarian assessments gave way to rigid traditions that took on the metaphysical labels of “good” and “evil”. Such societies held all moral customs tightly from then onwards, and punished the breaking of any tradition, no matter how “stupid” [Human, All Too Human, sec 92] the tradition in question. Here again, morality is seen as a sign of weakness and unhealth, arising from an unwillingness to face the demands of dynamism. The costs of this sickness include the lost freedom of the individual [Assorted Opinions and Maxims, sec 89] in the face of the “tyranny of the unconditional” [Assorted Opinions and Maxims, sec 26] and the fact that anything new is inevitably seen as “evil,” even things that will eventually be beloved and seen as “good.” Nietzsche stated that, “under the dominion of the morality of custom, [individuality and] originality of every kind has acquired a bad conscience” [Daybreak, sec 9], leading to its lessened fruition. Nietzsche also posited theories concerning how morality arises in individuals, all of them pointing to sickness as its source. Individual moral beliefs, he suggested, may be a manifestation of “the herd instinct” [The Gay Science, sec 115] in an individual person (i.e. the slave morality), or it may be the unexamined psychological remnants of the cruel prohibitions of childhood (what Freud later labeled the “super-ego“, and modern psychology calls “The Inner Critic”), mistaken in later life for the voice of God or duty [The Wanderer and his Shadow, sec 52]. Or individual morality may instead be a cowardly desire for protection from the danger and messiness of human existence, by trying to create a secure and predictable environment [Daybreak, sec 174]; or it may be to artificially create somber “unconditional duties” [The Gay Science, sec 5] that are evoked to justify a melancholy temperament and to demand unconditional support from parishioners. Nietzsche seems to have most thought, however, that individual morality arose from a sense of self-hate; he characterized the strongest advocates of morality as self-despising failures, ashamed of their existence, who seek the “appearance of superiority over more spiritual men, to enjoy, at least in [their] imagination, the delight of consummate revenge [through] morality” [The Gay Science, sec 359]. In sum, then, Nietzsche posits that weakness, unhealth, and helplessness are the source of the superiorist selflessness and the evaluations of good and evil that constitute modern morality, both at the level of society and at the level of individual. Nietzsche’s descriptions of the origins of morality are important because they are prime evidence for his claim that moral values are both arbitrary and destructive. His genealogies propel his “revaluation of all values” [The Will to Power, subtitle]. Nietzsche hoped to show that morality is always an arbitrary construction with no external universality or necessity, even when people are unaware of this fact. 
He took issue with the Christian claim to knowledge of absolute moral Truth [The Will to Power, sec 4], a grievance which is subsumed under his general lament that humans fail to recognize that they have given our ideas lives of their own and begin to think of these ideas as objective realities [Assorted Opinions and Maxims, sec 26]. "Men of knowledge … are not men of knowledge with respect to" self-awareness [On the Genealogy of Morals, preface, sec 1], he further stated, asserting that the moral philosophies of "categorical imperatives" created by philosophers always derive from autobiographical experience and little else [Beyond Good and Evil, sec 6], despite not being acknowledged as such. In frustration, Nietzsche cried out,

is the origin of all morality not to be sought in the detestable petty conclusions: "what harms me is something evil (harmful in itself); what is useful to me is something good (beneficent and advantageous in itself)"? [Daybreak, sec 102]

Nietzsche was frustrated because he believed that his fellow citizens forgot their act of creating moral values when they later believed the same values to be objectively "True"; "how little moral would the world appear without forgetfulness" [Human, All Too Human, sec 92], he exclaimed. Nietzsche was also frustrated that people overlooked the evidence for morality's subjectivity found in its constant "mutation … moiling and toiling" over time [Daybreak, sec 98]. For example, if one saw that the moral values of a society rise and fall in relation to the interests of those who they benefit [The Will to Power, sec 14], and are not "eternally" stable, one might have less faith in them. Similarly, if one saw that the "universal" and "timeless" moral justification that has been made for penal punishment has taken at least eleven different forms [On the Genealogy of Morals, essay 2, sec 13], one might be more reluctant to accept this "eternal" value. Nietzsche probably wanted his readers to see this transitoriness of morality so that they could then understand morality's emptiness and arbitrariness, a central point of his philosophy.

One reason why morality is empty for Nietzsche is because it assumes free will, culpability, and accountability. Nietzsche, however, claimed that those beliefs are abstractions from reality, linguistic conventions and beliefs of lesser minds, that lack any corporeal analogues or actual existence. His claim was, instead, that there is no actual freedom to choose between moral alternatives, and that people are always acting for the good as best their intellect and awareness allows; "actions are called evil which are only stupid" [Human, All Too Human, sec 107], he explained. In a similar vein, he posited that the distinction between "un-egoistic/good" and "egoistic/evil" is another illusion, given that all actions are inevitably directed towards self-preservation to some extent. Much of the concept of good and evil evaporates, he believed, if one understands the human's place in the universe.

Many moral concepts will also evaporate, Nietzsche believed, if one understands better how humans project human realities onto a valueless universe. Nature itself is not divided up into any good and evil, he believed, seeing how "there are no moral phenomena at all, only a moral interpretation of phenomena" [Beyond Good and Evil, sec 108]. Moral culpability is always a purely human invention, even when all people involved in a given event are convinced of its reality, as at a witch-trial.
And while Nietzsche elsewhere lamented the “hyperbolic naiveté” of humanity [The Will to Power, sec 12] in its presumption in claiming knowledge of an Absolute morality, and also its lack of modesty in blowing up its needs into cosmic and metaphysical values [The Will to Power, sec 27], he also expressed hope by saying that “when man gave all things a sex he thought, not that he was playing, but that he had gained a profound insight: it was only very late that he confessed to himself what an enormous error this was, and perhaps even now he has not confessed it completely. In the same way man has ascribed to all that exists a connection with morality and laid an ethical significance on the world’s back. One day this will have as much value, and no more, as the belief in the masculinity or femininity of the sun has today” [Daybreak, sec 3]. Similarly, he explained that “what mankind has so far considered seriously have not even been realities but mere imaginings – more strictly speaking, lies prompted by the bad instincts of sick natures that were harmful in the most profound sense – all of these concepts, ‘God,’ ‘soul,’ ‘virtue,’ ‘sin,’ ‘beyond,’ ‘truth,’ ‘eternal life’” [Ecce Homo, Why I Am So Clever, sec 10]. For Nietzsche, then, the universe is without moral value; good and evil are projected upon nature by humans in their sickness and their weakness, and in their unawareness of their own mental processes.2 As mentioned earlier in discussing its origins, morality for Nietzsche is not only arbitrary and illusory but also actively dangerous. He thought that healthy cultures and individuals leave God and morality behind, and that sick nations, in a mildly “mentally-ill” state, cannot [The Will to Power, sec 47]. With regard to both nations and people, he questioned, “what if a symptom of regression were inherent in the ‘good’, likewise a danger, a seduction, a poison, a narcotic … so that precisely morality was the danger of dangers?” [On the Genealogy of Morals, preface, sec 6], and answered his own question by labeling Jesus, the great moral “redeemer,” the most “dangerous bait” [On the Genealogy of Morals, essay 1, sec 8]. One reason Nietzsche found conventional morality dangerous was that he believed its use lacked any real benefits. According to him, morality serves to shield people from despair and nihilism [The Gay Science, sec 214] by giving them a vain justification for their life. Morality also enables one to better compete with one’s neighbors, because they are handicapped by repudiating their natural humanity in an attempt to be “good” [The Gay Science, sec 21]. Beyond these questionable benefits, however, Nietzsche says, traditional morality is an empty bargain. A second reason Nietzsche found morality dangerous is its demand for pity, which he considered to be a trap. Pity was the perilous test that was overcome by Nietzsche’s poetic alter-ego, Zarathustra, and it is “man’s greatest danger” [On the Genealogy of Morals, essay 3, sec 14], more dangerous than evil or immorality. Moralistic pity probably poses such a grave danger for Nietzsche because it tends to level human society, unnaturally causing the strong to surrender their will-to-power and leading to the loss of humanity’s greatest resource. 
“Supposing [pity] were dominant even for a single day, [hu]mankind would immediately perish of it” [Daybreak, sec 134], warned Nietzsche, who sounded almost Social Darwinist in stating that “Nature is not immoral when it has no pity for the degenerate” [The Will to Power, sec 52] and suggesting that humans should follow suit. A third reason Nietzsche sees morality as dangerous is that, as explained in discussing the origins of morality, it holds back the strong, the dynamic, and the creative, labeling all that is new or fully alive as “evil”; “the categorical imperative smells of cruelty,” he accuses [On the Genealogy of Morals, essay 2, sec 6]. A final reason Nietzsche mistrusts morality is that it is life-denying, and therefore implicitly leads to asceticism and to the hopelessness of nihilism. Morality is “a piece of tyranny against ‘nature’” [Beyond Good and Evil, sec 188] that has “exacted a high price” [The Will to Power, sec 7], he warned; “moral value judgments are a way of passing sentence, negations; morality is a way of turning one’s back on the will to existence” [The Will to Power, sec 11], making it basically unhealthy for the human organism. Morality “offends” nature and existence because it is, as Nietzsche posited, essentially a defense mechanism, a twisting of life to avoid pain. If such a pose is maintained for extended periods of time, in an individual or a society, sickness results. Large portions of life and self are repudiated and attacked as evil, and the present is always found lacking in comparison with the perfect beyond. One also strives constantly to meet an unreachable ideal, to be all “good” and never “evil,” without ever allowing oneself to express one’s dynamic strength, to experience the relief of acting “as all the world does and ‘let [oneself] go’ like all the world” [On the Genealogy of Morals, essay 2, sec 24], or to relax and love one’s fate exactly as it occurs, all of which are Nietzschean aims of existence. Because of this striving, this denying, and the associated repression of faculties, and because the ascetic need for truth will always inevitably uncover the emptiness of moral valuations and leave one with no objective truths, Nietzsche claimed that idealism, theism, and any other form of otherworldly morality lead to ascetic nihilism, to “the weary pessimistic glance, mistrust of the riddle of life, the icy No of disgust with life” [On the Genealogy of Morals, essay 2, sec 7]. Morality leads humanity to becoming the “sick animal” [On the Genealogy of Morals, essay 3, sec 13]; he says that it shuts off the will-to-live in this universe, and one is left instead with being “‘too good’ for this world” [On the Genealogy of Morals, essay 3, sec 1], with guilt, shame, bad conscience, with a total lack of meaning, a “Why?” with no answer [The Will to Power, sec 2], with a denaturalized, detached, idealistic condemnation of any action [The Will to Power, sec 37], and with irritation at sensuality, marriage, “riches, princes … women … the light” and anything else one could possibly “do without” [On the Genealogy of Morals, essay 3, sec 8]. Nietzsche felt certain that a crisis of crushing nihilism would expose the “shabby origin” of Christian values [The Will to Power, sec 7], and that, after a transitory stage of nihilism, faith in God and then all judgmental morality would disappear. 
“It is impossible and no longer permissible to accuse and judge the individual, the poor wave in the necessary wave play of becoming,” he stated [Assorted Opinions and Maxims, sec 33]. Nietzsche clearly wished to see the introduction of a stronger species of post-morality humans come out of all of this, even if it meant the extinction of Homo sapiens as it currently exists [Daybreak, sec 456; On the Genealogy of Morals, essay 2, sec 12]. This would perhaps ensure a victory for ancient Rome over Biblical Jerusalem, an end to the ageless battle between what Nietzsche saw as the “good and bad” noble morality of the former and the “good and evil” slave morality of the latter [On the Genealogy of Morals, essay 1, sec 16]. It should be emphasized that Nietzsche repeatedly denied alignment with any Darwinian or Hegelian notions of evolution, and that he expressed belief in constant change, but not in any sense of global progress. Zarathustra did, however, present a clear paradigm for “moral” progress for a given individual [Thus Spoke Zarathustra, book 1, “Of Three Metamorphoses”]. He presented a poem in which a transcendent individual starts first as a weight-bearing spirit, a camel, a being saturated with challenges and with other people’s values. This camel eventually transforms into a fierce lion who has slain the dragon of “thou shalt,” emancipating the spirit from the burden of categorical morality. Having achieved this freedom, the spirit finally becomes a child, “innocence and forgetfulness, a new beginning, a sport, a self-propelling wheel, a first motion, a sacred Yes.” While Nietzsche clearly not only rejected but attacked almost all of the moral systems he encountered, Zarathustra’s story illustrates that he was not without “morality” or “values” in a broader sense. Nietzsche was, after all, a philosopher, a writer, and an academic; how could he not have some guidelines and signposts for the living of life? But Nietzsche had more than just a few suggestions; he actually had a well-developed “morality,” one without God and without good and evil, as evidenced by the beautiful image of the value-creating infant. He has been seen as a forerunner to the existentialism of the twentieth century because he believed in finding one’s own path,3 and in creating one’s own morals that are “life-advancing, life-preserving, species-preserving”. He believed in affirming this here-and-now Earthly reality, with all of its messy complication, as emphasized by his doctrine of eternal recurrence, and in despising any perfect beyond. He believed in following a will-to-health, an overcoming of illness and handicap, and a self-overcoming in which one harnesses intelligence and passions towards creative goals, as exemplified by Übermenschen such as da Vinci and Goethe. He believed in not holding resentments or debts, in honesty and straightforwardness, and in being strong enough to hear and tell ugly truths. He believed in adventure and experimentation, and in a “Russian fatalism” in which one accepts and loves whatever fate one is dealt. He believed in saying “I teach No to all that makes weak – that exhausts, [and saying] Yes to all that strengthens, that stores up strength” [The Will to Power, sec 54]. He believed in small things, like climate and cuisine, over what he considered the falseness of metaphysics [Ecce Homo, Why I Am So Clever, sec 10], and he believed in victory over God and over nothingness [Ecce Homo, essay 2, sec 24]. 
He believed in “war and victory … conquest, adventure, danger, and even pain … the keen air of heights … ice and mountains in every sense … sublime wickedness, an ultimate, supremely self-confident mischievousness in knowledge … great health!” [Ecce Homo, essay 2, sec 24]. Nietzsche’s moral system is also remarkable among those of Western philosophers due not only to its orgiastic life-affirmation, but also to its related qualities of “naughtiness” and holism. It dares to laugh, to test, to do the forbidden and to answer “let’s try it” to any challenge. Further, being beyond good and evil, it embraces the whole without exclusion; it can see the strengths of its enemies (“Man would rather will nothingness than not will”), and it can see that the virtuous and the evil are not as distinct as many have long supposed. The Swiss psychiatrist Carl Gustav Jung, a student of Sigmund Freud, was, like Freud, an avid reader of Nietzsche. When Jung said that Christianity was more of a Quadrinity than a Trinity, since one must include Satan as part of the conception of a Divine whole, one can clearly see the influence of the off-beat, inclusive, and reality-affirming morality that Friedrich Nietzsche, the foe of traditional morality, did embrace.
1. Nietzsche’s commitment to multiple, dynamic, and non-eternal interpretations, and to the idea that “it is absolutely not desirable that all men should act in the same way” [Human, All Too Human, sec 25], helps explain his toying with polytheism, and with the “multiplicity of norms” implied by multiple deities [The Gay Science, sec 143].
2. A belief in the non-reality of morality is just a fraction of Nietzsche’s overall commitment to perspectivism. He believed that humans can never come to an absolute eternal Truth, and that all beliefs are appearance and “lies” that are constantly changing and self-overcoming without ever arriving at universal or ultimate Truths.
3. Zarathustra took the Zen master-like step of demanding that his disciples leave him and question his teachings [Thus Spoke Zarathustra, part 1].
an article by Niels Christian Hvidt
"On Holy Saturday believers gather in great crowds in the Church of the Holy Sepulchre. For on this day fire comes down from Heaven and puts fire on lamps in the church." This ceremony, described above by one of the many pilgrims visiting Jerusalem during Easter, has occurred yearly for centuries and includes the unexplainable event of the Holy Fire igniting candles and oil lamps. Orthodox Christians cherish it as the greatest of miracles and see it as a continuous reminder of the Lord's resurrection. The author and his companions travelled to Israel to be present at this ceremony and to speak to some of the persons who have witnessed the miracle and whose lives it has inspired. The ceremony of the Holy Fire has taken place every year for nearly fifteen centuries, at the same time, in the same manner, and at the same location. It draws ever-growing crowds of pilgrims to the Holy City each Easter season. The ceremony surrounding "The Miracle of the Holy Fire" appears to be one of the oldest recurring Christian ceremonies in the world. From the fourth century AD all the way up to our own time, sources recall the ceremony. The church historian Eusebius, writing in his Vita Constantini around 328, records an interesting occurrence in Jerusalem at Easter in the year 162. When the churchwardens were about to fill the lamps to make them ready to symbolise the resurrection of Christ, they suddenly noticed that there was no more oil left to pour in the lamps. Upon this, Bishop Narkissos ordered the lamps to be filled with water. He then told the wardens to ignite them. In front of the eyes of all present every single lamp burned as if filled with pure oil. The Orthodox Church believes that this miracle, which predates the construction of the Holy Sepulchre in the fourth century, is related to the Miracle of the Holy Fire. They admit that the two differ, as the former was a one-time occurrence while the Miracle of the Holy Fire occurs every year. However, they have in common the premise that God has produced fire where, logically speaking, there should have been none. Around 385 Etheria, a noblewoman from Spain, traveled to Palestine. In the account of her journey, she speaks of a ceremony by the Holy Sepulchre of Christ, where a light comes forth (ejicitur) from the small chapel enclosing the tomb, by which the entire church is filled with an infinite light (lumen infinitum). It is not clear whether her words refer to an alleged miraculous occurrence or to the bishop, who emerged from the tomb with the flame, possibly ignited from a perpetual flame inside the sepulchre chapel. Things become clearer in an itinerary written by a monk named Bernhard after his journey to Jerusalem in the year 870. He describes an angel who came down after the singing of the "Kyrie Eleison" and ignited the lamps hanging over the burial slab of Christ, whereupon the Patriarch passed the flame to the bishops and to everyone else in the church. In 926 the Arabic historian Ma'sûdi travelled to Palestine, and his account describes a similar event: on Easter Saturday Christians gathered from the entire country at the sepulchre, as on that particular day fire came down from heaven igniting the candles of those present. "The Chapel with the tomb where the fire first proceeds." Different sources reveal varying practices around the ceremony of the Holy Fire. 
Ancient and modern sources alike relate that pilgrims see the fire not only inside the Holy Sepulchre but also in Saint James Church next to the sepulchre itself, although the basic elements of the miraculous ignition of candles remain the same. The Russian abbot Daniel, in his itinerary written in the years 1106-07, presents the "Miracle of the Holy Light" and the ceremonies that frame it in detail. According to Daniel, the night before the miracle, churchwardens cleaned the church and all the lamps inside it. They then filled the lamps with pure oil and left them darkened. Daniel reports that the tomb was sealed with wax at the second hour of the night, and remained sealed with the closed oil lamps standing on the tomb… "the Greek lamps being there where the head lay, and that of Saint Sabas and all the monasteries in the position of the breast." While the doors remained sealed, the entire church waited for the Holy Fire. The next day, after the fire had come, the "Bishop, followed by four deacons, then opened the doors of the Tomb and entered with the taper of Prince Baldwin so as to light it first with the Holy Light". Daniel concludes, "We lighted our tapers from that of the Prince, and so passed on the flame to everyone in the church". It appears that during some vigils pilgrims waited for hours for the fire to come, as it did not always appear at the same hour. Thus Theoderich, who wrote his account in 1172, says that sometimes the Holy Fire appeared about the first hour, sometimes about the third hour, the sixth, the ninth hour, or even so late as the time for Compline. Theoderich also admits that the fire would appear first in a variety of places: sometimes in the Holy Sepulchre, sometimes in the Temple of the Lord, and sometimes in the Church of St. John outside the Holy Sepulchre itself. "Orthodox Christians have celebrated the ceremony of the Holy Fire for many centuries. It is considered the greatest of miracles." The number of similar testimonies has increased along with the growing number of pilgrims going to the Holy Land, producing an uninterrupted flow of first-hand accounts right up to our own times. However, the report written by the English chronicler Gautier Vinisauf deserves special attention, as it relates a very interesting anecdote about the ceremony as it occurred in the year 1192. In 1187, the Saracens under the direction of Sultan Salah ad-Dîn took Jerusalem. In 1192, the Sultan desired to be present at the celebration, even though he was not a Christian. Gautier Vinisauf tells us what happened: "On his arrival, the celestial fire descended suddenly, and the assistants were deeply moved. The Christians demonstrated their joy by chanting the greatness of God, the Saracens on the contrary said that the fire which they had seen to come down was produced by fraudulent means. Salah ad-Dîn, wishing to expose the imposture, caused the lamp, which the fire from heaven had lighted, to be extinguished, but the lamp relit immediately. He caused it to be extinguished a second time and a third time, but it relit as of itself. Thereupon, the Sultan, confounded, cried out in prophetic transport: 'Yes, soon shall I die, or I shall lose Jerusalem.' This prophecy was accomplished, for Salah ad-Dîn died the following Lent." But what exactly happens in the Holy Sepulchre Church on Easter Saturday? Why does it have such an impact on the Orthodox tradition? 
And why does it seem as if nobody has heard anything about this miracle in Protestant and Catholic countries when it in many ways is more stunning than many Western miracles ? In fact, the miracle still occurs today in the Church of the Holy Sepulchre in much the same manner as medieval sources reported it. It is no coincidence that millions of believers consider this the holiest place on earth: theologians, historians and archaeologists believe it includes both Golgotha, the little hill on which Jesus Christ was crucified, as well as the "new tomb" near Golgotha that received Christ's dead body, according to the Gospel account. It is on this same spot that Christ rose from the dead. Since Constantine the Great built The Holy Sepulchre Church in the middle of the fourth century, the church has been destroyed many times. The Crusaders constructed the church that we see today. Around Jesus' tomb was erected a little chapel with two rooms, one little room in front of the tomb and the tomb itself, which holds no more than four people. It is this chapel that is the centre of the miraculous events. Being present at the celebration fully justifies the term "event," for on no other day of the year is the Holy Sepulchre Church so packed than on Orthodox Easter Saturday. If one wishes to enter it, one has to reckon with six hours of queuing, and each year hundreds of people cannot enter because the crowds are so large. Pilgrims come from all over the world, the majority from Greece, but in recent years increasing numbers from Russia and the former Eastern European Countries have also come. In order to be as close to the tomb as possible, the pilgrims camp around the tomb-chapel on Good Friday afternoon in anticipation of the wonderful event on Holy Saturday. The miracle happens at 2:00 PM, but by 11:00 AM the Church is bustling with activity. Every year, small fights occur between the different groups of Christians in the Church. If one finds no other reason why Christians ought to seek greater unity it would be enough to go to Jerusalem for the ceremony of the Holy Fire to observe the confusion and lack of peace that reigns in the Holy Sepulchre among the many Christian denominations present. "Prior to the arrival of the miraculous fire, Palestinian Christians dance according to their custom at the ceremony." From around 11:00 AM till 1:00 PM, the Christian Arabs sing traditional songs in loud voices. These songs date back to the Turkish occupation of Jerusalem in the 13th Century, a period in which Christians were not allowed to sing their songs anywhere but in the Churches. "We are the Christians, this we have been for centuries and this we shall be for ever and ever, Amen!" they sing at the top of their voices, accompanied by the sound of drums. The drum-players sit on the shoulders of others who dance ferociously around the Sepulchre Chapel. But at 1:00 PM the songs fade out, leaving silence-a tense and loaded silence electrified by the anticipation of the manifestation of God that all are waiting to witness. "Every year Israeli authority check the tomb so it does not conceal any lights, whereafter it is sealed until the arrival of the patriarch." At 1:00 PM a delegation of the local authorities elbows through the crowds. Even though these officials are not Christian, they are part of the ceremonies. In the times of the Turkish occupation of Palestine they were Moslem Turks; today they are Israelis. 
For centuries the presence of these officials has been an integral part of the ceremony, as their function is to represent the Romans in the time of Jesus. The Gospels speak of Romans who went to seal the tomb of Jesus, so that his disciples would not steal his body and claim he had risen. In the same way the Israeli authorities on this Easter Saturday come and seal the tomb with wax. Before they seal the door it is customary that they enter the tomb to check for any hidden source of fire, which could produce the miracle through fraud. Just as the Romans were to guarantee that there was no deceit after the death of Jesus, likewise the Israeli local authorities are to guarantee that there will be no trickery in the year 2000. "The Patriarch enters and encircles the tomb chapel three times in procession." After the tomb has been checked and sealed, all people in the Church chant the Kyrie Eleison. At 1:45 PM the Patriarch enters the scene. In the wake of a large procession holding liturgical banners, he circles the tomb three times and then stops in front of its entrance. Then he takes off his royal liturgical vestments, leaving upon himself only his white alb as a sign of humility and respect in front of the portent of God that he is about to witness. All the oil-lamps have been blown out prior to the ceremony, and now all remaining artificial light is extinguished, so that the Church is enveloped in darkness. Holding two large unlighted candles, the patriarch enters the Chapel of the Holy Sepulchre: first the small room in front of the tomb, and from there the tomb itself. To understand what happens when the patriarch enters the inner room, we need to hear his personal testimony. The following testimony is that of His Beatitude, Patriarch Diodorus I. "Three former patriarchs in the long chain of those Greek-Orthodox leaders who first receive the flame."
Interview with His Beatitude Patriarch Diodorus I on the Miracle of the Holy Fire
His Beatitude Patriarch Diodorus I was born in 1923. He first came to Jerusalem in 1938 and attended the Miracle of the Holy Fire every year thereafter. In 1981 he was elected Patriarch and was thus the key witness to the Holy Fire 19 times until his death in December 2000, as the Greek-Orthodox patriarchs always enter the little tomb chapel where the flame first occurs. I spoke with him at the Orthodox Easter, 2000. "His Beatitude Diodorus I during interview with the author." "Your Beatitude, what actually occurs when you enter the tomb on Holy Saturday during the ceremony of the Holy Fire?" "After all the lights are extinguished, I bow down and enter the first chamber of the tomb. From here I find my way through the darkness to the inner room of the tomb where Christ was buried. Here, I kneel in holy fear in front of the place where our Lord lay after his death and where he rose again from the dead. Praying in the Holy Sepulchre is in itself always a very holy moment for me, in a very holy place. It is from here that he rose again in glory and spread his light to the world. John the Evangelist writes in the first chapter of his gospel that Jesus is the light of the World. Kneeling in front of the place where he rose from the dead, we are brought within the immediate closeness of his glorious resurrection. Catholics and Protestants call this church "The Church of the Holy Sepulchre." We call it "The Church of the Resurrection". 
The resurrection of Christ for us Orthodox is the centre of our faith, as Christ has gained the final victory over death, not just his own death but the death of all those who will stay close to him. "I believe it to be no coincidence that the Holy Fire comes in exactly this spot. In Matthew 28:3, the Gospel says that when Christ rose from the dead, an angel came, dressed all in a fearful light. I believe that the intense light that enveloped the angel at the Lord's resurrection is the same light that appears miraculously every Easter Saturday. Christ wants to remind us that his resurrection is a reality and not just a myth; he really came to the world in order to offer the necessary sacrifice through his death and resurrection so that man could be re-united with his creator. "In the tomb, I say particular prayers that have been handed down to us through the centuries and, having said them, I wait. Sometimes I may wait a few minutes, but normally the miracle happens immediately after I have said the prayers. From the core of the very stone on which Jesus lay an indefinable light pours forth. It usually has a blue tint, but the colour may change and take on many different hues. It cannot be described in human terms. The light rises out of the stone as mist may rise out of a lake - it almost looks as if the stone is covered by a moist cloud, but it is light. This light behaves differently each year. Sometimes it covers just the stone, while other times it gives light to the whole sepulchre, so that people who are standing outside the tomb and look into it see the tomb filled with light. The light does not burn - I have never had my beard burnt in all the sixteen years I have been Patriarch in Jerusalem and have received the Holy Fire. The light is of a different consistency than the normal fire that burns in an oil-lamp. "At a certain point the light rises and forms a column in which the fire is of a different nature, so that I am able to light my candles from it. When I thus have received the flame on my candles, I go out and give the fire first to the Armenian Patriarch and then to the Coptic. Thereafter I give the flame to all people present in the Church." "The Patriarch proceeds from the tomb with the ignited candles." "How do you yourself experience the miracle and what does it mean to your spiritual life?" "The miracle touches me just as deeply every single year. Every time it is another step towards conversion. For me it is of great comfort to consider Christ's faithfulness towards us, which he displays by giving us the holy flame every year in spite of our human frailties and failures. We experience many wonders in our churches, and miracles are nothing strange to us. It happens often that icons cry, when Heaven wants to display its closeness to us; likewise we have saints, to whom God gives many spiritual gifts. But none of these miracles have such a penetrating and symbolic meaning for us as the Miracle of the Holy Fire. The miracle is almost like a sacrament. It makes the resurrection of Christ as real to us as if he had died only a few years ago." "Spreading of the flame" After the Patriarch passes the fire to the Armenian and Coptic metropolitans, they in turn pass it through holes in the walls of the tomb chapel to runners who are ready to carry it swiftly to the various quarters of the denominations in the church. Thus, the fire spreads like brush-fire. 
While the patriarch remains inside the chapel kneeling in front of the stone, outside the tomb it is dark but far from silent. One hears a rather loud mumbling, and the atmosphere is very tense. When the Patriarch finally emerges with the lit candles shining brightly in the darkness, a roar of jubilation resounds in the Church. As with any other miracle, the Miracle of the Holy Fire is a matter of faith and conviction, and there are those, both non-Orthodox and Orthodox, who do not believe it actually happens. Both Greek and Latin authors have proposed the idea that the miracle is a fraud and nothing but a masterpiece of Orthodox propaganda. They suggest that the Patriarch has a lighter or matches inside the tomb and lights his candles himself. Such understandable criticism, however, is confronted with a number of problems. Matches and other means of ignition are recent inventions. Not many decades ago, lighting a fire was an undertaking that lasted much longer than the few minutes during which the Patriarch is inside the tomb. One could suggest that he had an oil lamp burning inside, from which he kindled the candles, but the Israeli authorities have always confirmed that they checked the tomb and found no light inside it. The best arguments against fraud, however, are not the testimonies of the various patriarchs but those of the thousands of independent pilgrims who over the centuries have written of how they saw the blue light outside the tomb spontaneously lighting the candles in front of their eyes without any possible explanation. Often, closed oil-lamps hanging in different places in the church, beyond the reach of the pilgrims, have caught fire by themselves. And the person who experiences the miracle at close range, seeing the fire igniting the candle or the blue light swaying through the church, usually leaves Jerusalem changed. For many pilgrims I spoke to who attended the ceremony, there was a "before and after" the Miracle of the Holy Fire. "Pilgrims express profound joy over the yearly arrival of the Holy Fire." Several books have been written in Greek containing testimonies of those who experienced the miracle. However, none of these contain testimonies from recent decades. Archbishop Alexios of Tiberias has taken upon himself the task of collecting more current testimonies from pilgrims who had miraculous experiences during the ceremony of the Holy Fire. Over four years, he has gathered these testimonies, signed by the pilgrims, and he aims to publish them in the near future. Archbishop Alexios, who has participated in the ceremony every year since 1967, decided to do this work after an experience related to the Holy Fire in 1996. "Archbishop Alexios of Tiberias, who has participated in the ceremony for over 35 years." "After the ceremony, I went home to my apartment situated in the Greek Orthodox Patriarchate, up the hill west of the Holy Sepulchre", he explains. "From here I looked out of my window, and suddenly I saw a great luminous red cross over the Dome of the Holy Sepulchre. I blinked my eyes and looked again, yet the cross remained. I rushed on to the roof of the house, thinking the cross might be the product of the sun's reflections in the golden cross standing on the roof. However, once I had arrived on the roof, I saw the same phenomenon that I had seen from the window: many meters above the dome's golden cross, another cross of red light was hovering, extending its rays far beyond the dome itself.
"This experience was very profound for me," Archbishop Alexios continues. "I have attended the ceremony since I was young and have seen and experienced many unexplainable things there. But this sign was so clear that today I can never doubt God's miraculous interventions. If people say they don't believe in the Miracle of the Holy Fire, I am not the one to try to correct them, but I know they are wrong." Metropolitan Vasilis, delegate of the Greek Orthodox Patriarch of Jerusalem, Diodorus I, confirms Archbishop Alexios' account. "I have been in Jerusalem since 1939 when I came to the city at the age of fifteen. I have attended the ceremony of the Holy Fire during all these years, and have thus been a witness to the miracle 61 times. For me it is not a question of whether I believe in the miracle or not. I know it is true. Like many other believers, I testify that the Holy Fire does not burn. Many times I have passed the Holy Flame under my beard, and it was not burned. Year after year, I have seen the immediate and spontaneous lighting of the candles that the believers held enclosed in their hands, and I have heard many testimonies of people who either had their candles lit or saw the miraculous flame as it passed through the church of the Holy Sepulchre. To me the miracle is very important, especially as a memorial of the resurrection of Our Lord. The Holy Bible says that when the Lord rose from the dead, his tomb was bright, shining as if it were day. I believe it is in memory of this most central element of our faith that the Lord gives this marvelous sign, and so that it may never be forgotten!" "The flame is received with the same enthusiasm every year." Each year, pilgrims have reported seeing the blue flame moving and acting freely, igniting closed candles and oil lamps in the Church. Mr. Souhel Thalgieh, a young engineer from Bethlehem, is another witness. Mr. Thalgieh has been present at the ceremony of the Holy Fire since his early childhood. In 1996 he was asked to film the ceremony from the balcony of the dome of the Church. Present with him on the balcony were a nun and four other believers, including the mother of Metropolitan Timothy. The nun stood at the right hand of Thalgieh. On the video one can see how he aims the camera down at the crowds. At the designated moment, all lights are turned off and the Patriarch enters the tomb to receive the Holy Fire. While the Patriarch is still inside the tomb, one suddenly hears a scream of surprise and wonder from the nun standing next to Thalgieh. The camera begins to shake, and one hears the excited voices of the other people present on the balcony. The camera then turns to the right, capturing the cause of the emotion: a large candle, held in the hand of the Russian nun, has caught fire in front of all present, apparently before the patriarch came out of the tomb. With shaking hands she holds the candle while over and over making the sign of the Cross in awe of the portent she has witnessed. "Orthodox pilgrims consider the flame a great treasure." In another of the many testimonies, Archimedes Pendaki of Athens, Greece, reports that the experience of the miracle became the impetus that eventually led him to become an Orthodox priest. Father Pendaki experienced the miracle in 1983. In the preceding years, he had drifted further and further away from the Orthodox faith of his family, and only rarely did he enter a church. 
His mother, who was very religious, convinced him after much arguing to come to Jerusalem and witness the Miracle of the Holy Fire. While mother and son were standing in the Holy Sepulchre Church it so happened that the candle of Pendaki's mother lighted spontaneously before their eyes. Archimedes at first raged at her, accusing her of trickery to make him believe, but deep inside he knew very well that she would never invent such a thing. Furthermore she was not able to produce the portent herself. The event continued to disturb his thoughts until he could not ignore it any more, and the need to explore the faith of his youth in depth led him to the Holy Mountain of Athos. After some years, he decided to become a priest. In the year 2000, the blue flame again lighted the candles of many people. According to Archbishop Alexios, a monk was standing close to the door of the sepulchre. While the patriarch was still inside the tomb, the monk received the flame on his candle to the great astonishment of the people standing around him. From his candle, the fire spread on the side of the tomb. A young man from the Greek island of Rhodos testified that he saw the fire coming as a cloud above the monk, descending to light his candles. Fire and The Presence of God The Orthodox Christians are not the only ones to associate light with the presence and activity of God. In the Biblical writings, light often accompanies great miraculous works of God. About Moses' meetings with God on Mount Sinai the Bible says: "Mount Sinai was entirely wrapped in smoke, because Yahweh had descended on it in the form of fire. The smoke rose like smoke from a furnace and the whole mountain shook violently" (Ex 19,18 ff). Later in Exodus, it says: "To the watching Israelites, the glory of Yahweh looked like a devouring fire on the mountain top" (Ex 24,17). After Moses had stood face to face with God, his face shone so powerfully that he had to cover it, lest the people get hurt (Ex 34,29 ff). When Jesus was transfigured in front of the disciples on Mount Tabor, "the aspect of his face changed and his clothing became sparkling white" (Lk. 9,29). Likewise, after Jesus' resurrection, the women met by the grave "two men in brilliant clothes" (Lk 24,4). Light and the mighty works of God go hand in glove. "The fire is considered holy." The Church Fathers considered light to be a symbol of God, especially of God's love. Thus Gregory the Great (530-604) writes: "God is called light because he embraces the flames of his love-the souls in which he abides." In the same way, Orthodox Christians consider the Miracle of the Holy Fire a manifestation of God's power and of His presence. "We believe the flame to be holy", says Archbishop Alexios, "almost as a sacrament, ontologically related directly to God himself. The pilgrims move their hands back and forth over the flame and caress their faces with the hands that touched the flames." The Pan-Orthodox and Ecumenical Significance of the Ceremony The miracle is important not only to the individual Christians whose faith it strengthened, but also because it plays a very important ecumenical role. The ceremony takes place every year on the Orthodox Easter Saturday and is celebrated together with all the Orthodox Christian communities. There are many types of Orthodox Christians: Syrian, Armenian, Russian and Greek Orthodox as well as Copts. In the Holy Sepulchre Church alone there are seven different Christian Denominations, and all, except the Catholics, take part in the ceremony. 
The Orthodox Easter date is fixed according to the Julian Calendar, which means that their Easter normally falls on a different date than the Protestant and Catholic Easter, which is determined by the European Gregorian calendar. Thus in the year 2000 the Orthodox Easter fell one week after the Easter of the West. Since the schism between East and West in 1054, the "two lungs of the Body of Christ," as Pope John Paul II describes the Orthodox and Catholic communities, have lived separate existences. But for the first two hundred years after the schism, the Miracle of the Holy Fire had such unifying power that it gathered Catholics and Orthodox to celebrate the event together despite their differences. Only after 1246, when Catholic Christians left Jerusalem with the defeated Crusaders, did the Miracle of the Holy Fire become a purely Orthodox ceremony, as the Orthodox remained in Jerusalem even after the Turks' occupation of Palestine. Metropolitan Timothy, who was the Patriarchate's representative to the recent ecumenical celebration of the opening of the Holy Doors of Saint Paul's Cathedral in Rome, said to me that the ecumenical and unifying power of the Holy Fire is quite exceptional. "Until the thirteenth century the entire church celebrated the ceremony of the Holy Fire," he says. "Even after the Catholics left Jerusalem with the crusaders it has remained a unifying ceremony for those of us who stayed here, that is, for all the different branches of the Orthodox world. "The flame first comes in a miraculous way from Christ to the Greek Orthodox Patriarch inside the Tomb. He gives it to the Armenian and Coptic metropolitans, who hand it on to the remaining communities, and they in turn spread it to their own people. "From them it passes beyond the Holy Sepulchre to every corner of the Orthodox world. After the ceremony is over, believers from all Israel and Palestine carry it to the homes of their relatives. Pilgrims who come from far away make provisions, buying special oil-lamps with which they carry the flame to their countries. Olympic Airways helps the Orthodox to distribute the flame to many countries, especially to Alexandria in Egypt and to Russia, but also to Georgia, Bulgaria, and the USA. Each year we write letters of recommendation to the Israeli Ministry of Religious Affairs, which in turn assists pilgrims who carry the lanterns with the Holy Fire through customs and onto their respective aircraft. This is how important the spreading of the flame is to us. It is holy, and it keeps reminding us of how the one Holy Spirit is present in all the parts of the Body of Christ. Like blood being pumped by the heart into all members of a body, so the fire spreads from Jerusalem to all parts of the Orthodox community, reminding the faithful of the origins and unity of their faith. It has a tremendous unifying power for the Orthodox communion," Metropolitan Timothy concluded. "Archbishop Alexios of Tiberias brings the Holy Fire to Athens and is received with the honours of a statesman."
Unknown in the West
One might ask why the Miracle of the Holy Fire is hardly known in Western Europe. In the Protestant areas this may be explained by the fact that there is little traditional teaching regarding miracles; people don't really know how to approach them, and they don't take up much space in newspapers. The Catholic Church, however, has a long tradition of miracles, so why is the Miraculous Fire not better known among Western Catholics? 
One important reason may be that the ceremony is performed only by Orthodox Christians, on the Orthodox Holy Saturday; hence, Christians of other communities may consider it an internal Orthodox affair. Also, apologetic motives could play a role. Some Orthodox might insist that the miracle occurs in the presence of Orthodox Christians because the Orthodox Church is the only legitimate Church of Christ in the world. This tendentious explanation would cause a certain uneasiness in Catholic and Protestant circles. However, Archbishop Alexios disagrees with this stance: "The miracle does not prove anything of the sort. It is not a weapon of proselytism, creating division. It is not a proof that we are the only legitimate Christians. Rather, for us Orthodox, the miracle is a source of joy as it leads to greater unity in the Orthodox world, uniting us around this event. But not only this. I personally hope that the miracle can augment the awareness among Catholic Christians of how God is alive and active in the Orthodox Church, just as we are aware that he is present and active in the Catholic Church. Christ is one and works wonders for all his children. How I wish that this awareness of the oneness of Christ and his wondrous creativity would be an incentive towards full unity between us Christians."
There is renewed interest in understanding the potential for recycling industrially processed food waste to create new, safe and effective products for other applications; e.g., energy, pharma, nutraceuticals and cosmetics, to name the top few.1 An increase in food processing in the last 50 years has slowly and consistently generated a large amount of non-edible by-products, including fruit and vegetable peels, seeds and leaves. Additionally, disposing of waste water from food processing in a manner that avoids environmental pollution is a management issue.2 The idea of being able to use every part of a raw material without discarding any of its elements is not new. In 1993, in his controversial and revolutionary book, "The Ecology of Commerce," Paul Hawken cited examples of production integration in manufacturing. He described industrial hubs created around specific raw materials and their by-products to use every part of materials without waste.3 Hawken called this process industrial ecology, where "pollution is eliminated by tailoring manufacturing by-products so that they become the raw materials of subsequent processes." Different industries developing products for diverse markets created local consortia, where by-products were recycled into different finished goods. In developed countries, recycling ingredients is not an option anymore; it's an obligation. Increasing consumption is associated with limited raw material options, which pushes the sustainability footprint of a raw material to the edge. Indeed, if different industries were to leverage the use of a raw material, it would limit the need to source the same raw material multiple times and would increase its sustainability. As an example, the cosmetic industry could extract polyphenol-rich materials from parts of the fruit not used by the food industry. These plant ingredients provide antioxidant benefits. Companies producing apple juice that remove the apple skin and seeds could sell these materials to companies producing polyphenol-rich apple extracts for cosmetic, cosmeceutical, nutraceutical or pharma applications. In the early 2000s, research exploded in this area at different universities funded by local governments all over the world. The aim is to understand and develop processing protocols to valorize these wasted by-product ingredients. A recent example is the BioWaste program funded by the European Commission Department of Agriculture. This program includes projects such as Apopros and Transbio. Apopros aims to "develop eco-efficient, bio-mechanical processing solutions to enrich intermediate fractions from industrial high protein and oil-containing process residues originating from agriculture sub-products." Transbio is focused on developing new products from the fruit and vegetable processing industry using environmentally friendly biotechnology solutions.4 By-products including seeds, stems, leaves, skins and unused pulp are usually discarded. The total amount of these by-products can range from as little as 3% to as much as 60% of the total plant food, e.g., in the case of artichoke.5 The challenge in recovering these by-products is finding the best and most environmentally friendly extraction technique possible to achieve the maximum yield without compromising the stability of the extract and its components. Analytical chemistry of the waste and procedures to valorize it would then follow. 
The chemical composition of a by-product is similar to that of its edible parts,6 so it did not take long to discover that most of the discarded natural by-products have a similar, if not higher, health value than their edible processed counterparts.7, 8 Research on the properties of by-products and their applications has been published; some is presented here. Examples of food processing waste for cosmetics are given for coffee, tomato, olive and citrus. These ingredients have significant consumption worldwide; therefore, their waste has a negative impact on the environment and economy. The possibility to recycle this waste could lower this impact and increase the sustainability of these ingredients. Coffee is one of the world's most popular drinks and it has a strong commercial value. It is second only to petroleum as the most traded commodity worldwide.9 The coffee industry generates a hefty amount of waste, including unused coffee beans, spent coffee grounds and silver skin/husks.5 Like other food waste,10 spent coffee grounds have been investigated for energy production.11, 12 In some cases, wet coffee processing waste is not properly disposed of, causing serious environmental and health issues. Therefore, its conversion into bio-fuel would not only help the economy, but also prevent damage to the environment and reduce health problems.13 To investigate the use of coffee waste for medicinal and cosmetic purposes, several labs have performed analytical studies to identify the major components in unused coffee beans, spent coffee grounds and silver skin/husks. A series of healthy molecules was found, particularly phenols and polyphenols such as caffeoylquinic acids, caffeic acid and ferulic acid.14-17 Additionally, the waste contains 15% oil and is rich in linoleic acid17, 18 and phytosterols.17 Further studies have demonstrated the strong biological activity of these molecules, particularly as antioxidants.17, 19 Subsequently, the antioxidant properties of coffee waste extract have been assessed: in vitro, to protect against accelerated aging;20 in vivo, in animal models, to protect against UVB, limit photo-aging and/or stimulate skin repair;21, 22 and clinically, in a finished product, to increase skin hydration.23 About one-fourth of the world's industrial tomato processing goes to tomato paste and to peeled and unpeeled tomatoes, either chopped or in purees, juices, ketchup, soups, etc.5, 24 From this, tomato by-products including unused pulp, skin and seeds are produced. Tomato pomace is a popular by-product rich in tomato skin and water. 
It represents 4% of the fruit weight.24 Apart from water, the main components of tomato pomace are fibers (60% of its dry weight), but pomace also contains proteins, pectins, fat, minerals and healthy antioxidants, including the carotenoid lycopene.25, 26 Commercially interesting ingredients such as proteins, pectins and antioxidant molecules (e.g., caffeic acid, ferulic acid, chlorogenic acids, quercetin-3-β-O-glycoside, quercetin and the aforementioned lycopene) have been recovered from the pomace through different means,26-29 albeit with a series of challenges, especially for an unstable material such as lycopene.26 Ingredients extracted from tomato waste demonstrate biological activity as antioxidants; in vitro, they also modulate cell growth and impart anti-mutagenic properties.29, 30 Another interesting application of protein-rich tomato waste is its fermentation into amino acids and peptides with antioxidant and anti-inflammatory activities.31 Investigations have identified that the anti-inflammatory properties come from naringenin-chalcone in the tomato skin. This molecule also was shown to effectively reduce edema in animal models.32 The production of olive oil also generates massive amounts of waste, including olive oil mill waste water (OMWW), olive pomace and filter cake. The Mediterranean region produces 95% of the world's olive oil. OMWW alone represents more than 30 million cubic meters of waste worldwide from just two to three months of olive oil production.33 Olive oil waste is extremely rich in polyphenols.34, 35 In fact, during olive oil production, only 2% of the total phenols of the olive fruit partition into the oil, while most phenols partition into the waste: 53% into the OMWW and 45% into the solid pomace. This is due to the olive fruit phenols being more water-soluble.36 Traditionally, the waste is discarded in soil or marine water, building up toxic concentrations of polyphenols of 0.1–18 g/L. Furthermore, the presence of ammonium and phosphorus in the waste also affects the bio-system, inhibiting plant and microorganism growth.37, 38 The relatively low cost of phenols from olive by-products makes it worth recovering them from the waste, which would also decrease the toxic concentrations released into the environment. This could sustainably stimulate the economy, as the recovered phenols can be concentrated and sold for other applications. A complete analysis of the polyphenols recovered from olive fruit waste identified hydroxytyrosol, tyrosol, caffeic acid, vanillic acid, verbascoside, oleuropein, ferulic acid and p-coumaric acid.39 These polyphenols, especially oleuropein and hydroxytyrosol, were studied for their biological properties. They are strong antioxidants39, 40 and their activity is often associated with the health benefits of the Mediterranean diet.41 Further research has shown these polyphenols to inhibit cancer cell proliferation and protect DNA from oxidative damage.42 A fraction isolated from olive mill waste water, containing mainly hydroxytyrosol, verbascoside and tyrosol, also completely inhibited the growth of Gram-positive and Gram-negative bacteria.43 Additional studies have shown the antimicrobial potency of phenol-rich olive pomace powder,44 suggesting the use of olive oil production by-products for natural preservation. 
Finally, polyphenols from olive waste were tested on skin and showed a series of beneficial anti-aging effects, including the stimulation of collagen production, antioxidant activity and the inhibition of melanogenesis.45, 46 In conclusion, it is possible to recycle olive waste and its phenolic content to reduce environmental impact, while using it for applications in human health and skin care. Citrus production worldwide totaled 135 million tons in 2013. This included mandarin, lemon/lime and grapefruit, representing 28.6, 15.1 and 8.4 million tons, respectively.47 The edible part represents around 44% of this total; the remaining, non-edible 56% consists mainly of peel.5 The waste therefore represents a considerable volume, and being mostly solid, it is difficult to eliminate or recycle, so it is used mostly as cattle feed. Recent investigations have attempted to use citrus waste peel as a possible biofuel, after its decomposition at high temperatures.48 Researchers in Florida also have developed systems to recover several by-products from the same citrus peel; this waste can be fermented to produce ethanol, with the essential oil d-limonene obtained as a co-product.49 Citrus waste, including peel, molasses, seeds and leaves, has been found to contain flavonoids, carotenoids, phenolic compounds, vitamin E, phytosterols and essential oils.50-54 Many of these components have strong antioxidant activities,53-55 along with other biological properties. Peel extract, for example, has shown immune-stimulating activity in T lymphocytes.55 Compared with other peel extracts, citrus also has the strongest antimicrobial activity, especially against Gram-negative bacteria.56 This suggests using citrus peel extract as a preservative. Additionally, several studies have shown the capacity of citrus waste to protect or inhibit a series of mechanisms in skin models. In particular, an orange peel extract rich in flavonoids protected skin cells from UV-induced inflammation.57 Also, citrus waste-derived nobiletin inhibited MMP-9 activity in human dermal fibroblasts.58 Researchers in Korea assessed mandarin peel waste from juice processing and found it exhibited antioxidant, anti-melanogenesis and anti-inflammatory activities.59 The same researchers evaluated extract from a waste-derived citrus pressed cake and found that it blocked specific melanogenesis pathways.60 Citrus peel extracts have also shown anti-elastase and anti-collagenase activity in vitro, suggesting applications for anti-aging skin care.61 Further, in animal models, the orange peel-derived terpene d-limonene and its metabolite perillyl alcohol have shown significant anti-inflammatory effects on murine dermal inflammation and wound healing.62 In conclusion, citrus waste derivatives are promising ingredients for skin care, protection and repair products.
Pros and Cons of Recycled Skin Care
A lack of resources and the need for sustainable processes are increasingly today's reality. The technology is available to eliminate waste derived from industrially processed food, to reduce its environmental impact and to recycle valuable materials. But the reality is, this comes at a cost. The waste must be collected after its final use, then treated, transformed and/or extracted. 
And in the cases of coffee and olive, the waste obtained after processing can contain toxic compounds derived from oxidation, such as phytosterol-oxidized products (POP) from coffee silverskin17 or olive pruning residues (OPR) from olive processing.63 These waste derivatives would need to be neutralized or eliminated before recycling the waste for human applications. One common approach to neutralizing toxic compounds, which also transforms food waste into useful compounds such as polysaccharides and phenolic compounds, is to incubate them with microorganisms.63, 64 The bioactive compounds also must be extracted and concentrated, and to do so, some techniques require specific equipment, which can be costly.65 Although recent investigations have highlighted a cost advantage of extracting bioactive compounds from processed food waste,66 it is important to assess these costs specifically for the cosmetics industry, especially considering the need for pure molecules or concentrated bioactive fractions.
It is worth noting that the ingredients identified in this review for skin applications have mostly been phenolic compounds. However, other compounds recoverable from food processing waste include polysaccharides such as cellulose, pectins and oligosaccharides.67, 68 These entities also exhibit antioxidant activity68, 69 and can function as prebiotics for skin care applications.69, 70 In conclusion, waste from food processing is rich in healthy compounds that can be recovered and used in cosmetic formulations for a series of skin benefits. Recycling this waste would be a more sustainable approach to using raw materials, reducing the costs of disposal and the environmental impact, while bringing added value to the cosmetics industry.
- Laufenberg G, Kunz B, Nystroem M. Transformation of vegetable waste into value added products: (A) the upgrading concept; (B) practical implementations. Bioresource Technology 87(2):167–198, 2003 - Ferrentino G, Asaduzzaman M, Scampicchio MM. Current technologies and new insights for the recovery of high valuable compounds from fruits by-products. Crit Rev Food Sci Nutr May 31, 2016 - Hawken P. The ecology of commerce. Harper Collins Publishers, New York, NY, 1993 - KBBE.2011.3.4-01 - BioWASTE - Novel biotechnological approaches for transforming industrial and/or municipal biowaste into bioproducts – SICA http://cordis.europa.eu/programme/rcn/16978_en.html - Barbulova A, Colucci G, Apone F. New trends in cosmetics: by-products of plant origin and their potential use as cosmetic active ingredients. Cosmetics 2:82-92, 2015 - Mullen W, Nemzer B, Stalmach A, Ali S, Combet E. Polyphenolic and hydroxycinnamate contents of whole coffee fruits from China, India, and Mexico. J Agric Food Chem 61(22):5298-309, 2013 - Ribeiro da Silva LM, Teixeira de Figueiredo EA, Silva Ricardo NM, Pinto Vieira IG, Wilane de Figueiredo R, Brasil IM, Gomes CL. Quantification of bioactive compounds in pulps and by-products of tropical fruits from Brazil. Food Chem 143:398-404, 2014 - Ilahy R, Piro G, Tlili I, Riahi A, Sihem R, Ouerghi I, Hdider C, Lenucci MS. Fractionate analysis of the phytochemical composition and antioxidant activities in advanced breeding lines of high-lycopene tomatoes. Food Funct 7(1):574-83, 2016 - Akbas MY, Stark BC. Recent trends in bioethanol production from food processing byproducts. J Ind Microbiol Biotechnol 43(11):1593-1609, 2016 - Kondamudi N, Mohapatra SK, Misra M. Spent coffee grounds as a versatile source of green energy.
J Agric Food Chem 56(24):11757-60, 2008 - Battista F, Fino D, Mancini G. Optimization of biogas production from coffee production waste. Bioresour Technol 200:884-90, 2016 - Woldesenbet AG, Woldeyes B, Chandravanshi BS. Bio-ethanol production from wet coffee processing waste in Ethiopia. Springerplus 5(1):1903, 2016 - Murthy S, Naidu M. Sustainable management of coffee industry by-products and value addition—A review. Resour Conserv Recycl 66: 45–58, 2012 - Monente C, Ludwig IA, Irigoyen A, De Peña MP, Cid C. Assessment of total (free and bound) phenolic compounds in spent coffee extracts. J Agric Food Chem 63(17):4327-34, 2015 - Bravo J, Juániz I, Monente C, Caemmerer B, Kroh LW, De Peña MP, Cid C. Evaluation of spent coffee obtained from the most common coffeemakers as a source of hydrophilic bioactive compounds. J Agric Food Chem 60(51):12565-73, 2012 - Toschi TG, Cardenia V, Bonaga G, Mandrioli M, Rodriguez-Estrada MT. Coffee silverskin: characterization, possible uses, and safety aspects. J Agric Food Chem 62(44):10836-44, 2014 - Obruca S, Petrik S, Benesova P, Svoboda Z, Eremka L, Marova I. Utilization of oil extracted from spent coffee grounds for sustainable production of polyhydroxyalkanoates. Appl Microbiol Biotechnol 98(13):5883-90, 2014 - Andrade KS, Gonçalvez RT, Maraschin M, Ribeiro-do-Valle RM, Martínez J, Ferreira SR. Supercritical fluid extraction from spent coffee grounds and coffee husks: antioxidant activity and effect of operational variables on extract composition. Talanta 88:544-52, 2012 - Iriondo-DeHond A, Martorell P, Genovés S, Ramón D, Stamatakis K, Fresno M, Molina A, Del Castillo MD. Coffee silverskin extract protects against accelerated aging caused by oxidative agents. Molecules 21(6): E721, 2016. - Choi HS, Park ED, Park Y, Han SH, Hong KB, Suh HJ. Topical application of spent coffee ground extracts protects skin from ultraviolet B-induced photoaging in hairless mice. Photochem Photobiol Sci 15(6):779-90, 2016 - Affonso RC, Voytena AP, Fanan S, Pitz H, Coelho DS, Horstmann AL, Pereira A, Uarrota VG, Hillmann MC, Varela LA, Ribeiro-do-Valle RM, Maraschin M. Phytochemical composition, antioxidant activity, and the effect of the aqueous extract of coffee (Coffea arabica L.) bean residual press cake on the skin wound healing. Oxid Med Cell Longev 2016:1923754, 2016 - Rodrigues F, Sarmento B, Amaral MH, Oliveira MB. Exploring the antioxidant potentiality of two food by-products into a topical cream: stability, in vitro and in vivo evaluation. Drug Dev Ind Pharm 42(6):880-9, 2016 - Del Valle M, Cámara M, Torija ME. Chemical characterization of tomato pomace. J Sci Food Agric 86: 1232–1236, 2006 - Borguini RG, Torres EA. Tomatoes and tomato products as dietary sources of antioxidants. Food Rev Int 25: 313–325, 2009 - Topal U, Sasaki M, Goto M, Hayakawa K. Extraction of lycopene from tomato skin with supercritical carbon dioxide: effect of operating conditions and solubility analysis. J Agric Food Chem 54(15):5604-10, 2006 - Grassino AN, Brnčić M, Vikić-Topić D, Roca S, Dent M, Brnčić SR. Ultrasound assisted extraction and characterization of pectin from tomato waste. Food Chem 198:93-100, 2016 - Moayedi A, Hashemi M, Safari M. Valorization of tomato waste proteins through production of antioxidant and antibacterial hydrolysates by proteolytic Bacillus subtilis: optimization of fermentation conditions. J Food Sci Technol 53(1):391-400, 2016 - Valdez-Morales M, Espinosa-Alonso LG, Espinoza-Torres LC, Delgado-Vargas F, Medina-Godoy S.
Phenolic content and antioxidant and antimutagenic activities in tomato peel, seeds, and byproducts. J Agric Food Chem 62(23):5281-9, 2014 - Stajčić S, Ćetković G, Čanadanović-Brunet J, Djilas S, Mandić A, Četojević-Simin D. Tomato waste: Carotenoids content, antioxidant and cell growth activities. Food Chem 172:225-32, 2015 - Moayedi A, Mora L, Aristoy MC, Hashemi M, Safari M, Toldrá F. ACE-Inhibitory and antioxidant activities of peptide fragments obtained from tomato processing by-products fermented using Bacillus subtilis: effect of amino acid composition and peptides molecular mass distribution. Appl Biochem Biotechnol Jul 26, 2016 - Yamamoto T, Yoshimura M, Yamaguchi F, Kouchi T, Tsuji R, Saito M, Obata A, Kikuchi M. Anti-allergic activity of naringenin chalcone from a tomato skin extract. Biosci Biotechnol Biochem 68(8):1706-11, 2004 - Ledesma-Escobar CA, Luque de Castro MD. Coverage exploitation of by-products from the agrofood industry. In: Chemat F, Strube J, editors. Green extraction of natural products: theory and practice. Weinheim: Wiley-VCH, 2015. - Kalogerakis N, Politi M, Foteinis S, Chatzisymeon E, Mantzavinos D. Recovery of antioxidants from olive mill wastewaters: a viable solution that promotes their overall sustainable management. J Environ Manage 128: 749-758, 2013 - Frankel E, Bakhouche A, Lozano-Sánchez J, Segura-Carretero A, Fernández-Gutiérrez A. Literature review on production process to obtain extra virgin olive oil enriched in bioactive compounds. Potential use of byproducts as alternative sources of polyphenols. J Agric Food Chem 61: 5179–5188, 2013 - Rodis PS, Karathanos VT, Mantzavinou A. Partitioning of olive oil antioxidants between oil and water phases. J Agric Food Chem 50(3): 596-601, 2002 - Saadi I, Laor Y, Raviv M, Medin S. Land spreading of olive mill wastewater: effects on soil microbial activity and potential phytotoxicity. Chemosphere 66:75–83, 2007 - Pavlidou A, Anastasopoulou E, Dassenakis M, Hatzianestis I, Paraskevopoulou V, Simboura N, Rousselaki E, Drakopoulou P. Effects of olive oil wastes on river basins and an oligotrophic coastal marine ecosystem: a case study in Greece. Sci Total Environ 497-498:38-49, 2014 - Azaizeh H, Halahlih F, Najami N, Brunner D, Faulstich M, Tafesh A. Antioxidant activity of phenolic fractions in olive mill wastewater. Food Chem 134(4):2226-34, 2012 - Cardinali A, Cicco N, Linsalata V, Minervini F, Pati S, Pieralice M, Tursi N, Lattanzio V. Biological activity of high molecular weight phenolics from olive mill wastewater. J Agric Food Chem 58(15):8585-90, 2010 - Visioli F, Galli C. Biological properties of olive oil phytochemicals. Crit Rev Food Sci Nutr 42(3): 209–221, 2002 - Obied HK, Prenzler PD, Konczak I, Rehman AU, Robards K. Chemistry and bioactivity of olive biophenols in some antioxidant and antiproliferative in vitro bioassays. Chem Res Toxicol 22(1):227-34, 2009 - Tafesh A, Najami N, Jadoun J, Halahlih F, Riepl H, Azaizeh H. Synergistic antibacterial effects of polyphenolic compounds from olive mill wastewater. Evid Based Complement Alternat Med 2011: 431021, 2011 - Friedman M, Henika PR, Levin CE. Bactericidal activities of health-promoting, food-derived powders against the foodborne pathogens Escherichia coli, Listeria monocytogenes, Salmonella enterica, and Staphylococcus aureus. J Food Sci 78(2):M270-5, 2013 - Aissa I, Kharrat N, Aloui F, Sellami M, Bouaziz M, Gargouri Y. Valorisation of antioxidants extracted from olive mill wastewater.
Biotechnol Appl Biochem May 26, 2016 - Kishikawa A, Ashour A, Zhu Q, Yasuda M, Ishikawa H, Shimizu K. Multiple Biological Effects of Olive Oil By-products such as Leaves, Stems, Flowers, Olive Milled Waste, Fruit Pulp, and Seeds of the Olive Plant on Skin. Phytother Res 29(6):877-86, 2015 - Food and Agriculture Organization of the United Nations. http://faostat.fao.org/site/567/ - Santos CM, Dweck J, Viotto RS, Rosa AH, de Morais LC. Application of orange peel waste in the production of solid biofuels and biosorbents. Bioresour Technol 196:469-79, 2015 - Zhou W, Widmer W, Grohmann K. Developments in Ethanol Production from Citrus Peel Waste. Proc Fla State Hort Soc 121:307-310, 2008 - Yang X, Kang SM, Jeon BT, Kim YD, Ha JH, Kim YT, Jeon YJ. Isolation and identification of an antioxidant flavonoid compound from citrus-processing by-product. J Sci Food Agric 91(10):1925-7, 2011 - Aghel N, Ramezani Z, Beiranvand S. Hesperidin from Citrus sinensis cultivated in Dezful, Iran. Pak J Biol Sci 11(20):2451-3, 2008 - Kuroyanagi M, Ishii H, Kawahara N, Sugimoto H, Yamada H, Okihara K, Shirota O. Flavonoid glycosides and limonoids from Citrus molasses. J Nat Med 62(1):107-11, 2008 - Jorge N, Silva A, Aranha CP. Antioxidant activity of oils extracted from orange (Citrus sinensis) seeds. An Acad Bras Cienc 88(2):951-8, 2016 - Loizzo MR, Tundis R, Bonesi M, Sanzo GD, Verardi A, Lopresto CG, Pugliese A, Menichini F, Balducchi R, Calabrò V. Chemical Profile and Antioxidant Properties of Extracts and Essential Oils from Citrus × limon (L.) Burm. cv. Femminello Comune. Chem Biodivers 13(5):571-81, 2016 - Diab KA. In Vitro Studies on Phytochemical Content, Antioxidant, Anticancer, Immunomodulatory, and Antigenotoxic Activities of Lemon, Grapefruit, and Mandarin Citrus Peels. Asian Pac J Cancer Prev 17(7):3559-67, 2016 - Rakholiya K, Kaneria M, Chanda S. Inhibition of microbial pathogens using fruit and vegetable peel extracts. Int J Food Sci Nutr 65(6):733-9, 2014 - Yoshizaki N, Fujii T, Masaki H, Okubo T, Shimada K, Hashizume R. Orange peel extract, containing high levels of polymethoxyflavonoid, suppressed UVB-induced COX-2 expression and PGE2 production in HaCaT cells through PPAR-γ activation. Exp Dermatol 23 Suppl 1:18-22, 2014 - Kim JJ, Korm S, Kim WS, Kim OS, Lee JS, Min HG, Chin YW, Cha HJ. Nobiletin suppresses MMP-9 expression through modulation of p38 MAPK activity in human dermal fibroblasts. Biol Pharm Bull 37(1):158-63, 2014 - Kim SS, Lee JA, Kim JY, Lee NH, Hyun CG. Citrus peel wastes as functional materials for cosmeceuticals. J Appl Biol Chem 51(1):7–12, 2008 - Kim SS, Kim MJ, Choi YH, Kim BK, Kim KS, Park KJ, Park SM, Lee NH, Hyun CG. Down-regulation of tyrosinase, TRP-1, TRP-2 and MITF expressions by citrus press-cakes in murine B16 F10 melanoma. Asian Pac J Trop Biomed 3(8):617-22, 2013 - Apraj VD, Pandita NS. Evaluation of Skin Anti-aging Potential of Citrus reticulata Blanco Peel. Pharmacognosy Res 8(3):160-8, 2016 - Alessio PA, Mirshahi M, Bisson JF, Bene MC. Skin repair properties of d-Limonene and perillyl alcohol in murine models. Antiinflamm Antiallergy Agents Med Chem 13(1):29-35, 2014 - Koutrotsios G, Larou E, Mountzouris KC, Zervakis GI. Detoxification of Olive Mill Wastewater and Bioconversion of Olive Crop Residues into High-Value-Added Biomass by the Choice Edible Mushroom Hericium erinaceus. Appl Biochem Biotechnol 180(2):195-209, 2016 - Gonçalves C, Lopes M, Ferreira JP, Belo I. Biological treatment of olive mill wastewater by non-conventional yeasts.
Bioresour Technol 100(15):3759-63, 2009 - Xynos N, Abatis D, Argyropoulou A, Polychronopoulos P, Aligiannis N, Skaltsounis AL. Development of a sustainable procedure for the recovery of hydroxytyrosol from table olive processing wastewater using adsorption resin technology and centrifugal partition chromatography. Planta Med 81(17):1621-7, 2015 - Delisi R, Saiano F, Pagliaro M, Ciriminna R. Quick assessment of the economic value of olive mill waste water. Chem Cent J 10:63, 2016 - Gómez B, Gullón B, Yáñez R, Parajó JC, Alonso JL. Pectic oligosacharides from lemon peel wastes: production, purification, and chemical characterization. J Agric Food Chem 61(42):10043-53, 2013 - Jeddou KB, Chaari F, Maktouf S, Nouri-Ellouz O, Helbert CB, Ghorbel RE. Structural, functional, and antioxidant properties of water-soluble polysaccharides from potatoes peels. Food Chem 205:97-105, 2016 - Nadour M, Laroche C, Pierre G, Delattre C, Moulti-Mati F, Michaud P. Structural characterization and biological activities of polysaccharides from olive mill wastewater. Appl Biochem Biotechnol 177(2):431-45, 2015 - Gómez B, Gullón B, Remoroza C, Schols HA, Parajó JC, Alonso JL. Purification, characterization, and prebiotic properties of pectic oligosaccharides from orange peel wastes. J Agric Food Chem 62(40):9769-82, 2014
Origins of Dental Crowding and Malocclusions: An Anthropological Perspective
Jerome C. Rose, PhD; Richard D. Roblee, DDS, MS
The study of ancient Egyptian skeletons from Amarna, Egypt reveals extensive tooth wear but very little dental crowding, unlike in modern Americans. In the early 20th century, Percy Raymond Begg focused his research on the extreme tooth wear that coincided with traditional diets in order to justify tooth removal during orthodontic treatment. Anthropologists studying skeletons excavated along the Nile Valley in Egypt and the Sudan have demonstrated reductions in tooth size and changes in the face, including decreased robustness, associated with the development of agriculture, but without any increase in the frequency of dental crowding and malocclusion. For thousands of years, facial and dental reduction stayed more or less in step. These analyses suggest that it was not the reduction in tooth wear itself that increased crowding and malocclusion, but rather the tremendous reduction in the forces of mastication that had once produced this extreme tooth wear, and the consequent reduction in jaw development. Thus, as modern food preparation techniques spread throughout the world during the 19th century, so did dental crowding. This research supports the development of orthodontic therapies that increase jaw dimensions rather than the use of tooth removal to relieve crowding.
Tremendous advancements have been made in orthodontic diagnostics and treatment in the last 150 years. However, significant limitations still remain in predictably treating some malocclusions to optimal function, health, esthetics, and long-term stability. The need for overcoming these limitations is vast, with nearly two-thirds of the US population having some degree of malocclusion1 (Figure 1). In contrast, most of modern society's ancestors naturally had ideal alignment without malocclusion, and their third molars were fully erupted and functioning. A common denominator in today's most difficult orthodontic problems appears to be a discrepancy between the volume of alveolar bone and the tooth mass (Figure 2a, Figure 2b, Figure 2c and Figure 2d). In adults, these problems traditionally require longer treatment times in which the orthodontist may have to compromise relationships, esthetics, and stability, either by extracting teeth or by positioning the teeth outside the confines of their supporting structures (Figure 2d). To develop better treatment options, it must first be determined whether these discrepancies represent a tooth-mass excess or an alveolar bone deficiency. Some of the solutions to orthodontic limitations may be found through a better understanding of the causes of the increase in dental crowding and malocclusions in modern society.
An Archaeological Dig
The search for the origins of today's high malocclusion rates prompted exploration of Egypt and the Nile Valley, where thousands of skeletons spanning more than 10,000 years of human history have been excavated and analyzed. Although dental data is available from a number of Egyptian sites, this paper's specific examples are drawn from the Amarna Project excavations in the Egyptian desert, along the Nile River halfway between Cairo in the north and Luxor in the south (Figure 3). Amarna was the capital of Pharaoh Akhenaton, who reigned from 1353 BC to 1333 BC and built his city on empty desert for the monotheistic worship of the sun god, the Aten.
Three years of excavation in the recently discovered commoners' cemetery yielded 94 individual remains (Figure 4, Figure 5, Figure 6 and Figure 7). Except for the occasional slight incisor crowding and rotation, observation of the teeth indicated that, in general, they were well aligned with very good to excellent occlusion (Figure 8 and Figure 9). Thorough analysis of dental data from the Amarna Project has shown that Egyptian and most ancient teeth have extensive tooth wear, with dentin exposure on the occlusal surfaces of even the youngest individuals. Malocclusion is rare in Amarna but very common in America; tooth wear is extensive in Amarna yet rare in America. For almost a century, these contrasting observations have stimulated the search for causes of malocclusion among ancient skeletons.
The Begg Philosophy
Percy Raymond Begg, an innovative Australian orthodontist who trained at the Angle College of Orthodontia in California from 1924 to 1925, wondered why his orthodontic treatments lacked stability even though he followed the methods and philosophy of his mentor, Edward Angle. Angle's idea that malocclusion was a disease of modern society led Begg in the 1920s to study the teeth and jaws of modern and prehistoric Native Australians.2 Ultimately, Begg found that only 13% of approximately 800 Native Australian skulls had Class II malocclusion, while 3% exhibited Class III.2 He concluded that extensive tooth wear, with complete loss of cusps and exposure of dentin, is the natural condition for humans; that this wear transforms the incisor overbite into an edge-to-edge articulation; and that interstitial wear reduces the mesiodistal diameters of all teeth so that mesial drift can shorten the tooth arch sufficiently to allow all the teeth to fit within the jaw.2 Within three years of returning to Australia, having only begun his research on ancient teeth, he began extracting teeth from his patients' jaws to provide the necessary space for his orthodontic manipulations. In the next decade, Begg completed his research on ancient teeth, promoted his theories on the development of malocclusion, and created a number of innovative treatment materials and techniques.2,3
The Amarna teeth illustrate the rationality of Begg's theory. The mandible illustrated in Figure 8 shows good alignment of the teeth, no evidence of crowding, and extensive wear exposing the dentin. The speed of wear is documented by the fact that all occlusal enamel had been removed from the first molars, while only the cusp tips of the third molars were worn. Although rapid, this wear was slow enough that the odontoblasts could keep pace, filling in the pulp chamber with reparative dentin. Thus, virtually no pathologic consequences of this heavy wear exist at Amarna or elsewhere in the ancient findings. The dentin exposure on all the incisors is the result of an edge-to-edge bite that develops as the incisors erupt and wear both occlusally and interstitially. This high rate of wear is also shown in the maxillary teeth of the 20- to 25-year-old male in Figure 9. Again, good alignment and no crowding are evident. The right central incisor (Figure 10) is loose in the socket from postmortem breakage of the alveolar bone during ancient grave robbing. The incisors are relatively vertical and articulate in an edge-to-edge bite. The first molars were worn flat, while the cusps of the third molars were barely rounded. Figure 10 is a photo of a skull that shows the tooth surfaces worn flat and very good spacing within a robust face.
Critical to Begg's interpretation was the extensive interstitial wear that reduced the mesiodistal diameters of all the teeth and hence the jaw space needed to hold them. This loss of interstitial enamel can be seen clearly among teeth Nos. 2 to 5 in Figure 9. Observations such as these prompted Begg to conclude that, without extensive attrition, individuals with a "preponderance of tooth substance over bone substance" would develop malocclusion, while people with high attrition would not.2 He further justified his unorthodox technique by stating that the removal of teeth to increase space is "not empirical expediency, but a rational procedure with a sound etiological basis."2
As logical as Begg's notions appear when applied to the Amarna teeth, anthropologists know that even wild monkeys and apes have as much as 30% malocclusion when slight variations of incisor and premolar rotation are included.4 In primates and ancient people, a small but significant proportion of malocclusions is caused by inherited anomalies, developmental disturbances, and other known causes. Thus, it is logical that orthodontic textbooks attribute malocclusion to specific causes, such as teratogens, growth disturbances, developmental anomalies, genetic influences (eg, inherited disproportions between the jaws), genetic admixture of people from many parts of the world, and behaviors (eg, thumb sucking and tongue thrusting).1 However, most modern malocclusions are caused by a disparity between jaw size and total tooth-arch length. Such malocclusions are rare in Amarna and among ancient people worldwide. To see the flaw in Begg's argument, clinicians need to realize that while the degree of occlusal attrition is directly related to the coarseness of the diet (eg, the amount of grit and fiber), the interstitial wear needed to shorten the tooth row depends on the chewing forces exerted during mastication, because this wear results from enamel rubbing on enamel as the teeth move up and down in their sockets.
Again, the Nile Valley might provide answers about the causes of dental arch-to-jaw disparity. David Greene studied the teeth of skeletons excavated in the Sudan, just south of Egypt along the Nile, and documented a long-term trend of dental-size reduction over a 10,000-year period.5 He suggested this reduction in tooth size resulted from changes in diet and methods of food processing as agriculture was adopted and refined. Analysis of more samples by numerous researchers has established this general trend of diet-associated tooth-size reduction: as the diet became more refined, the consequent increase in dental decay selected for smaller and less complex teeth.6 Because the teeth have become smaller without producing excess room in the jaws, other evolutionary mechanisms must have been at work on the alveolar bone and supporting structures of the maxilla and mandible.
While it was once common to use cranial measurements to document migrations, ancient Egyptian skulls were also employed to demonstrate that the development of Egyptian civilization was produced by the arrival of a "dynastic race" that had a different skull shape.7 To contradict this racial approach, Carlson and Van Gerven proposed the masticatory function hypothesis, which maintains that changes in the face and skull between the Mesolithic and Christian periods (a 10,000-year span) in the southern Nile Valley were caused by dietary changes initiated by the adoption of agriculture and changing food processing technology8 (Figure 11). The maxilla and mandible have moved posteriorly, rotating underneath the forehead, while also becoming less robust. Furthermore, the tooth rows have moved distally in relation to the skull, such that the body of the mandible now protrudes forward underneath the alveolar bone, producing a chin. This description and the associated skull drawings have been republished so frequently that they are now iconic; they appear in the most widely used osteology texts and, through these, have entered the orthodontic literature.1,9 Carlson and Van Gerven argued that most of the facial changes were not the result of genetic changes but were caused by reduced chewing stress during development.6 Furthermore, in contrast to Begg, they contended that the switch to modern diets had so reduced chewing stress that the jaws did not develop to a sufficient size to hold all the teeth, and thus malocclusion became common. However, many clinicians and anatomists today still maintain that facial robustness is genetically controlled.1
Into this fray stepped Robert Corruccini, whose seminal 1991 book chapter for dental anthropologists and subsequent 1999 book for orthodontists marshaled 20 years of research on cross-cultural differences in occlusal anomalies to support the masticatory functional explanation of malocclusion.10,11 Corruccini and his colleagues favored the explanation that reduced chewing stress in childhood produced jaws that were too small for the teeth, despite the ubiquitous trend of dental size reduction.10 Because genetic explanations for malocclusion were common, Corruccini reviewed previously published studies from eight geographic regions that demonstrated a significant increase in malocclusion when a switch occurred from the coarser traditional diet consumed by an older generation to the more refined commercial diet of a younger generation. He documented clear genetic continuity between the two age groups in populations such as Americans in rural Kentucky, Punjabi and Bengali Indians, Solomon Islanders, Pima Native Americans, rural and urban African Americans, and Native Australians. Corruccini also documented a clear association of alveolar bone growth with the functional stimulation of chewing forces;10 this evidence includes measurements of bite-force variation between generations of Eskimos and experimental studies showing changes in the mandibular growth of rats and primates between groups consuming hard and soft diets.10 For example, Lieberman et al raised hyraxes on either cooked or raw foods and showed an approximately 10% difference in facial growth.12 Such findings not only support the idea that diet-associated reduction in chewing stress results in decreased growth of the mandibular and maxillary arches, but also show that animal studies in general find both facial reduction and increased malocclusion in the low-force groups.
Not only is basic research on all components of the malocclusion story continuing into the 21st century, but anthropologists and orthodontists have recently reprised the entire issue of Begg's contributions to understanding the causes of malocclusion. Writing in The American Journal of Physical Anthropology (the major journal for biologic anthropology reviews), Kaifu et al noted that the virtual absence of dental wear in modern populations fails to explain the increase in malocclusion in the way Begg contended; underdevelopment of the maxillary and mandibular alveolar bone, however, is clearly implicated.13 They essentially support some of Begg's concepts but criticize many of his other ideas, while acknowledging his pioneering work. The researchers conclude that human teeth are designed to accommodate very heavy wear without impairing oral health; however, given adequate growth of the jaws, normal occlusion can be achieved without heavy wear. The critical conclusion provided for the clinician is that "attritional occlusion should not be regarded as a treatment model for contemporary dentistry."13 In other words, therapies designed to reduce tooth substance, mimicking what occurs naturally in ancient and traditional populations, clearly are misdirected. Conversely, following the lead of the functional approach, clinicians should move forward with therapies that expand the jaws to the appropriate size to fit the teeth.
Although true tooth-mass excess problems exist that are optimally treated with tooth-mass reduction therapy (extractions or reshaping), it now appears that most dental crowding and malocclusion problems actually are alveolar bone deficiencies. The entire interdisciplinary team should understand this and be able to properly diagnose the underlying problem in order to treat predictably to optimal long-term function, health, and esthetics. The dental profession also needs to improve current methods and develop new techniques for expanding dental arches and increasing alveolar bone volume. The effects of dietary consistency on the dental arch must be expressed early in life because dental-arch dimensions are established at a young age.1 The last time alveolar bone volume increases naturally is during the eruption of the teeth. That is why the best approach for increasing the volume of alveolar bone supporting the teeth and expanding the dental arches is with orthodontics and dentofacial orthopedics during growth and development1 (Figure 12a, Figure 12b, Figure 12c and Figure 12d). However, a congenitally missing tooth, or one that is extracted at an early age, can significantly complicate problems by creating a permanent defect in the already deficient alveolar bone14 (Figure 13). These complications can be minimized by moving another tooth into the area relatively rapidly.
Options for correcting alveolar bone deficiencies in adults are much more limited. After tooth eruption, the cortical plates establish the boundaries for orthodontic development of the dental arches.15 In fact, some refer to the cortical plates buccal and lingual to the apices of the teeth as "orthodontic walls."16 Encroaching on these walls during traditional orthodontic tooth movement can lead not only to unstable results, but also to iatrogenic tissue loss of the involved tooth, bone, and periodontium1,17-19 (Figure 2d). These problems are obviously more common when alveolar bone development is lacking, because there is less area in which to move teeth in the alveolar trough between the cortical plates (Figure 2b).
Orthodontic correction can be further complicated in severe alveolar deficiencies by cortical plates and a dentoalveolar complex that have developed in an improper relationship to their skeletal base20 (Figure 2a, Figure 13, Figure 14a and Figure 14b). Some excellent options are available for treating alveolar bone discrepancies. When teeth are moved in the absence of periodontal disease, they bring alveolar bone with them.1 Because of this, orthodontic extrusion, or orthodontic tooth movement through the alveolar trough between the cortical plates, can sometimes be used to create the alveolar bone needed to support an implant to replace a missing tooth.21,22 (Please turn to page 250 to read Management of Dentoalveolar Ridge Defects for Implant Site Development: An Interdisciplinary Approach.) In the past, though, there has not been a predictable method for overall development of alveolar bone and dental arches in adults. However, new and exciting procedures are becoming popular; they surgically facilitate orthodontic therapy to increase alveolar bone volume and allow correction of the relationship of the dentoalveolar complex to its skeletal base20 (Figure 13, Figure 14a, Figure 14b, Figure 14c and Figure 14d). (Please turn to page 264 to read Surgically Facilitated Orthodontic Therapy: A New Tool for Optimal Interdisciplinary Results.) These procedures use corticotomies, interdental osteotomies, and the principles of distraction osteogenesis to greatly accelerate tooth movement and directly address the issues caused by alveolar bone deficiency. Surgically facilitated orthodontic therapy can optimally resolve dental crowding and malocclusion problems that traditional orthodontics alone could not, and, as a result, produce a more robust and esthetic dentoalveolar complex (Figure 14c). These procedures can also decrease treatment time, minimize the indications for dental extractions, and increase the stability of the result.
Anthropologists believe increases in dental crowding and malocclusion occurred with the transition from a primitive to a modern diet and lifestyle, to the point that Corruccini labeled malocclusion a "disease of civilization."10 The underlying problem resulting from these dietary adaptations appears to be an alveolar bone deficiency. All dental professionals should consider alveolar bone discrepancies a leading cause of dental crowding and malocclusion. When indicated, treatment should focus on the development of alveolar bone and dental arches, not on the reduction of tooth structure.
1. Proffit WR, Fields HW, Sarver DM. Contemporary Orthodontics. 4th ed. St. Louis, MO: Mosby; 2006. 2. Begg PR. Stone Age man's dentition: with reference to anatomically correct occlusion, the etiology of malocclusion, and a technique for its treatment. Am J Orthod 1954;40(5):373-383. 3. Begg PR, Kesling PC. Orthodontic Theory and Technique. 3rd ed. Philadelphia, PA: W.B. Saunders Company; 1977. 4. Mills JRE. Occlusion and malocclusion of the teeth of primates. In: Brothwell DR, ed. Dental Anthropology. New York, NY: Macmillan; 1963:29-51. 5. Greene DL. Environmental influences on Pleistocene hominid dental evolution. Bioscience. 20(5):276-279. 6. Calcagno JM. Mechanisms of Human Dental Reduction: A Case Study from Post-Pleistocene Nubia. Lawrence, KS: University of Kansas Publications in Anthropology; 1989. 7. Petrie WMF. The dynastic invasion of Egypt. Syro-Egypt. Notes on Discovery. 2:6-9. 8. Carlson DS, Van Gerven DP. Masticatory function and post-Pleistocene evolution in Nubia. Am J Phys Anthropol.
1977;46(3):495-506. 9. Larsen CS. Bioarchaeology: Interpreting Behavior from the Human Skeleton. Cambridge, UK: Cambridge University Press; 1997. 10. Corruccini RS. Anthropological aspects of orofacial and occlusal variations and anomalies. In: Kelley MA, Larsen CS, eds. Advances in Dental Anthropology. New York, NY: Wiley-Liss Inc.; 1991:295-323. 11. Corruccini RS. How Anthropology Informs the Orthodontic Diagnosis of Malocclusion's Causes. Lewiston, NY: The Edwin Mellen Press; 1999. 12. Lieberman DE, Krovitz GE, Yates FW, et al. Effects of food processing on masticatory strain and craniofacial growth in a retrognathic face. J Hum Evol. 2004;46(6):655-677. 13. Kaifu Y, Kasai K, Townsend GC, et al. Tooth wear and the "design" of the human dentition: a perspective from evolutionary medicine. Am J Phys Anthropol. 2003;suppl 37:47-61. 14. Kennedy DB, Joondeph DR, Osterberg SK, et al. The effect of extraction and orthodontic treatment on dentoalveolar support. Am J Orthod. 1983;84(3):183-190. 15. Edwards JG. A study of the anterior portion of the palate as it relates to orthodontic therapy. Am J Orthod. 1976;69:249-273. 16. Handelman CS. The anterior alveolus: its importance in limiting orthodontic treatment and its influence on the occurrence of iatrogenic sequelae. Angle Orthod. 1996;66(2):95-109. 17. Sharpe W, Reed B, Subtelny JD, et al. Orthodontic relapse, apical root resorption, and crestal alveolar bone levels. Am J Orthod Dentofacial Orthop. 1987;91(3):252-258. 18. Melsen B. Limitations in adult orthodontics. In: Melsen B, ed. Current Controversies in Orthodontics. 1st ed. Hanover Park, IL: Quintessence Publishing Co, Inc.; 1991:147-180. 19. Kaley J, Phillips C. Factors related to root resorption in edgewise practice. Angle Orthod. 1991;61(2):125-132. 20. Bolding SL, Roblee RD. Optimizing orthodontic therapy with dentoalveolar distraction osteogenesis. In: Bell WH, Guerrero CA, eds. Distraction Osteogenesis of the Facial Skeleton. Hamilton, Ontario: BC Decker Inc.; 2007:167-186. 21. Mantzikos T, Shamus I. Forced eruption and implant site development: soft tissue response. Am J Orthod Dentofacial Orthop. 1997;112(6):596-606. 22. Kinzer GA, Kokich VO Jr. Managing congenitally missing lateral incisors. Part III: single-tooth implants. J Esthet Restor Dent. 2005;17(4):202-210.
About the Authors
Jerome C. Rose, PhD
Professor of Anthropology, University of Arkansas
Richard D. Roblee, DDS, MS
Adjunct Associate Professor, Department of Orthodontics and Department of Restorative Sciences, Baylor College of Dentistry
It is widely acknowledged that the military alliance between the United States and France, established in 1778, was responsible not only for a number of American victories over the British, but also for the end of the Revolutionary War. While much has been written about this topic, as well as about the events of 1777 and 1778 that led directly to the alliance, far less is known about the developments of 1775 and 1776 that created the initial need for the alliance and ultimately culminated in its signing. Between June 5, 1775, when George Washington became commanding general of the Continental Army, and the end of that year, the British and Americans had engaged in seventeen important battles, skirmishes, and naval confrontations, of which twelve were won by the Americans. Although from a military perspective the Continental Army was a reasonably effective fighting force, the colonies not only hoped to free themselves from England's dominance, either by winning the war or through negotiations, but also hoped to become an effective trading partner with many European nations. With these dual objectives in mind, in the latter part of 1775 they began to court France, an acknowledged world power, for additional military support as well as for the political acceptance they needed to gain the trust required by these other nations.
Without some acknowledgement of their legitimacy, the colonies were merely rebels, traitors, and pirates; recognition [by France] would transform them from criminals to statesmen, diplomats, and privateers. Other European powers would quickly follow French recognition. It would afford the Americans opportunities for trade relations, loans, and alliances through Europe that were essential to securing and maintaining independence.
In addition to these dual objectives, though, the immediate reason for pursuing France in 1775 was the introduction on November 20 of the Prohibitory Act by Lord North, an act which is said to have been an instrument tantamount to a declaration of war between Britain and its American colonies. While John Adams felt that the Act "throws the thirteen Colonies out of the Royal Protection, levels all Distinctions and makes us independent in spite of all our supplications and Entreaties," he also felt obliged to conclude his remarks by stating that "it is very odd that Americans should hesitate at accepting such a gift." In short, although Adams firmly believed that now was the time for the colonies to declare independence, the problem he clearly recognized was that the mood in Congress in 1775 was simply not compatible with the need to "accept such a gift." As these events developed, the expansion of hostilities gradually prompted the colonies to seek French assistance, beginning with an unsuccessful attempt in 1775 to circumvent the British naval blockade.
The British Naval Blockade
"All American Vessels found on the Coast of Great Britain or Ireland are to be seized & confiscated on the first Day of January —all American Vessels sailing into or out of the ports of America after the first of March are to be seized & confiscated- all foreign Vessels trading to America after the first of June to be seized . . .
All Captures made by British Ships of War or by the Officers of the Kings Troops in America [will be] adjudged by this Act to be lawful Prizes and as such Courts of Admiralty to proceed in their Condemnation."
This British naval order stemmed from provisions near the end of the Prohibitory Act and was found in "Some Newspapers and private Letters . . . stowed away by a Passenger in the Bottom of a Barrel of Bread . . . which escaped Search." The information eventually was received by the Maryland Council of Safety in a letter dated February 27, 1776. Though the Act is not mentioned by name in any of the Congressional minutes recorded after Lord North introduced it, Congress must already have been aware of these provisions, since as early as January 6, 1776, it had also approved a resolution to compensate American seamen who took part in the capture of any British ships "as lawful prizes" of war.
That the Commander in chief [of any American naval vessels] have one twentieth part of the said allotted prize-money . . . [and that the] captain of any single ship have two twentieth parts for his share . . . that surgeons, chaplains, pursers, boatswains, gunners, carpenters, masters' mates, and the secretary of the fleet, share together two twentieth parts and one half of one twentieth part divided amongst them equally . . . (and the rest of the ship's company) at the time of the capture receive eight twentieths, and one half of a twentieth, be divided among them equally.
Indeed, owing to this British "declaration of war," the North Atlantic in 1776 had truly become a virtual highway for British military vessels. Between December 31, 1775, and December 31, 1776, 895 ships had sailed from England transporting British troops along with their provisions to the North American colonies. In fact, to cope with this problem, as early as January 1776 Congress had purchased eight ships and ordered thirteen others that "could carry as many as 120 guns and crews up to 1,000." In the case of American ships destined to leave American ports, Congress had also issued messages for the ship owners to warn their captains "to take every possible precaution to avoid all British men of war and cutters on the voyage." In view of what was obviously becoming a steadily worsening military and maritime situation, it is not surprising that as early as September 18, 1775, Congress had formed a committee known initially as the Secret Committee, the sole purpose of which was to establish overseas contracts for "the importation and delivery of quantities of gunpowder . . . brass field pieces, six pounders . . . twenty thousand good plain double bridled musquet locks . . . and ten thousand stand of good arms."
The Evolution of the Secret Committee
Although the Secret Committee's original mandate was solely to procure military supplies, shortly after it was established, and as a result of the blockade, its mandate was broadened to cope with what had become an extremely serious financial problem for many local merchants who were engaged in domestic as well as foreign trade. To help overcome this problem, the committee's name was changed to the Secret Committee on Trade, because it was also asked to consider how best to establish trade connections on both sides of the Atlantic.
As an example of domestic trade, on October 2 the Secret Committee introduced the following recommendation:
To encourage the internal Commerce of these Colonies, your Committee thinks Provision should be had to facilitate Land Carriage, and therefore are of the opinion that it should recommend by this Congress to the several provincial Conventions and Assembles, to put their Roads in good Repair, and particularly the great Roads that lead from Colony to Colony.
Next, the Secret Committee was asked to devise a plan "for carrying on a trade with the Indians, and the ways and means for procuring goods proper for that trade." Such action was considered essential to prevent the Indians from joining forces with the British, as well as to honor the Indians' longstanding wish to remain neutral throughout the war. Owing to this further increase in responsibility, the committee's name then became the Secret Committee on Trade and Commerce. In essence, and with this final role in mind, the overall mandate of the Secret Committee needed to satisfy three major goals: (1) obtain foreign military assistance, (2) establish foreign and domestic commercial trade connections, and (3) enhance Indian trade relations. To achieve these goals, all of which stemmed in one way or another from the Prohibitory Act, a nine-member panel was selected with Thomas Willing as chair. Since the focus of two of the three committee goals was on trade and commerce, it is not surprising that six of the committee members (John Alsop, Philip Livingston, Silas Deane, Samuel Ward, and John Langdon, along with the committee chair) were highly successful merchants, many of whom had developed considerable experience forming important overseas trading connections. When Willing resigned shortly after the panel was formed, he was replaced by Robert Morris, who was Willing's partner in one of the largest and most successful overseas shipping companies in the colonies.
The first overture of the Secret Committee took place on December 12, 1775. During a meeting held in America with the French foreign minister, the comte de Vergennes, the committee was told that "France is well disposed to you; if she should give you aid, as she may, it will be on just and equitable terms. Make your proposals and I will present them." The committee was also told not to move forward until Vergennes let them know when and how it would be best to proceed. With these thoughts in mind, the committee then began to develop plans to initiate talks not only with France but also with other European governments that might be interested in establishing military and trade relations with the united colonies. Because of its highly sensitive mission, Congress had resolved that the business of the committee needed "to be conducted with as much secrecy as the nature of the service will possibly admit," which meant that many of its records were destroyed. For this reason, much of the following account is distilled from the personal letters of the two committee members who played a central role in the unfolding events: Morris and Deane. While Morris, as committee chair, remained in America and served as Deane's major contact, Deane was selected to implement the committee's overseas plans.
Among the reasons given for selecting Deane were that he was well known to all of the other members of the committee, that he had many foreign contacts as the result of his highly successful commercial business in Connecticut, and, perhaps of even greater importance, that he was the only committee member who was not an elected delegate to Congress.
On your arrival in France you will [appear] . . . in the Character of a Merchant, which we wish you continually to retain among the French in general, it being probable that the Court of France may not like it should it be known publicly, that any [congressional] Agent from the Colonies is in that Country [to conduct business].
The first set of instructions Deane received appeared in a letter from Morris, dated February 19, 1776.
We deliver you herewith one part of a Contract made with the Secret Committee of Congress for exporting Produce of these Colonies to Europe & Importing from France Certain Articles suitable for the Indians . . . We [also] deliver to you herewith Sundry letters of introduction to respectable Houses in France which we hope will place you in the respectable light you deserve to appear & put you on a footing to purchase the Goods wanted on the very best terms . . . We think it prudent thus to divide the remittances that none of the Houses may know the Extent of your Commission but each of them will have orders to Account with you for the Amount of what comes into their hands for this purpose . . . The Vessel [we hired to deliver the goods] is on Monthly pay. Therefore, the sooner you dispatch her back the better & you will give this captain . . . suitable directions for approaching this Coast on their return [to avoid the blockade].
The same letter also contained the following information, which indicates how purchasing arrangements were to be made.
That the sum of $200,000 in continental money now advanced and paid by the said Committee of Secrecy to the said John Alsop, Francis Lewis, Philip Livingston, Silas Deane and Robert Morris, shall be laid out by them in the produce of these Colonies and shipped on board proper vessels, to be by them chartered for that purpose, to some proper port or ports in Europe (Great Britain and British Isles excepted) and there disposed of on the best terms . . . (the proceeds from the sales of this produce should then be used to purchase) such goods, wares or merchandise as the Committee of Secrecy shall direct and shipped for the United Colonies to be landed in some convenient harbor or place within the same and notice thereof given as soon as conveniently may be to the said Committee of Secrecy.
Deane then received a second set of instructions from Morris that he was to implement when he arrived in Paris. To maintain the secrecy of his visit, he was told to inform those whom he would initially meet that he was only in Paris as a tourist ("it is scarce necessary to pretend any other business at Paris, than the gratifying of that Curiosity which draws Numbers thither yearly, merely to see so famous a City") and that only when the time seemed most appropriate was he to request a meeting with the French foreign minister.
Initiating the Alliance
Deane was also told that upon meeting Vergennes, his message should be flattering and convincing, and should contain no information that would allow anyone to know that he and Vergennes had previously met in America. The words in Morris' letter were carefully crafted and designed to convey these exact points.
you had been dispatched by the Authority [of Congress] to apply to some European Power for a supply [of arms] . . . if we should [as there is great appearance we shall] come to a total Separation from Great Britain, France would be looked upon as the Power, whose Friendship it would be fittest for us to obtain & cultivate . . . it is likely that a great part of our Commerce will naturally fall to the Share of France, especially if she favors us in this Application as that will be a means of gaining & securing the friendship of the Colonies—And, that as our Trade rapidly increasing with our Increase of People & in a greater proportion, her part of it will be extremely valuable . . . That the supply we at present want is Clothing & Arms for 25,000 Men, with a suitable Quantity of Ammunition & 100 field pieces . . . That we mean to pay for the same by Remittances to France, Spain, Portugal & the French Islands, as soon as our Navigation can be protected by ourselves or Friends.
The last set of instructions to Deane prior to his departure dealt with the arrangements that had been made for his passage from the colonies to France. Although scheduled to leave Philadelphia on March 8, Deane, owing to many unforeseen delays, did not set sail until May 3, arriving at Bordeaux on June 6. Once in France, Deane received a further set of instructions from Morris dated July 8, 1776. It was only at this point that Deane was able to make clear to Vergennes that, to satisfy a major condition stipulated by France for receiving French military aid, the united colonies had finally broken away from Britain through the ratification of the Declaration of Independence and therefore were now able to negotiate on their own terms with all foreign nations.
With this [letter] you will receive the Declaration of Congress for a final separation from Great Britain . . . You will immediately communicate the piece to the Court of France, and send copies of it to the other Courts of Europe. It may be well also to procure a good translation of it into French, and get it published in the gazettes. It is probable that, in a few days, instruction will be formed in Congress directing you to sound the Court of France on the subject of mutual commerce between her and these States. It is expected you will send the vessel back as soon as possible with the fullest intelligence of the state of affairs, and of everything that may affect the interest of the United States. And we desire that she may be armed and prepared for defense in the return.
On October 1 Morris wrote again, but this time he informed Deane that the committee had received nothing further from him since his departure at the beginning of May. Throughout the letter Morris expressed his considerable anguish over this lack of communication, coupled with his concern over the lengthy passage of time.
It would be very agreeable and useful to hear from you just now in order to form more certain the designs of the French Court respecting us and our Contest especially as we learn by various ways they [the British] are fitting out a considerable Squadron . . . they may now strike at New York. Twenty Sail of the line would take the whole Fleet there consisting of between 4 & 500 Sail of Men of War, Transports, Stores, Ships, and prizes . . . alas we fear the Court of France will let slip the glorious opportunity and go to war by halves as we have done.
We say go to war because we are of the opinion they must take part in the war sooner or later and the longer they are about it, the worse terms will they come in upon . . . The Fleet under Ld. Howe you know is vastly Superior to anything we have in the Navy way; consequently wherever Ships can move they must command; therefore it was long foreseen that we could not hold either Long Island or New York.
Adding to his concerns, in an earlier letter Morris had also described to Deane the devastating impact that the blockade itself was having on all colonial commercial shipping.
I have bought a considerable quantity of Tobacco but cannot get suitable Vessels to carry it. You cannot conceive of the many disappointments we have met in this respect . . . So many of the American Ships have been taken, lost, sold, [or] employed abroad [as the result of the blockade] that they are now very scarce in every part of the Continent which I consider a great misfortune, for ship building does not go on as formerly.
In addition to the blockade, and in contrast to the previous year, the British were victorious in all but two of the twelve battles and skirmishes waged between the British and American forces between August 27 and mid-December 1776, and in a number of these the American losses, in contrast to the British, were often substantial. For example, on August 27 the British defeated Washington at the Battle of Brooklyn: whereas the British suffered 337 wounded or missing and 63 killed, the Americans suffered 1,079 wounded or missing and 970 killed. Then, on December 1, under Washington's command, the Americans arrived at the Delaware River and crossed into Bucks County, Pennsylvania; shortly thereafter the Americans anticipated that Philadelphia would soon be attacked. In view of these events it is fitting that this period has been referred to as "one of the lowest points of the war for the patriots."
On October 23, to prevent an anticipated invasion of New York, Morris further requested Deane "to procure Eight Line of Battle Ships either by Hire or purchase. We hope you will meet immediate success in this application and that you may be able to influence the Courts of France & Spain to send a large Fleet at their own Expense to Act in Concert with these Ships." Although at first glance this last request by Morris may seem surprising, because it called upon France as well as Spain to engage in an act of war against Britain, the request was clearly in line with Article 4 of a September 24, 1776, congressionally approved "Plan for a Treaty" to be negotiated by the Americans with France. It is also the case that, despite the very large number of articles in the plan, it was only this article, along with Article 3, that the treaty negotiators were informed "must be insisted upon" during the course of negotiations. In short, because the plan was approved by Congress in September 1776, and because France's initial offer of assistance to the colonies in their dispute with England took place in December 1775, Congress must have expected France to become active in the colonies' military engagements once the Declaration of Independence had been ratified. As the events outlined above steadily unfolded, it is not surprising that the members of Congress found themselves in an increasingly desperate situation. With no other help to call upon, it is also perhaps not surprising that on December 11 Congress approved the following Resolve.
That it be recommended to all the United States, as soon as possible, to appoint a day of solemn fasting and humiliation; to implore of Almighty God the forgiveness of the many sins prevailing among all [military] ranks, and to beg the countenance and assistance of his Providence in the prosecution of the present just and necessary war. . . . It is left to each state to issue out proclamations fixing the days that appear most proper within their several bounds.

To ensure that this message was clearly understood by all concerned, the Resolve also called upon the members of the military itself, including the military hierarchy, to act in accordance with the Almighty’s wishes.

all members of the United States and particularly the officers civil and military under them, [to practice] the exercise of repentance and reformation; and further, require of them the strict observation of the articles of war, and particularly, that part of the said articles, which forbids profane swearing, and all immorality.

Finally, on December 30, 1776, Congress issued its last attempt of the year to avoid total defeat by providing France with the following enticement to come to its aid: “should the Independence of America be supported [by France], Great Britain . . . would at once be deprived of one third of her power and Commerce; and that this in a great Measure would be added to the Kingdom of France.” In the event this enticement failed to achieve its objective, Congress then also threatened France with the consequences that would result if it did not immediately enter the war on behalf of the American colonies.

in Case Great Britain should succeed against America, a military Government will be established here [in America] and the Americans already trained to arms, will, however unwilling, be forced into the Service of his Britannic Majesty, whereby his [Majesty’s] power will be greatly augmented and may hereafter be employed [to take over] the French and Spanish islands in the West Indies.

Unfortunately, given the prevailing international climate in 1776 as dictated by Britain and Spain, France elected to offer only secret financial and limited material aid in support of the colonies, and not the type of aid being requested by Congress. Therefore, France refused to go beyond what it felt, at that time, was most appropriate in satisfying its own best interests and chose to remain officially out of the war.

Culminating the Alliance

The situation described above suddenly changed in the fall of 1777. On October 31, Congress sent a letter with the following information to its delegates in Paris.

We have the pleasure to enclose the capitulation, by which General Burgoyne and his Whole army surrendered themselves [at Saratoga as] prisoners of war . . . We rely on your wisdom and care to make the best and most immediate use of this intelligence to depress our enemies and produce essential aid to our cause in Europe.

With this information in mind, the American delegates in France who “were attempting to play upon fears [told the French representatives] that an accommodation between Great Britain and the revolting colonies was [now] possible and even imminent.” The significance of these two factors and the anxiety they must have generated among the French was fully captured in the following words by Bemis.

The fear that the British Ministry, staggering under the blow of Saratoga, was about to offer to the Colonies peace terms generous but short of independence had an immediate effect in France.
Anxious lest such terms might be accepted by the war-weary Americans . . . the French Ministry felt that if something were not done quickly, the long-awaited chance, at last at hand, for sundering the British Empire might pass and be gone forever.

The Treaty of Amity and Commerce along with the Treaty of Alliance, both of which together are often referred to as the French Alliance, were finally signed on February 6, 1778. A question that still remained, though, was how the Kingdom of France would cover the costs associated with supplying all the military aid America needed to win the war. Anne-Robert Jacques Turgot, France’s Minister of Finance, repeatedly warned the King that “the first gunshot will drive the state to bankruptcy.” The answer can be found in the following material.

On July 16, 1782, Benjamin Franklin, Minister Plenipotentiary of the United States of North America, agreed and certified that the sums advanced by His Majesty to the Congress of the United States . . . under the title of a loan, in the years 1778, 1779, 1780, 1781 and the present 1782, [to repay] the sum of eighteen million livres, money of France . . . on the 1st of January, 1788, at the house of the Grand Banker at Paris . . . with interest at five per cent per annum.

To prevent a French financial catastrophe it appears that Congress had authorized Franklin to underwrite a series of French loans to cover the cost of the French military help it needed to achieve victory over Great Britain. While on the surface it would seem that France was taking a considerable risk in agreeing to this procedure, the reality of the situation suggests that it had no other choice. If the United States had lost the war, France’s fears of a British takeover of its territory could very well have been realized, whereas, if the United States won, the loans would have been repaid and France would have been able to maintain its position as a European power. Although the agreement was indeed a gamble, it was a gamble that France was simply forced to take.

Despite the fact that the Alliance had been signed on February 6, 1778, it is equally important to note that, due to the naval blockade, Congress had received no further word on this matter from its overseas delegates since May 1777. As a result, Congress was faced with an additional problem, as expressed on April 30, 1778, in a letter to its Paris representatives.

We have read a letter written by a friend dated Feb. 13, 1778, in which we are told that “you had concluded a Treaty with France and Spain which was on the Water towards us.” Imagine how solicitous we are to know the truth of this before we receive any proposals from Britain in consequence of the scheme in Ld. North’s speech and the two Draughts of Bills now sent to you.

The “proposals” in this letter referred to the terms for reconciliation that Lord North had authorized in March 1778 for the Carlisle Peace Commission to use as a means for negotiating an end to the war with America. The difficulty Congress now faced, however, stemmed not only from Lord North’s proposals, but also from two Congressional counterproposals, drafted by Samuel Huntington and Henry Drayton respectively. Henry Laurens, who at the time was president of the Congress, was extremely troubled over this issue and expressed his personal concern in a letter to his son.
Some of our people here have been exceedingly desirous of throwing abroad in addition to the Resolutions an intimation of the willingness of Americans to treat with G Britain upon terms not inconsistent with the Independence of these States or with Treaties with foreign powers. I am averse. We have made an excellent move on the Table—rest until we see or learn the motions on the other side—the whole World must know we are disposed to treat of Peace & to conclude one upon honorable terms. To Publish [anything on this matter at present is] therefor unnecessary [and] it would be dangerous to Act, encourage our Enemies & alarm our friends.

Stated more succinctly, Laurens’ concern stemmed from the possibility of reaching too hasty a conclusion without a full understanding of the overall ramifications of the different sets of proposals. To behave in this manner would simply not have been in the best interests of the United States. Although Congress did debate the matter at the end of April, because Silas Deane had arrived at York on May 2 the debate lasted only two days. With official versions of the Treaty of Alliance and the Treaty of Amity and Commerce now in hand, Deane was able to show that the French Alliance had indeed been signed in February, which meant that closure had been achieved and no further debate was required. For the members of Congress, their long sought-after goal of French military aid could now finally be considered secure.

Milton C. Van Vlack, Silas Deane, Revolutionary War Diplomat and Politician (Jefferson, NC: McFarland & Company, 2013), 77-78; Barbara A. Mann, George Washington’s War on Native America (Westport, CT: Praeger, 2005), 10.
Richard J. Werther, “Opposing the Franco-American Alliance: the Case of Anne-Robert Jacques Turgot,” allthingsliberty.com/2020/06/opposing-the-franco-american-alliance-the-case-of-anne-robert-jacques-turgot/.
“Contract between the King and the Thirteen United States of North America, signed at Versailles July 16, 1782,” avalon.law.yale.edu/18th century/fr-1782.asp.
Background: As the war situation worsened, the Nazi propaganda apparatus returned to earlier methods of propaganda that had been successful during the struggle for power. One of these was the evening discussion meeting (Sprechabend), where party members came together to discuss a variety of issues. This document provides guidelines for holding such meetings in Gau Sachsen. These were small meetings, from a few people to perhaps a dozen or so. They, in turn, were to use what they learned in conversations with friends and workmates. For further material on the theme, see the guidelines for discussion evenings issued by the Reichspropagandaleitung.

The source: Informationsdienst für die Zellensprechabende im November 1944, issued by the Gauleitung der NSDAP Sachsen together with the Gaupropagandaamt and the Gauschulungsamt.

May the German nation never forget that the hardness of a people is never tested when the leadership has visible successes, but rather during the hour of apparent failures. During the month of October the German people withstood the test.

While this Information Service was in press, there were storms on all fronts. The goal of German defense is, as before, to prevent deep incursions by our enemy into Reich territory. The loss of Finland led to changes in the Baltic front. The enemy suffered enormous losses as we withdrew. The battles in East Asia show that Japan still has its full fighting strength and that our enemies must also be prepared for heavy fighting in this theater of the war. German armaments production is in full motion. Once our measures for a total war effort in our armaments industry take effect, it will be possible to give a more precise summary of the military situation.

Since Churchill was unable to fulfill his promise to defeat Germany in October, many political and military questions apparently made a meeting with Stalin necessary. Churchill’s bitter hatred does not stop him from making agreements about Europe with Stalin, i.e., Bolshevism, that would mean Germany’s death. It is striking that America was not part of this conference. The NSDAP has recognized the danger of Bolshevism since its beginning. That danger is now evident in Finland, Romania, Bulgaria, Italy, France, and Belgium.

A group of communists once asked Lenin the question: “What is communist morality?” Lenin answered: “murder, destruction, shattering everything if it serves the revolution. On the other hand, stroke a person on his head and tell him he is Alexander the Great, if that best serves the revolution.” Churchill was treated in Moscow as if he were Alexander the Great. The Führer’s saying that he who gives himself to Bolshevism will be devoured by it will also prove true in England.

We know that earlier rulers of old Russia, insofar as they were Germany’s enemies, acted according to Peter the Great’s principle: “A forest that is not completely cut down grows back.” Germany must, therefore, be wiped out or else it will grow back again. Added to that is Bolshevist lack of conscience and the brutality of a Stalin. One summer evening he said to several of his friends: “To prepare a blow to a victim to the last detail, to enjoy pitiless revenge, then go to sleep. There is nothing more beautiful in this world.” We do not need to wonder that Bolshevism has shown not the least regard for Finland, Poland, Romania, and the other peoples. Germany will be stamped out and crushed.
Anglo-American hatred and arrogance have already decided to throw Germany along with the other European peoples to Bolshevism to be devoured. Let no one deceive himself. The American Soldateska is no better than the Bolshevist horde. During the last war British agitation funds succeeded in inculcating a burning hatred of Germany in the American public within a few weeks. Today all Jewish drive for power is concentrated in New York. Enemy agitation has made sure that anything German is persecuted with the worst dirty methods. The result is blind hatred on the part of Anglo-American soldiers. We want to protect Germany from these horrors by joining closely with the Führer and fighting on the knife’s edge for Germany’s freedom.

“If our will is so strong that it cannot be overcome by anything, then our will and our German steel will overcome everything and win.” Adolf Hitler on 1.9.1939

Against the enemy’s will to destroy us, we set our tough and bitter will. The enemy may storm against our borders with superior masses and matériel, but we fight for a better idea and will continue the battle until its victorious conclusion. We know what will happen to us if we become weak. We look to the peoples of states that were until recently allied with us, how they were betrayed by their leadership and given over to Bolshevism.

“War shows whether an individual and a whole people has a great thought, a central idea, that is strong enough to withstand enmity and hatred. The individual and the people will face a choice: do they want to take up the battle for freedom, life, and history, or do they refuse to act and perish.” Kurt Eggers

That great thought, that central idea, is lacking in these people and their leadership. Our people will not succumb to the storm of the enemy and their threats, for we know what we are fighting for. Adolf Hitler gave our people the National Socialist worldview, an idea that is worth fighting and dying for. We know that all life is struggle and we affirm the life law of battle.

“He who wants to live fights, and he who will not fight in this world of eternal struggle does not deserve to live.” Adolf Hitler (Mein Kampf)

We know that to reach the Reich we strive for, one where social justice prevails, we must fight a world full of enemies. Battle leads to a life in which our highest values are honor, loyalty, sacrifice, and freedom. We know that if the Americans plunged into our Reich, our lot would be no better than under the Bolshevists. These plutocratic warmongers know only the ideas of the Jews, of destroying all ideals and enslaving humanity. This battle is a battle of worldviews. He who wants to destroy and steal a German’s highest values hits a nerve, takes his very life away. In times of the greatest danger, the people’s soul breaks out with elemental force to defend against the death blow, to defend itself. This spiritual power is greater than any numerical and material superiority. Readiness increases to the highest degree. What does the life of an individual mean when the holy life of the people is threatened! He gives his life to win life. Superhuman things will be done and the word “impossible” is not heard. We are fighting this battle with everything we have. We must become hard, we must hate because we love our people. In a time when our enemies are using every method to destroy us, any sentimentality must be torn from our hearts. We will win if each German uses his whole will and entire strength.
Life is hard; it knows nothing of “humanity.” The law of life is that only the strong survive. We are the stronger if we are the harder. He who believes that he can hide in a comfortable corner commits a crime against our people. He who still believes that he can stand aside is a traitor. Each individual must be ready to sacrifice his life for our people. Each life finds its fulfillment. To die fighting for people and fatherland is the highest end. The steel-crowned crosses of our fallen heroes are memorials for coming generations. They, too, will be ready to sacrifice their lives. Only so will Germany be eternal.

Information for cell discussion evenings for November 1944 is provided by the Reichsorganisationsleiter der NSDAP — Hauptschulungsamt — in cooperation with the Reichspropagandaleitung der NSDAP — Chef des Propagandastabes. The following slogans to be handled in the discussion evenings have been published in a Sprechabend-Eildienst. The Sprechabend-Eildienst is already in the hands of the local groups. Guidelines for using the material are in the Information Service for Cell Discussion Evenings in October 1944.

The discussion evening is intended for conversation, as its name suggests. The discussion reveals the problems that should be dealt with according to fundamental National Socialist principles. He who understands these fundamental principles is in a position to conduct a discussion evening even without a propaganda speaker or educational speaker. Many local group leaders or cell leaders are under the false impression that an evening must always have a speaker. The opposite is the case: Real life prevails in a cell or local group only when as many party members as possible ask questions and participate in the debate. A discussion evening is something like a town meeting, in which everything is brought up that needs to be discussed. The town council joins in the discussion.

The leader of the discussion evening, of course, must have a political goal for each evening. It is good for the leader of a discussion evening to mention in advance the theme to be treated to several party members and ask these to bring along supporting material. Such supporting material can be letters from soldiers that describe conditions at the front. From these letters he can then shift to the military-political situation. Or perhaps a woman within the cell or local group has been convicted of having an abortion. The sentence can be announced, it can be talked about, and then a transition made to political questions about population policy.

Bolshevism: Recently a large number of former Marxist functionaries were arrested. This theme is particularly appropriate. There are many who have not entirely understood this. We all know that Marxism has a finely woven web that covers the whole world, Germany not excepted. It is always better to watch and investigate a former Marxist for a few weeks than to allow secret Putschists to run about. He who is innocent will be released.

Jewish Problem: Someone talks about a problem he once had with Jews, or a bible verse is read that shows the Jews want world domination, and that the former view protected Jewry since each Jew could be baptized.

Another example: A woman or girl has had sexual relations with a foreigner. That can be officially announced and talked about. Many will be able to speak about things that they have seen. From there one can talk in general about questions regarding foreigners. Basically, sexual relations with foreigners are prohibited.
However, in conclusion one can explain which peoples can be seen as racially related. Often it will be the case that a solution cannot be found during a discussion evening, since no one is there who can answer the question. In this case a party member will be delegated to pursue the question with an expert in the Kreisleitung so that there is a basis for discussion at the next discussion evening.

Questions on women in the labor force: Open discussion about individuals in question who refuse. One can hear the opinion of other party members. Often factual misunderstandings about individual people’s comrades can be clarified. In short, things that are not particularly confidential and secret all belong at a discussion evening. After discussing various matters, one must get to the real theme. You receive enough material that can be used.

In conclusion I stress again that the discussion evening leader must not do all the talking. One party comrade reads a letter, asks a question, or reports on a certain event or experience, after which a discussion must result that leads to some theme. And the theme does not have to be stubbornly adhered to, since it often turns out that another theme is more interesting than the one chosen. The discussion evening must be carried out even if the attendance is very small; when only a few are present, they are filled with a particularly close sense of belonging that they will see as dependability. Where a discussion evening has a good turnout, it is always good to begin with a lively song. That, too, brings a fresh spirit to the community. Where possible, accompany the song with a piano. People must have the feeling at such evenings that they belong to a team. At times with serious news, e.g. from the fronts, attendance must not lag because many begin to doubt, but rather this team must be so strong that especially at such times people come to the discussion evening to meet with others who share their sentiments, to discuss things with each other, and return home with a feeling of the strength of the community. In local groups that follow these principles, the party membership is firm and reliable, as countless examples prove. During the struggle for power it was always true that after hard days we drew even closer together and felt almost like brothers. The party has given proof that it can give strength to the German people. But this can flow only when the block, the cell, and the local group grow into a spiritually powerful community.

People’s comrades often have the false impression that the Americans are more moderate than the Bolshevists. That means that our propaganda about Anglo-American depravity must be strengthened. Recently an exchanged prisoner who spent a year in various hospitals in North America, in the East, South, and West, told me that American depravity cannot be exaggerated. The hate agitation, the agitation films, and the filthy literature are filled with such depravity that the German sense of shame would never be able to write or speak about them. That is how Anglo-American soldiers behave. Where Anglo-American occupation occurs, a wave of persecution of others follows. Farmers, workers, and civil servants are arrested. One gives Bolshevism a free hand. Murder and shootings are the order of the day. American soldiers behave in a swinish manner, no better than the Bolshevist Soldateska. Spiritual Bolshevism already prevails in American agitation in the press, film, and books.
That proves clearly that Bolshevism rules and that behind the plutocratic form of government of the Anglo-Americans, the Jew stands with his Old Testament hatred. The Reichspropagandaleiter has ordered that education about the Anglo-Americans is to be greatly intensified. The German press and radio will carry sufficient material for National Socialist propagandists. Above all it should be stressed that Italy has been plagued by starvation for over a year since its occupation. The bubonic plague is raging in Algeria. Queen Wilhelmina announced starvation in Holland in a speech. France is facing a terrible winter. There is no gas or electricity, or at most for a half hour or an hour a day in Paris. Parisian citizens received only three sacks of coal for the entire winter. As a result, Bolshevism is constantly growing stronger. If these developments continue, and that seems probable, Stalin will not need to send the Soviet army into Western Europe to bring about Bolshevism.

The only way to keep such things from happening in Germany is to apply our hardest will to resist, which must enable the homeland to devote all possible strength to armaments. The chaos resulting from Bolshevist or Anglo-American occupation, even if only in parts of Germany, would be vastly greater than in France, for Germany is more thickly populated. The Anglo-Americans have neither the ability nor the will to provide sufficiently for the population, nor to build as smoothly functioning an organization as the German Reich Farmers Estate. The result of an Anglo-American occupation would be the deportation of men to Siberia, separated from the women. The children, according to the above-mentioned exchange prisoner from the USA, would be taken from their mothers at the age of three and raised abroad. Jewish agitation and American arrogance have succeeded in inciting whole peoples such that we can expect mercy neither from the Bolshevists nor the Anglo-Americans. The false conclusion that it would be better under the Anglo-Americans than the Bolshevists would be even more dangerous if it became part of the political opinion of the German population. Such an opinion, therefore, is to be opposed by positive opinion formation in all local groups, membership meetings, discussion evenings, and other meetings.

Terms such as partisans or similar negative terms may never be used in our propaganda. Under the present conditions, the German people is gathering its whole defensive strength, which is an enormous and previously untouched reserve. The reserves that remain in the homeland will be gathered into the Volkssturm by order of the Führer. The task of the Volkssturm is to use its abilities unconditionally to make life for our enemies on German soil impossible. To prevent misunderstandings about members of the Volkssturm, terms such as partisans, guerillas, or terrorists are not to be used. The members of the Volkssturm are combatants under international law and are to be designated as such. The German Volkssturm shows the unshakable will of the German people not to surrender its freedom under any circumstances. It is the only foundation for our future. The strength that is mobilized through it is enormous and will present our enemy with insoluble problems. The Volkssturm is a multiplication of the previous Wehrmacht in the Reich. If the enemy attempts to drive into this or that place in the interior of the Reich, this force will grow with each kilometer.
The enemy will hardly be in the position, even with its industry and economy, to send 16- to 60-year-olds to attack Germany as rapidly and in as great numbers as we will be able to gather them with lightning speed in the Volkssturm in the event of acute danger. That is where the great military value and meaning of the National Socialist Volkssturm is to be seen. Now the task is to organize, train, and lead the companies and battalions. Our meetings must especially emphasize the growing strength of our Wehrmacht as a result of the Volkssturm in the event of approaching danger.

Some understood the Reichsführer SS’s speech to mean that we should attack the enemy with flails and scythes, as the Army of Liberation was told to do in 1813. These critics do not note that in the same breath the Reichsführer mentioned that the Volkssturm be trained and equipped with infantry and anti-tank weapons. Here, too, the goal is to make life as difficult as possible for the enemy if he succeeds in entering German territory here or there. He has to pause for breath, requisition food or bring it in, rest, repair machines and weapons. The goal is to make every movement difficult and hinder any recuperation. Each building must be defended with every means and using every art of war. That and that only is how the Reichsführer SS’s words are to be understood. It is also good for everyone to participate enthusiastically in training to become a good shot and a good fighter. Men with the right spirit, when they have a weapon in their hands, will be a fighting force the enemy cannot overcome.

The situation of the German people today is no worse than it was in 1939. In fact, one can say that it is significantly worse for the enemy, for he faces the fanatic resistance of a German people that is fighting for its life. Back then, some of our armies were still being formed and others were not fully equipped, but they faced a world of enemies. With the force of the German attack, the German Wehrmacht fought battles that won us five full years of time. This gain of time is so great that even today we stand at the North Cape, on the borders of East Prussia and the West, in the General Gouvernment, on the Hungarian plains, and in Italy. The enemy has suffered great and irreplaceable losses in matériel. The enemy has fought for five full years and suffered the greatest losses, but has been unable to defeat the Reich. We have succeeded in what seemed impossible in 1939, and with the second great use of the strength of the German people we will succeed in defending what we set out to do: Away with the Treaty of Versailles, which enslaved the German people and led to its collapse, unification of all tribes in a Greater German Reich, securing the food supply of the German people, and a final victory of German arms, against which the hate of the enemy will break.
by Doug Bandow

How nurses can help relieve spiraling health-care costs

Medicare isn't the only part of America's health-care system where costs are spiraling out of control. Doctors have created a cartel by confining the delivery of treatment solely to M.D.s and by regulating the number and activities of M.D.s. This suppresses the supply of health-care professionals, raising costs and reducing choice. State governments could significantly lower both public and private health-care costs by reducing physicians' stranglehold over medical care and moving towards a freer market. For its part, the federal government could, if it is willing to use its vast power under the commerce clause of the Constitution, preempt state rules that hamper the cost-effective delivery of medical services.

The Clinton administration recognized the problem of supply, but sought to remedy it by manipulating federal funding to force more doctors to become general practitioners. Similarly, the Council on Graduate Medical Education has urged educational changes to change the ratio of primary-care physicians to specialists from 30:70 to 50:50 by the year 2040. The critical question, however, is not what percentage of doctors should provide primary care, but who should be allowed to provide primary care.

Doctors are not the only professionals qualified to treat patients, yet most states needlessly restrict the activities of advanced-practice nurses (A.P.N.s) (who include nurse practitioners, nurse-midwives, clinical nurse specialists, and nurse anesthetists), registered nurses (R.N.s), licensed practical nurses (L.P.N.s), physician assistants (P.A.s), nurse's aides, and similar professionals. Even today, these providers dramatically outnumber doctors -- there are 2.2 million R.N.s, three times the number of M.D.s, and nearly 1 million L.P.N.s alone, while the number of A.P.N.s, at well over 100,000, is about half the number of physicians providing primary care. Ellen Sanders, a vice-president of the American Nurses Association, estimates that 300,000 R.N.s could become A.P.N.s with an additional year or two of training.

Although A.P.N.s, R.N.s, and L.P.N.s are capable of handling many simple and routine health care procedures, most states, at the behest of physicians, allow only M.D.s to perform "medical acts." According to Arthur Caplan, director of the Center for Biomedical Ethics at the University of Minnesota, "You have highly trained people doing things that could be done by others." Doctors perform what A.P.N.s could do, A.P.N.s do what registered nurses could handle, and registered nurses handle what nurse's aides could perform. "I can take care of a patient who has broken an arm," complains Maddy Wiley, a nurse practitioner in Washington state, "treat them from top to bottom, but I can't give them an adequate painkiller." Instead, patients can receive such treatment only through the government-created doctors' oligopoly, into which entry is tightly restricted. Observes Michael Tanner of the Cato Institute: "In most states, nurse practitioners cannot treat a patient without direct physician supervision. Chiropractors cannot order blood tests or CAT scans. Nurses, psychologists, pharmacists, and other practitioners cannot prescribe even the most basic medications."

The problem is exacerbated by the nature of the medical marketplace, where the expansion of services is expensive. Much of the necessary capital already exists -- there are, for instance, a lot of unfilled hospital beds.
The practice of medicine, however, has become increasingly labor intensive. The National Center for Policy Analysis figures that, because of the high cost of training medical personnel, "moving capital and labor from other sectors requires a price increase for medical services that is six times higher than that needed to expand other goods and services." As a result, the NCPA estimates, 57 cents of every additional dollar in U.S. medical expenditures is eaten away by higher prices rather than added services. Physicians have shown unyielding resistance to alternative professionals. Medical societies have tried to prevent chiropractors, for instance, from gaining privileges at local hospitals. M.D.s have similarly opposed osteopaths and podiatrists. Working through state legislatures, physicians have won statutory protection from competition. Many states ban midwives from handling deliveries. Optometrists are usually barred from such simple acts as prescribing eye drops. Half of the states permit only physicians to perform acupuncture. Overregulation of pharmaceuticals, which prevents patients from self-medicating, also acts as a limit on health-care competition. Allowing over-the-counter sales of penicillin, for instance, could save patients about $1 billion annually. A recent episode in Georgia illustrates the arbitrariness of most occupational licensure regulations. According to Tanner, state legislation was introduced at the behest of dentists to prevent dental hygienists from cleaning teeth. Then an amendment was added for the ophthalmologists to bar optometrists from performing laser eye surgery. In the end, the bill prohibited anyone but physicians, veterinarians, podiatrists, and dentists from performing any procedure that pierced the skin, effectively outlawing nurses from drawing blood or giving injections. This unintended outcome would have brought most hospitals to a halt, and a court had to block its enforcement. Examples abound of legal restrictions promoted by self-serving professionals and harmful to consumers. In general, professional licensure has reduced the number of potential caregivers, cut the time spent with patients, and raised prices. The second manifestation of physicians' monopoly power is the anticompetitive restrictions that the profession places upon itself. The doctors' lobby has helped drive proprietary medical schools out of business, reduced the inflow of new M.D.s, and for years prevented advertising and discouraged members of local medical associations from joining prepaid plans. Until the early 1980s, the American Medical Association attempted to restrict walk-in clinics that advertised themselves as providing "emergency" or "urgent" care. Explained John Coury, who was then chairman of the AMA, "Some of these facilities were set up by nonmedical people as money-making propositions" -- as if doctors don't seek to make money. Moreover, federal immigration law and state requirements limit the entry of foreign doctors into the country and often prevent them from finding work. None of these rules has much to do with consumer protection. Allowing nurses to provide services for which they are qualified would expand people's options, allowing patients to decide on the more cost-effective course of their treatment. Some states have begun to allow greater competition among health-care providers. Mississippi does not regulate the practice of P.A.s. 
Nearly half the states, including New York, already allow nurse practitioners to write at least some prescriptions, while a handful, such as Oregon and Washington, give A.P.N.s significant autonomy. The Florida Department of Health and Rehabilitative Services is encouraging the training of nurse-midwives. In this area, at least, the Clinton administration wanted to move in the right direction, pledging to "remove inappropriate barriers to practice." The Clinton proposal would have eliminated state laws that ban A.P.N.s from offering primary care -- prenatal services, immunizations, prescription of medication, treatment of common health problems, and management of chronic but standard conditions like asthma -- and would have allowed them to receive insurance reimbursement for such services. Even these modest efforts did not go unchallenged: The California Medical Association attacked the Clintons' proposal as "dangerous to the public's health," and an AMA report argued that expanding the role of nurses would hurt patients, fragment the delivery of care, and even raise costs.

There is, however, no evidence that the public health would be threatened by allowing non-M.D.s to do more. Professionals should be allowed to perform work for which they are well trained -- without direct supervision by a doctor. At the very least, states should relax restrictions in regions, particularly rural areas, that have difficulty in attracting physicians. In this way, those with few health-care options could choose to seek treatment from professionals with less intensive training. A recent Gallup poll found that 86 percent of Americans would accept a nurse as their primary-care practitioner. Why not give them that option? Says Leah Binder of the National League of Nursing, "Let the 'invisible hand' determine how much it should cost to get a primary-care checkup."

Physician assistants, for instance, receive two years of instruction to work directly for doctors and could perform an estimated 80 percent of the primary-care tasks conducted by doctors, such as taking medical histories, performing physical exams, and ordering tests. Similarly, the Office of Technology Assessment figures that nurses with advanced practices could provide 60 to 80 percent of the clinical services now reserved for doctors. Explains Arthur Caplan of the University of Minnesota, nurse practitioners are "an underutilized, untapped resource that could help reduce the cost of health care significantly." Len Nichols, a Wellesley economist, estimates that removing restrictions on A.P.N.s could save between $6.4 billion and $8.8 billion annually. Mary Mundinger, the dean of Columbia University's School of Nursing, contends that nurse practitioners have been providing primary care for decades and no research, even that conducted by doctors, has ever documented any problems. Lonnie Bristow, the chairman of the AMA, admits as much, but responds that those nurses were working under a doctor's supervision. But that supervision is often quite loose. Nurses regularly perform many simple aspects of primary care far more often than doctors and, as a result, are better qualified to handle them in the future, with or without the supervision of an M.D.

None of the AMA's arguments withstands analysis. For instance, the official AMA report claims that because nurses want to serve all populations and not just "underserved" groups in rural and inner-city areas, "there is virtually no evidence to support" the claim that empowering other medical professionals would improve access to care.
But increasing the quantity of primary health-care providers would necessarily make additional medical professionals available to every area. Moreover, poor rural communities would likely be better able to afford the services of A.P.N.s, whose median salary nationwide is $43,600, than those of general-practice M.D.s, whose median salary is $119,000. Even if allowing nurses to do more increased competition only in wealthier areas, it would thereby encourage some medical professionals, including doctors, to consider moving to underserved regions where the competition is less intense.

The most compelling argument against relaxing restrictions on nurses is that Americans' health care might somehow suffer. "A nurse with four to six years of education after high school does not have the same training, experience, or knowledge base as a physician who has 11 to 16 years," complains Daniel Johnson, the Speaker of the AMA's House of Delegates. True enough, but so what? No one is suggesting that nurses do anything but the tasks nurses are trained to do. In fact, the OTA study judged A.P.N. care in a dozen medical areas to be better than that of M.D.s.

The problem of occupational licensure is not confined to doctors. The nursing profession behaves the same way when it has a chance. Under severe cost pressures, hospitals have increasingly been relying on L.P.N.s, nurse's aides, and "patient-care assistants." The cost savings can be great: Nurses typically receive two to four times as much training as licensed practical nurses and command salaries 50 percent greater. Yet in many hospitals they still bathe and feed patients. Stanford University Hospital has saved $25 million over the last five years by reducing the share of R.N.s among patient-care employees from 90 percent to 60 percent. The consulting firm of APM, Inc. claims that, since 1987, it has assisted 80 hospitals in saving some $1 billion. Alas, professional groups like the American Nurses Association have opposed these efforts.

To bring competition to the medical profession, patients should also be allowed greater access to practitioners of unorthodox medicine. In 1990, a tenth of Americans -- primarily well-educated and middle- to upper-income -- went to chiropractors, herbal healers, massage therapists, and the like. Health insurance covered few such treatments. Some of these procedures may seem spurious, but then, practices like acupuncture were once regarded similarly before gaining credibility. The most important principle is to allow patients free choice to determine the medical treatments they wish to receive. This means relaxing legal restrictions on unconventional practitioners and creating a health-insurance system that would allow those inclined toward alternative treatments to acquire policies tailored to their preferences.

Most important, states should address the obstacles to becoming and practicing as an M.D. This nation suffers from an artificial limit on physicians. Observes Andrew Dolan of the University of Washington, the argument that occupational licensing is necessary "to protect patients against shoddy care" is "unproven by almost any standard." Experience suggests that licensure reflects professional rather than consumer interests. At the least, states should eliminate the most anti-competitive aspects of the licensing framework, particularly barriers to qualifying as doctors and to competition. These include the power of doctors to control entry into their own profession and to restrict competitive practices.
As the National Center for Policy Analysis's John Goodman and Gerald Musgrave explain, "Virtually every law designed to restrict the practice of medicine was enacted not on the crest of widespread public demand but because of intense pressure from the political representatives of physicians." Although licensure is defended as necessary to protect patients, local medical societies spent years fighting practices (such as advertising, discounting, and prepaid plans) that served patients' interests, as well as imposing fixed-fee schedules on their members. No existing licensing requirement should escape critical review. More far-reaching reform proposals include substituting institutional licensure of hospitals and establishing a genuine free market in health care (backed by private certification and testing and continuing malpractice liability). Such approaches seem shocking today only in the context of the vast regulatory structure that has been erected over the years. If we are serious about increasing access to and reducing the expense of medical care, we should give careful consideration to full deregulation. Such steps would do much to achieve the Clinton administration's goal of encouraging more primary-care physicians and more physicians from racial minorities.

The federal government shares some of the blame for clogging the pipeline of medical professionals, because its Medicaid and Medicare reimbursement rules encourage needlessly large and over-trained medical staffs. Medicare, for instance, requires hospitals to use only licensed laboratory and radiological technicians, and engage a registered nurse to provide or supervise the nursing in every department. Only nurse practitioners operating in nursing homes or rural areas can be reimbursed under Medicare. Only 18 states allow Medicaid reimbursement for A.P.N.s. Non-hospital facilities such as community health centers, which play a particularly important role in poor and rural areas, also face tough staffing requirements. These sorts of restrictions hamper the shift to less expensive outpatient services. With enough political will, the federal government could play a role in easing state licensure, just as the Federal Trade Commission fought professional strictures against advertising.

The collapse of the Clinton campaign for radical reform was welcome, but the American medical system still needs fixing. The supply side would be a good place to start. Rising costs require us to look for cost-effective alternative providers. Even more important: Patients should have the largest possible range of options when determining their health care. It's time to integrate the practice of medicine into the market economy.

|By: aa [19 Feb 54 2:14] ( IP A:184.108.40.206 X: )|

Even if it gets more expensive, it won't be more than 3%. Giving birth at Rama or Siriraj costs 5,000-10,000 baht; if you want a special arrangement with the doctor you have to slip an envelope of another 5,000-10,000 on top of that. And when the baby is coming you still can't deliver: you have to wait for the doctor while the nurse holds it back with her hand (or so they say). The really overpriced places run about 50,000 to 100,000.

|By: It's the doctor's fees that are expensive [19 Feb 54 9:37] ( IP A:220.127.116.11 X: )|

Doctors who like to threaten ordinary people and threaten the government are all talk. They don't dare reveal themselves. These people are cowards, good only at making threats.

|By: Nobody is afraid of them [19 Feb 54 21:51] ( IP A:18.104.22.168 X: )|

Wow, traditional midwives and alternative medicine.

|By: thailand only [21 Feb 54 8:33] ( IP A:22.214.171.124 X: )|

To comment no. 8: How about you reveal yourself first? Not a challenge, just asking.

|By: Show a little courage [24 Feb
54 2:08] ( IP A:126.96.36.199 X: )|

To comment no. 7: So what if doctors' fees are expensive? You already have the option of going to a public hospital. If you think public hospitals are expensive too, then go to a traditional midwife like you said, or just treat yourself and deliver the baby yourself (Westerners manage it; why don't you try?). The public hospitals are losing money as it is. The Act will probably have to be passed, and then quality will be controlled the way the Act wants: see only a handful of OPD patients a day. Only then will there be any profit left (and no need to compensate the people who get sicker because they could not be examined in time, because if that case were covered by compensation too, there would be tens of thousands of lawsuits a day nationwide, and who would have time to handle them?).

|By: Go ahead and pass it, so plenty of people die [24 Feb 54 2:16] ( IP A:188.8.131.52 X: )|

"And when the baby is coming you still can't deliver: you have to wait for the doctor while the nurse holds it back with her hand (or so they say)" >>> Who says??? If the baby is coming, the nurse delivers it anyway; there is no need to go so far as holding it back by hand.

|By: thailand only [24 Feb 54 9:08] ( IP A:184.108.40.206 X: )|

But looking at it, is it really that hard? Traditional midwives have been doing it for a long time.

|By: Worth thinking about [26 Feb 54 12:36] ( IP A:220.127.116.11 X: )|

|By: thailand only [28 Feb 54 8:58] ( IP A:18.104.22.168 X: )|

They never bother to look after their own health, but when they come in and die they make demands on the doctor and expect the doctor to provide for them. I am a human being, not a god. I worked 48 hours without sleep. I had slept 15 minutes when I came out to examine patients at 3 a.m.; I admit I could not think straight, I was so drowsy, completely numb... Could someone please add a protective law forbidding doctors who have had less than 8 hours of sleep a day from examining patients?....

|By: A passer-by [28 Feb 54 21:11] ( IP A:22.214.171.124 X: )|
Irish Free State
|Irish Free State|
|Dominion of the British Empire (until 1931)|
|Anthem: "Amhrán na bhFiann" ("The Soldiers' Song")|
|53°21′N 6°16′W / 53.350°N 6.267°W|
|Government||Parliamentary constitutional monarchy|
|Monarch|
|•||1936–1937||Arguably George VI|
|Governor-General|
|•||1922–1927||Timothy Michael Healy|
|•||1932–1936||Domhnall Ua Buachalla|
|President of the Executive Council|
|•||1922–1932||W. T. Cosgrave|
|•||1932–1937||Éamon de Valera|
|History|
|•||Anglo-Irish Treaty||6 December 1921|
|•||Constitution of the Irish Free State||6 December 1922|
|•||Northern Ireland opt-out||8 December 1922|
|•||Constitution of Ireland||29 December 1937|
|Area|
|•||Until 8 December 1922||84,000 km² (32,433 sq mi)|
|•||After 8 December 1922||70,000 km² (27,027 sq mi)|
|Currency||Pound sterling (1922–27)|
|Saorstát pound (1928–37)|
|Today part of||Ireland|

The Irish Free State (Irish: Saorstát Éireann [sˠiːɾˠsˠˈt̪ˠaːt̪ˠ eːɾʲən̪ˠ]; 6 December 1922 – 29 December 1937) was an independent state established in 1922 under the Anglo-Irish Treaty of December 1921. That treaty ended the three-year Irish War of Independence between the forces of the self-proclaimed Irish Republic, the Irish Republican Army (IRA), and British Crown forces.

The Free State was established as a Dominion of the British Commonwealth of Nations. It comprised 26 of the 32 counties of Ireland. Northern Ireland, which comprised the remaining six counties, exercised its right under the Treaty to opt out of the new state. The Free State government consisted of the Governor-General, the representative of the king, and the Executive Council, which replaced both the revolutionary Dáil Government and the Provisional Government set up under the Treaty. W. T. Cosgrave, who had led both of these governments since August 1922, became the first President of the Executive Council. The legislature consisted of Dáil Éireann (the lower house) and Seanad Éireann, also known as the Senate. Members of the Dáil were required to take an Oath of Allegiance, swearing fidelity to the king. The oath was a key issue for opponents of the Treaty, who refused to take the oath and therefore did not take their seats. Pro-Treaty members, who formed Cumann na nGaedheal in 1923, held an effective majority in the Dáil from 1922 to 1927, and thereafter ruled as a minority government until 1932.

In the first months of the Free State, the Irish Civil War was waged between the newly established National Army and the anti-Treaty IRA, who refused to recognise the state. The Civil War ended in victory for the government forces, with the anti-Treaty forces dumping their arms in May 1923. The anti-Treaty political party, Sinn Féin, refused to take its seats in the Dáil, leaving the relatively small Labour Party as the only opposition party. In 1926, when Sinn Féin president Éamon de Valera failed to have this policy reversed, he resigned from Sinn Féin and founded Fianna Fáil. Fianna Fáil entered the Dáil following the 1927 general election, and entered government after the 1932 general election, when it became the largest party.

De Valera abolished the Oath of Allegiance and embarked on an economic war with Britain. In 1937 he drafted a new constitution, which was passed by a referendum in July of that year. The Free State came to an end with the coming into force of the new constitution on 29 December 1937. Under the new constitution the Irish state was named Ireland.
The Easter Rising of 1916, and particularly the execution of fifteen people by firing squad, the imprisonment or internment of hundreds more, and the imposition of martial law caused a profound shift in public opinion towards the republican cause in Ireland. Meanwhile, opposition increased to Ireland's participation in World War I in Europe and the Middle East. This came about when the Irish Parliamentary Party supported the Allied cause in World War I in response to the passing of the Third Home Rule Bill in 1914. Many people had begun to doubt whether the Bill, passed by Westminster in September 1914 but suspended for the duration of the war, would ever come into effect. Because the war situation deteriorated badly on the Western Front in April 1918, coinciding with the publication of the final report and recommendations of the Irish Convention, the British Cabinet drafted a doomed "dual policy" of introducing Home Rule linked to compulsory military service for Ireland, which it eventually had to drop. Sinn Féin, the Irish Party and all other Nationalist elements joined forces in opposition to the idea during the Conscription Crisis of 1918. At the same time the Irish Parliamentary Party lost support on account of the crisis. Irish republicans felt further emboldened by successful anti-monarchical revolutions in the Russian Empire (1917), the German Empire (1918), and the Austro-Hungarian Empire (1918).

In the December 1918 General Election, Sinn Féin won a large majority of the Irish seats in the Westminster parliament of the United Kingdom of Great Britain and Ireland: 73 of the 105 constituencies returned Sinn Féin members (25 uncontested). The Sinn Féin party, founded by Arthur Griffith in 1905, had espoused non-violent separatism. Under Éamon de Valera's leadership from 1917, it campaigned aggressively and militantly for an Irish republic. On 21 January 1919, Sinn Féin MPs (who became known as Teachta Dála, TDs), refusing to sit at Westminster, assembled in Dublin and formed a single-chamber Irish parliament called Dáil Éireann (Assembly of Ireland). It affirmed the formation of an Irish Republic and passed a Declaration of Independence, declaring that "the Irish people is resolved... to promote the common weal, to re-establish justice... with equal rights and equal opportunity for every citizen", and calling itself Saorstát Éireann in Irish. Although a less than overwhelming majority of Irish people accepted this course, America and Soviet Russia were targeted to recognise the Irish Republic internationally. The Message to the Free Nations of the World called on every free nation to support the Irish Republic by recognizing Ireland's national status... the last outpost of Europe towards the West... demanded by the Freedom of the Seas. Cathal Brugha, elected President of the Ministry Pro-Tem, warned, "Deputies you understand from this that we are now done with England."

A war for a new independent Ireland

The War of Independence (1919–1921) pitted the army of the Irish Republic, the Irish Republican Army (known subsequently as the "Old IRA" to distinguish it from later organisations of that name), against the British Army, the Black and Tans, the Royal Irish Constabulary, the Auxiliary Division, the Dublin Metropolitan Police, the Ulster Special Constabulary and the Ulster Volunteer Force. On 9 July 1921 a truce came into force.
By this time the Ulster Parliament had opened, established under the Government of Ireland Act 1920, and presenting the republican movement with a fait accompli and guaranteeing the British permanent entanglement in Ireland. On 11 October negotiations opened between the British Prime Minister, David Lloyd George, and Arthur Griffith, who headed the Irish Republic's delegation. The Irish Treaty delegation (Griffith, Collins, Duggan, Barton, and Gavan Duffy) set up headquarters in Hans Place, Knightsbridge. On 5 December 1921 at 11:15 am the delegation decided during private discussions at 22 Hans Place to recommend the negotiated agreement to the Dáil Éireann; negotiations continued until 2:30 am on 6 December 1921, after which the parties signed Anglo-Irish Treaty. Nobody had doubted that these negotiations would produce a form of Irish government short of the independence wished for by republicans. The United Kingdom could not offer a republican form of government without losing prestige and risking demands for something similar throughout the Empire. Furthermore, as one of the negotiators, Michael Collins, later admitted (and he would have known, given his leading role in the independence war), the IRA at the time of the truce was weeks, if not days, away from collapse, with a chronic shortage of ammunition. "Frankly, we thought they were mad", Collins said of the sudden British offer of a truce – although the republicans would probably have continued the struggle in one form or another, given the level of public support. Since Lloyd George had already, after the truce had come into effect, made it clear to President of the Republic, Éamon de Valera, "that the achievement of a republic through negotiation was impossible", de Valera decided not to become a member of the treaty delegation and so not to risk more militant republicans labelling him as a "sellout". Yet his own proposals – published in January 1922 – fell far short of an autonomous all-Ireland republic. Sinn Féin's abstention was unambiguous. As expected, the Anglo-Irish Treaty explicitly ruled out a republic. It offered Ireland dominion status, as a state within the then British Empire – equal to Canada, Newfoundland, Australia, New Zealand and South Africa. Though less than expected by the Sinn Féin leadership, this deal offered substantially more than the initial form of home rule within the United Kingdom sought by Charles Stewart Parnell from 1880, and represented a serious advance on the Home Rule Bill of 1914 that the Irish nationalist leader John Redmond had achieved through parliamentary proceedings. However, it all but confirmed the partition of Ireland between Northern Ireland and the Irish Free State. The Second Dáil in Dublin ratified the Treaty (7 January 1922), splitting Sinn Féin in the process. Northern Ireland "opts out" The Treaty, and the legislation introduced to give it legal effect, implied that Northern Ireland would be a part of the Free State on its creation, but legally the terms of the Treaty applied only to the 26 counties, and the government of the Free State never had any powers—even in principle—in Northern Ireland. The Treaty was given legal effect in the United Kingdom through the Irish Free State Constitution Act 1922. That act, which established the Free State, allowed Northern Ireland to "opt out" of it. Under Article 12 of the Treaty, Northern Ireland could exercise its option by presenting an address to the King requesting not to be part of the Irish Free State. 
Once the Treaty was ratified, the Houses of Parliament of Northern Ireland had one month (dubbed the "Ulster month") to exercise this option, a month during which the Government of Ireland Act continued to apply in Northern Ireland. Realistically it was always certain that Northern Ireland would opt out of the Free State. The Prime Minister of Northern Ireland, Sir James Craig, speaking in the Parliament in October 1922, said that "when 6 December is passed the month begins in which we will have to make the choice either to vote out or remain within the Free State". He said it was important that that choice be made as soon as possible after 6 December 1922 "in order that it may not go forth to the world that we had the slightest hesitation". The Irish Free State came into being on 6 December 1922, and on the following day, 7 December 1922, the Parliament resolved to make the following address to the King so as to opt out of the Irish Free State:
MOST GRACIOUS SOVEREIGN, We, your Majesty's most dutiful and loyal subjects, the Senators and Commons of Northern Ireland in Parliament assembled, having learnt of the passing of the Irish Free State Constitution Act, 1922, being the Act of Parliament for the ratification of the Articles of Agreement for a Treaty between Great Britain and Ireland, do, by this humble Address, pray your Majesty that the powers of the Parliament and Government of the Irish Free State shall no longer extend to Northern Ireland.
Discussion in the Parliament of the address was short. Prime Minister Craig left for London with the memorial embodying the address on the night boat that evening, 7 December 1922. The King received it the following day, The Times reporting:
YORK COTTAGE, SANDRINGHAM, DEC. 8. The Earl of Cromer (Lord Chamberlain) was received in audience by The King this evening and presented an Address from the Houses of Parliament of Northern Ireland, to which His Majesty was graciously pleased to make reply.
Had the Houses of Parliament of Northern Ireland not made such a declaration, then under Article 14 of the Treaty Northern Ireland, its Parliament and government would have continued in being, but the Oireachtas would have had jurisdiction to legislate for Northern Ireland in matters not delegated to it under the Government of Ireland Act. This, of course, never came to pass. On 13 December 1922 Prime Minister Craig addressed the Parliament, informing members that the King had responded to the Parliament's address as follows:
I have received the Address presented to me by both Houses of the Parliament of Northern Ireland in pursuance of Article 12 of the Articles of Agreement set forth in the Schedule to the Irish Free State (Agreement) Act, 1922, and of Section 5 of the Irish Free State Constitution Act, 1922, and I have caused my Ministers and the Irish Free State Government to be so informed.
Governmental and constitutional structures
The Treaty established that the new Irish Free State would be a constitutional monarchy, with a Governor-General. The Constitution of the Irish Free State made more detailed provision for the state's system of government, with a three-tier parliament, called the Oireachtas, made up of the King and two houses, Dáil Éireann and Seanad Éireann (the Irish Senate). Executive authority was vested in the King and exercised by a cabinet called the Executive Council, presided over by a prime minister called the President of the Executive Council.
The Representative of the Crown
The King in the Irish Free State was represented by a Governor-General of the Irish Free State.
The office replaced the previous Lord Lieutenant, who had headed English and British administrations in Ireland since the Middle Ages. Governors-General were appointed by the King, initially on the advice of the British Government but with the consent of the Irish Government. From 1927 the Irish Government alone had the power to advise the King whom to appoint.
Oath of Allegiance
As with all dominions, provision was made for an Oath of Allegiance. Within the dominions, such oaths were taken by parliamentarians personally towards the monarch. The Irish Oath of Allegiance was fundamentally different. It had two elements: the first, an oath to the Free State "as by law established"; the second, a promise of fidelity "to His Majesty, King George V, his heirs and successors". That second fidelity element, however, was qualified in two ways. It was to the King in Ireland, not specifically to the King of the United Kingdom. Secondly, it was to the King explicitly in his role as part of the Treaty settlement, not in terms of pre-1922 British rule. The Oath itself came from a combination of three sources and was largely the work of Michael Collins in the Treaty negotiations. It came in part from a draft oath suggested prior to the negotiations by President de Valera. Other sections were taken by Collins directly from the Oath of the Irish Republican Brotherhood (IRB), of which he was the secret head. In its structure, it was also partially based on the form used for 'Dominion status'. Although 'a new departure', and notably indirect in its reference to the monarchy, it was criticised by nationalists and republicans for making any reference to the Crown at all, the claim being that it was a direct oath to the Crown – a claim an examination of its wording shows to be demonstrably incorrect. But in the Ireland of 1922 and beyond, it was the perception, not the reality, that influenced public debate on the issue. Had its original author, Michael Collins, survived, he might have been able to clarify its actual meaning, but after his assassination in August 1922 no major negotiator involved in the Oath's creation on the Irish side was still alive, available or pro-Treaty. (The leader of the Irish delegation, Arthur Griffith, had also died in August 1922.) The Oath became a key issue in the resulting Irish Civil War, which divided the pro- and anti-Treaty sides in 1922–23.
The Irish Civil War
The compromises contained in the agreement caused a civil war in the 26 counties between June 1922 and April 1923, in which the pro-Treaty Provisional Government defeated the anti-Treaty Republican forces. The latter were led, nominally, by Éamon de Valera, who had resigned as President of the Republic on the Treaty's ratification. His resignation outraged some of his own supporters, notably Seán T. O'Kelly, the main Sinn Féin organiser. He then sought re-election but was defeated two days later on a vote of 60–58. The pro-Treaty Arthur Griffith followed as President of the Irish Republic. Michael Collins was chosen at a meeting of the members elected to sit in the House of Commons of Southern Ireland (a body set up under the Government of Ireland Act 1920) to become Chairman of the Provisional Government of the Irish Free State in accordance with the Treaty. The general election in June showed overwhelming support for the pro-Treaty parties. W. T. Cosgrave's Crown-appointed Provisional Government effectively subsumed Griffith's republican administration with the deaths of both Collins and Griffith in August 1922.
The "freedom to achieve freedom"
The principal parties of government of the Irish Free State between 1922 and 1937 were Cumann na nGaedheal, under W. T. Cosgrave (1922–1932), and Fianna Fáil, under Éamon de Valera (from 1932). Michael Collins described the Treaty as 'the freedom to achieve freedom'. In practice, the Treaty offered most of the symbols and powers of independence. These included a functioning, if disputed, parliamentary democracy with its own executive, judiciary and written constitution, which could be changed by the Oireachtas. However, a number of conditions existed:
- The King remained king in Ireland.
- Prior to the passage of the Statute of Westminster, the UK government continued to have a significant role in Irish governance. Officially the representative of the King, the Governor-General also received instructions from the British Government on his use of the Royal Assent: a Bill passed by the Dáil and Seanad could be granted assent (signed into law), withheld (not signed, pending later approval) or denied (vetoed). Letters patent to the first Governor-General, Tim Healy, had explicitly named Bills that were to be blocked if passed, such as any attempt to abolish the Oath. In the event, no such Bills were ever introduced, so the issue was moot.
- As a dominion, the Irish Free State had limited independence. Entitlement to citizenship of the Irish Free State was defined in the Irish Free State Constitution, but the status of that citizenship was contentious. One of the first projects of the Irish Free State was the design and production of the Great Seal of Saorstát Éireann, which was carried out on behalf of the Government by Hugh Kennedy.
- The meaning of 'Dominion status' changed radically during the 1920s, starting with the Chanak Crisis in 1922, quickly followed by the directly negotiated Halibut Treaty of 1923. Following an Imperial Conference decision, given effect by the Royal and Parliamentary Titles Act 1927, the King's royal title was changed to take account of the fact that there was no longer a United Kingdom of Great Britain and Ireland. The King adopted the following style, by which he would be known throughout his Empire: By the Grace of God, of Great Britain, Ireland and the British Dominions beyond the Seas King, Defender of the Faith, Emperor of India. That was the King's title in Ireland just as elsewhere in his Empire.
- In the conduct of external relations, the Irish Free State tried to push the boundaries of its status as a dominion. It 'accepted' credentials from international ambassadors to Ireland, something no other dominion up to then had done. It registered the Treaty with the League of Nations as an international document, over the objections of the United Kingdom, which saw it as a mere internal document between a dominion and the United Kingdom.
The Statute of Westminster (1931), embodying a decision of an Imperial Conference, enabled each dominion to enact new legislation or to change any extant legislation without any role for the British parliament that might have enacted the original legislation in the past. The Free State symbolically marked these changes in two mould-breaking moves:
- It sought, and got, the King's acceptance that an Irish minister, to the complete exclusion of British ministers, would formally advise the King in the exercise of his powers and functions as King in the Irish Free State.
Two examples of this are the signing of a treaty between the Irish Free State and the Portuguese Republic in 1931, and the act recognising the abdication of King Edward VIII in 1936 separately from the recognition by the British Parliament.
- The unprecedented replacement of the Great Seal of the Realm with the Great Seal of the Irish Free State, which the King awarded to the Irish Free State in 1931. (The Irish Seal consisted of a picture of King George V enthroned on one side, with the Irish state harp and the words Saorstát Éireann on the reverse. It is now on display in the National Museum of Ireland at Collins Barracks in Dublin.)
When Éamon de Valera became President of the Executive Council (prime minister) in 1932, he described Cosgrave's ministers' achievements simply. Having read the files, he told his son, Vivion, "they were magnificent, son".
The Statute of Westminster allowed de Valera, on becoming President of the Executive Council (February 1932), to go even further. With no ensuing restrictions on his policies, he abolished the Oath of Allegiance (which Cosgrave had intended to do had he won the 1932 general election), the Senate, university representation in the Dáil, and appeals to the Privy Council. One major policy error occurred in 1936, when he attempted to use the abdication of King Edward VIII to abolish the Crown and the governor-generalship in the Free State with the Constitution (Amendment No. 27) Act 1936. He was advised by senior law officers and other constitutional experts that, because the Crown and the governor-generalship existed separately from the constitution in a vast number of acts, charters, orders-in-council and letters patent, they both still existed. A second bill, the Executive Powers (Consequential Provisions) Act, 1937, was quickly introduced to repeal the necessary elements. De Valera retroactively dated the second act back to December 1936.
The new state continued to use sterling from its inception; there is no reference to currency in the Treaty or in either of the enabling Acts. Nonetheless, within a few years the Dáil passed the Coinage Act, 1926 (which provided for a Saorstát [Free State] coinage) and the Currency Act, 1927 (which provided, inter alia, for banknotes of the Saorstát pound). The new Saorstát pound was defined by the 1927 Act to have exactly the same weight and fineness of gold as the sovereign of the time, pegging the new currency at 1:1 with sterling. The State put its new national coinage, marked Saorstát Éireann, into circulation in 1928, along with a national series of banknotes. British coinage remained acceptable in the Free State at an equal rate. In 1937, when the Free State was superseded by Ireland (Éire), the pound became known as the "Irish pound" and the coins were marked Éire.
According to one report, in 1924, shortly after the Irish Free State's establishment, the new dominion had the "lowest birth-rate in the world". The report noted that amongst the countries for which statistics were available (Ceylon, Chile, Japan, Spain, South Africa, the Netherlands, Canada, Germany, Australia, the United States, Britain, New Zealand, Finland and the Irish Free State), Ceylon had the highest birth rate, at 40.8 per 1,000, while the Irish Free State had a birth rate of just 18.6 per 1,000.
After the Irish Free State
In 1937 the Fianna Fáil government presented a draft of an entirely new Constitution to Dáil Éireann. An amended version of the draft document was subsequently approved by the Dáil.
A referendum was then held on the same day as the 1937 general election, and a relatively narrow majority approved the new document. The new Constitution of Ireland (Bunreacht na hÉireann) repealed the 1922 Constitution and came into effect on 29 December 1937. The state was named Ireland (Éire in the Irish language), and a new office of President of Ireland was instituted in place of the Governor-General of the Irish Free State. The new constitution claimed jurisdiction over all of Ireland while recognising that legislation would not apply in Northern Ireland (see Articles 2 and 3). Articles 2 and 3 were reworded in 1998 to remove the jurisdictional claim over the entire island and to recognise that "a united Ireland shall be brought about only by peaceful means with the consent of a majority of the people, democratically expressed, in both jurisdictions in the island". With respect to religion, a section of Article 44 included the following:
The State recognises the special position of the Holy Catholic Apostolic and Roman Church as the guardian of the Faith professed by the great majority of the citizens. The State also recognises the Church of Ireland, the Presbyterian Church in Ireland, the Methodist Church in Ireland, the Religious Society of Friends in Ireland, as well as the Jewish Congregations and the other religious denominations existing in Ireland at the date of the coming into operation of this Constitution.
Following a referendum, this section was deleted in 1973. It was left to the initiative of de Valera's successors in government to achieve the country's formal transformation into a republic. A small but significant minority of Irish people, usually attached to parties like Sinn Féin and the smaller Republican Sinn Féin, denied the right of the twenty-six-county state to use the name Ireland and continued to refer to the state as the Free State. With Sinn Féin's entry into Dáil Éireann and the Northern Ireland Executive at the close of the 20th century, the number of those who refused to accept the legitimacy of the state, already a minority, declined further. After the setting up of the Free State in 1922, some Protestants left southern Ireland and unionism there largely came to an end.
Shortly after slipping into the driver’s seat of the Motor City’s schools last spring, Detroit’s newly created school board came close to getting run off the road. Convened for their first public meeting, the board’s seven members at first found it impossible to proceed amid the shouts of protesters opposed to the takeover law that had led to their appointment two weeks earlier. It was only after Chairman Freman Hendrix ordered police to “have the hecklers removed—now,” that things quieted down and the board was able to get on with its business. Since then, the meetings have gotten calmer. Yet the anger expressed during that tumultuous first meeting has not gone away. At its root are questions that have divided Detroit—and communities across the nation—throughout the course of this century: Who should be in charge of the public schools, and how should they be run? The answers the city has settled on during the past 100 years have often mirrored broader trends in how Americans, especially those who live in cities, have chosen to organize and govern their schools. These have ranged from Progressive-era innovations in the early part of the century to experiments with decentralization, desegregation, and school restructuring during the past 30 years. “Detroit participated in (and in some cases led) virtually every important reform effort involving urban schools,” the education historian Jeffrey Mirel writes in a recent paper published by the Brookings Institution. Yet in other respects, the story of Detroit’s schools is inseparable from the history of the city itself. As interest groups and individuals have competed to forge new frameworks for governing public education, they have often been guided as much by the circumstances of their time and place as by forces at work in the nation at large. These economic, social, and political conditions have exerted powerful influence over how schools function and how students learn. Motown’s schools rose with their city in the early decades of the century to become one of the finest education systems in the country, only to slip into severe decline in later years as the city itself descended into urban decay and racial unrest. In the midst of these challenges, the school system repeatedly resorted to dramatic shakeups in the way the schools were governed. Yet a century of experimentation in Detroit and elsewhere has shown that shifts in governance often disappoint those hoping for tidy solutions. “We’re looking for structural panaceas and political quick fixes to very complex issues,” says Michael D. Usdan, the president of the Washington-based Institute for Educational Leadership. “Tinkering with the structure might be useful, but that in itself is not going to bring about the changes that people want.” At the turn of this century, as automotive pioneers were giving birth to an industry that would transform the nation, an elite corps of school reformers here was plotting changes of its own. Since the official establishment of Detroit’s public schools in 1842, they had been run almost as a collection of village schools. Each of the city’s political subdivisions, known as wards, maintained substantial control over its own schools. For most of the 19th century, those wards sent “inspectors” to sit on a central school board, which expanded as the city added wards. That method of governance was typical of urban schools until about the 1890s, notes David B. Tyack, a professor of education and history at Stanford University. 
But around then, groups of reformers, typically drawn from their communities’ social and business elites, began pressing for change. “Defenders of the ward system argued that grassroots interest in the schools and widespread participation in school politics was healthy, indeed necessary, in large cities,” Tyack writes in his 1974 book The One Best System: A History of American Urban Education. But, he adds, “centralizers saw in decentralization only corruption, parochialism, and vestiges of an outmoded village mentality.” In Detroit, a cadre of upper-crust reformers, many of them women, launched a campaign in 1902 aimed at overhauling the governance structure and ousting the incumbent superintendent. As in many other cities, a central goal was to replace the sprawling, ward-based school board with a smaller one elected by voters citywide. The reformers focused heavily on the alleged character flaws of the ward-based politicians—including the ties some of them had to liquor interests. “The conflict in Detroit centered almost entirely on who should rule, not on specific policies or practices,” Mirel writes in The Rise and Fall of an Urban School System: Detroit, 1907-81, published in 1993. In 1911, the reformers won a narrow majority on the ward-based board, only to lose it two years later. But in that same year, 1913, they persuaded the Michigan legislature to pass a bill creating a seven-member, nonpartisan board elected to staggered, six-year terms. In 1916, Detroit voters overwhelmingly ratified the move in a citywide referendum. One of the new board’s first moves was to consolidate power in the superintendent. It also got rid of the committees through which the old, 21-member board had regularly circumvented the schools chief. Such steps reflected a national trend from 1900 to 1930 to transfer control of the public schools from community-based lay people to university-trained education professionals. In Detroit, the shift was viewed as critical to enabling the schools to keep pace with changes in the fast-growing city. In the first three decades of the century, as Detroit established itself as the axis of America’s automobile industry, the city’s population mushroomed from 285,000 to nearly 1.6 million people. Finding enough teachers and space for the children of the new arrivals, many of them from Eastern and Southern Europe, was a formidable challenge. From fewer than 30,000 youngsters in 1900, K-12 enrollment swelled to 235,000 by 1930, although many students had to attend only part time for lack of space. From the advent of the small school board until the onset of the Great Depression, the city saw relatively little political dissension over the basic priorities of expanding and adapting the system to accommodate the enrollment boom, Mirel said in a recent interview. Superintendent Frank Cody, named to the position in 1919, served as the district’s chief for what today seems an extraordinary 23 years. If the Detroit schools ever enjoyed a golden age, Mirel suggests, that was it. “Every major interest group in the city strongly supported efforts to provide a high-quality, ‘modern’ public education to the children of the city,” he says. “This support gave Detroit school leaders the unprecedented opportunity to create one of the great urban school systems of the 20th century.” But the consensus crumbled under the economic burdens of the Depression. As unemployment soared and relief lines lengthened, the city schools faced a staggering fiscal crisis. 
Business leaders soon broke ranks with school officials, chastising them for moving too slowly to bring spending in line with the blighted economy. Despite their reluctance, district officials retrenched in the early 1930s, cutting the budget by constricting teacher pay, halting construction, increasing the size of classes, and eliminating programs that critics decried as superfluous. During those lean times, the nascent Detroit Federation of Teachers, at first operating underground, began trying to influence school board elections, with limited success. That activity on the part of the fledgling American Federation of Teachers affiliate aroused fierce criticism in the business community. After the country entered World War II, Detroit emerged as an arms-producing powerhouse. That enhanced the clout of business and labor alike, which continued to clash over school funding and curricular issues. As its ranks swelled during the 1940s, the DFT increasingly tried to influence the outcome of school board elections and decisions, with limited success. Yet in 1947, the union came out on top in a struggle over salaries, which the union insisted must take precedence over building projects aimed at catching up with years of neglect. The district capitulated after teachers threatened to walk out. Mary Ellen Riordan, the president of the union from 1960 through 1981, views the near-strike as a major step in the DFT's eventual emergence as the most powerful interest group in Detroit school politics. “That was the first big watershed,” she says. Business-backed groups reacted with outrage to the labor action and the resulting pay hikes, and in 1948, state lawmakers passed legislation outlawing strikes by public employees. Another effect of the showdown with the teachers was a growing sense that the district needed fiscal autonomy to set funding priorities as it saw fit. "[T]he power of the mayor and City Council over the activities of the school district effectively made the public school system a department of the city,” the Citizens Research Council of Michigan noted in a 1990 report. After vigorous lobbying by school officials and municipal leaders, the legislature granted the district financial independence in 1949. That governance milestone freed the school board from relying on the city to approve both its budgets and borrowing requests. And it made the capacity to deliver votes, rather than wield influence behind the scenes, of greater value in the political equation. The shift served to bolster the political fortunes of various interest groups in the city, including labor unions and the city’s growing population of African-Americans. Drawn from the South by plentiful wartime jobs, blacks climbed from just 9 percent of Detroit’s population in 1940 to more than 16 percent in 1950. At the same time, many Southern whites were migrating to the city, fueling tensions that burst into outright conflict during a devastating race riot in 1943. Racial strains sometimes ignited in the schools throughout the 1940s. After the 1943 riot, groups representing African-Americans stepped up their pressure on school officials to address their grievances, which included a dearth of black school employees, a pattern of unofficial segregation, and the rundown condition of schools in black neighborhoods. On some of those issues, notably that of facilities, black leaders were sometimes at odds with the DFT, whose overriding concern was teacher pay.
But as the decade progressed, African-Americans increasingly found common cause with the federation, as well as other labor unions and liberal organizations that exerted influence over education politics. In the late ‘40s, those minority, labor, and liberal groups joined forces in a failed attempt to oust the incumbent schools chief. Out of that effort grew Save Our Schools, or SOS, an organization that brought together the teachers’ federation, other labor leaders, black activists, parents, and various civic groups in efforts to increase funding, end racial discrimination, and expand community control of the schools. Between 1949 and 1961, according to Mirel, 11 of 14 candidates backed by Save Our Schools won seats on the school board. During the 1950s, city dwellers throughout the country headed for the suburbs, and Detroiters were no exception. The exodus drained the city of many white, middle-class families who had been a fountainhead of political support for the schools. Blacks, by contrast, continued to flock to the city that perhaps more than any other stood for high-paying jobs. The number of blacks rose by more than 60 percent during the decade, while the white population fell by nearly a quarter. By 1960, the city’s overall population had dropped to 1.67 million from 1.85 million a decade earlier, but the proportion of African-Americans had jumped from 16 percent to 29 percent. Meanwhile, the suburbs, collectively home to more than 2 million people by 1960, had overtaken the city in total population. For the schools, one consequence of the demographic shifts was a sharp drop in property values and a resulting decline in the district’s tax base. Another was a downward shift in the socioeconomic status of families using the system. Race began to assume greater importance in the district’s political dynamics. Buoyed by the U.S. Supreme Court’s historic 1954 decision striking down intentional school segregation, black leaders pressed harder for integration and elimination of unequal distribution of resources. In 1955, an African-American was elected to the Detroit school board for the first time. Besides racial issues, the big challenge facing school leaders was coping with rapid enrollment growth while the tax base was shrinking. Despite the city’s overall population decline, the post-World War II baby boom pushed enrollment up from 232,230 in 1950 to 285,304 a decade later. The growth continued until 1966, when enrollment peaked at just under 300,000. By then, the racial profile had shifted sharply, with whites making up only about 40 percent of students. As district leaders turned to the electorate for tax increases and bond issues to keep up with rising enrollment and declining property values, voters proved harder to persuade. In one such referendum, in April 1963, the resounding defeat of both a proposed tax increase and a school construction bond issue left the district facing the loss of nearly a third of its operating budget. “Detroit schools were in real danger,” writes Donald W. Disbrow in his 1968 book Schools for an Urban Society. School officials went back to city voters the following November, seeking only to renew the existing school tax rather than raise it, and this time succeeded. But less than a year later, in September 1964, voters again turned thumbs down to a $75 million bond issue for school construction. It was during those elections that clear signs of a color line emerged, Mirel says.
Whites, who still outnumbered blacks in the voting booths though not in the schools, began rejecting spending measures at far higher rates than in the past. Fueling the trend were race-related clashes starting in the late 1950s over school boundaries, conflicts that were to peak in the busing struggles of the 1970s. Besides alienating whites, the disputes helped turn a minority of blacks against school revenue increases. “These elections signaled the beginning of a sea change in educational politics in the Motor City,” Mirel says of the 1963 and 1964 spending votes. “In the 1930s, the business community abandoned its commitment to expanding and improving the Detroit schools. Similarly, during the racial struggles of the 1960s and early 1970s, large numbers of the white working class and a small but vocal segment of the black community would essentially do the same.” But if voters were exerting pressure on school leaders to keep spending down, an increasingly militant teaching corps was pushing in the opposite direction. For decades, Detroit educators had divided their loyalty between the DFT, which had strong ties to organized labor, and the local affiliate of the National Education Association, which saw itself more as a professional organization than a trade union. While the two groups were active in the political arena over the years, teachers did not collectively bargain for their contracts with the district. But beginning in 1963, encouraged by the success of New York City’s AFT affiliate in gaining collective bargaining rights two years earlier, the DFT started pressing the school board to let Detroit teachers hold a similar election. After the union threatened to strike, school leaders agreed to hold an election in May of 1964. Intense competition between the DFT and the Detroit Education Association ensued. In the end, the federation garnered about 60 percent of the vote, becoming the union designated to negotiate for all the district’s teachers. The teachers wasted no time in exercising their newfound clout. In 1965, the DFT extracted a sizable wage increase from the cash-strapped district. It did the same in 1967, after a two-week strike. Like the teachers, African-American activists became more assertive in the middle and late ‘60s, as students joined community groups in protesting conditions in the schools. By 1967, the year of traumatic race riots in the city that were among the worst in any U.S. city this century, the goal of transforming the system primarily through integration had fallen out of favor with a growing segment of the city’s black residents. Instead, some leaders advanced the view that unequal education for African-Americans could be remedied only through black control of the schools. Out of that movement came the push for community control, which culminated in a plan to decentralize the system in the early 1970s. Looking back, some see the late 1960s as a turning point in the school system’s fortunes. Before then, says John W. Porter, a former Michigan state schools chief who was the superintendent in Detroit from 1989 to 1991, the city generally benefited from concerned and supportive parents and community members, motivated students, and an effective, dedicated staff. But after that period, he says, such conditions became harder to come by, and the quality of leadership in the district faltered. “Governance is fragile when the socioeconomic conditions shift, and that’s what happened in Detroit,” Porter says.
By the end of the 1960s, Detroit’s schools were in many ways unrecognizable from those of 1916. Yet the basic governance structure established that year—a seven-member school board elected at large—still endured more than half a century later. That would change during the turbulent events of the next few years. In April 1969, state Sen. Coleman A. Young—a former union leader who in 1974 would become the city’s first black mayor—introduced legislation that called for creating regional school boards in the city, while preserving a central board with a mixture of subdistrict and at-large representatives. Four months later, Gov. William Milliken signed the bill, which called for abolishing the seven-member board on Jan. 1, 1971. The lame-duck school board then turned its attention to carving out the new subdistricts. Rejecting the pleas of both those who wanted to maximize black control in the regions and those who favored some largely white subdistricts, the board made racial integration its foremost concern. On April 7, 1970, at a meeting that Mirel calls “probably the most tumultuous in the history of the school system,” the school board approved a plan aimed not only at creating racially integrated regions, but also at desegregating the city’s high schools. The outcry was deafening, and weeks later, the Michigan legislature repealed the decentralization law outright, eliminating the need for the controversial subdistricts. Young promptly drew up an alternative bill to create eight regions with their own five-member boards. The central board would include the chairman of each regional board, as well as five members elected at large. The bill entrusted the delicate matter of regional boundaries to a gubernatorial commission and effectively nullified the city school board’s desegregation plan. Gov. Milliken made the bill law in July 1970. The following month, the four liberal school board members who supported the desegregation plan fell prey to a recall effort. Two weeks later, the National Association for the Advancement of Colored People sued the state and city school board to desegregate the system. The suit, Milliken v. Bradley, would make the federal courts a major player in governing the district until 1989. After the governance change, the school board that took office in January 1971 confronted the district’s worst fiscal crisis since the Depression. Between 1968 and 1972, voters rejected six requests for tax renewals or increases. The tide finally turned in the fall of 1973, but only after the legislature had empowered the board to impose an income tax without the voters’ go-ahead. Voters then approved two millage proposals in the 1973-74 school year in exchange for repeal of that unpopular income tax. But the district remained on precarious financial footing, in part because of renewed demands by teachers for higher pay. The budget problems were exacerbated by debate over desegregation. During the 1971-72 school year, the judge in the Milliken case mandated a sweeping desegregation plan involving not only Detroit but also 52 surrounding suburban districts. That order was later scaled back to the city alone by a 1974 U.S. Supreme Court ruling. Anti-busing anger exploded, and often translated into opposition to spending measures. Meanwhile, conflict with employee unions became a serious problem. Blaming educators for black students’ academic problems, school leaders who had emerged from the community-control movement were determined to make employees more accountable.
Throughout the early 1970s, the central and regional boards clashed with the DFT over efforts to do so. “It all related to the central issue of developing the concept that teachers had some direct responsibility for student achievement, and as a result of that, teachers could be evaluated based on student outcomes,” says Arthur Jefferson, the district’s superintendent from 1975 to 1989. The tension boiled over in the fall of 1973, when the teachers staged a 43-day strike, largely over accountability issues. The teachers prevailed, winning a large wage increase and effectively relegating accountability to the back burner. But the strike was quickly followed by a racially charged dispute over an employee residency requirement the central board adopted in March 1974. The DFT and the Organization of School Administrators and Supervisors came out on top after challenging the policy in court, but not before the fight had further poisoned the relationship between district leaders and employees. For administrators, recalls Stuart C. Rankin, who was deputy superintendent when he left the district a decade ago after 36 years, the fragmented nature of governance during the decentralization era proved frightening. “The superintendent of schools had a circus,” he says. “It was a very complex and rather strange time.” In the schools, meanwhile, conditions deteriorated, as violence rose and test scores dropped. Those problems contributed to political disenchantment with decentralization. “By 1978, opposition to decentralization was widespread,” Mirel notes. Once again, the stage was set for a turn of the governance wheel. That change came in 1981, when state lawmakers enacted legislation requiring a referendum in the city on recentralizing the system. Residents voted resoundingly to abandon the regional boards, and a new central board of 11 members soon took office, with seven members elected from subdistricts and four chosen by voters citywide. The schools that the new board took over continued to be hobbled by poor academic performance and violence. Many of the district’s schools, showpieces in the ‘teens and ‘20s, were falling apart. Enrollment had been shrinking for years, and was about 85,000 students lower than its 1966 peak. Against the backdrop of a severe recession that hit the auto industry with a vengeance, the city’s new school leaders confronted fiscal pressures as acute as those of a decade before. Although the political climate for tax increases had improved, the property-tax base continued to shrink. “Detroit was no different than Chicago, and New York, and Philadelphia in that there were constant financial issues, and we tried to deal with them as best we could,” Jefferson recalls. The budget shortfalls that started in the late ‘70s had mounted to a cumulative debt of $160 million a decade later. The sizable salary increases granted to teachers, who shored up their bargaining position with strikes in 1982 and 1987, were a primary reason for the red ink. The union argued that the raises were essential to prevent teaching talent from flocking to the better-paying suburbs. In retrospect, says Jefferson, “in some of those contract negotiations, we probably went too far in terms of what we could afford because we were trying so hard to be equitable and fair to our teachers.” Given the chronic fiscal problems, voters were receptive in 1988 when a slate of four candidates pledging to balance the books and devolve power to individual schools sought seats on the school board.
Much of their campaign focused on an attention-grabbing issue that attracted criticism from state officials as well: the practice of some board members of traveling first class at public expense and of being chauffeured by school employees in district-owned cars. The reform-minded candidates, known as the HOPE team, trounced their four incumbent opponents. Under the leadership of the new board and interim Superintendent Porter, the district got its fiscal house in order after voters approved a tax increase and bond issue in the fall of 1989. Then, in 1991, with the hiring of a new superintendent, the board sought to implement changes, including school-based management and more specialized schools. But the effort fell apart after the DFT, which had initially supported the 1988 reform candidates, switched gears in 1992. “They started to do things that we just couldn’t live with,” John M. Elliott, the president of the DFT since 1981, says of those board members. Three of the four original HOPE candidates lost their seats, and the reconstituted board turned away from its predecessor’s agenda for change. In the ensuing years, Detroit school leaders came under growing attack for the district’s management practices and lagging educational performance. Citing low student test scores and high dropout rates, Michigan Gov. John Engler put forward a proposal in 1997 that would have opened the way for the state to take over the district. The idea made little headway in the legislature. But early this year, the Republican governor resurrected the idea in a revised form, pushing a plan to shift control of the 174,000-student Detroit schools to Mayor Dennis W. Archer, a Democrat. After heated debate last winter and spring, lawmakers approved a measure restoring a seven-member board, with six seats appointed by the mayor and the seventh filled by the state superintendent. The “reform board,” chaired by Deputy Mayor Hendrix, took office in late March. The board’s chief responsibilities are to name a chief executive officer, who is entrusted with nearly all the powers usually vested in a school board, and to approve a strategic plan. During the search for a permanent schools chief, an interim CEO, a former president of Wayne State University, David Adamany, was tapped to run the district under a one-year contract. Detroit voters will decide in five years whether to keep the new governance scheme. The new arrangement was loosely modeled on the 1995 governance change in Chicago that gave a small, mayorally appointed school board and a CEO extraordinary new powers to run the schools. The two cities are among a handful in which mayors have assumed control of their schools in recent years. As in other cities, Detroit’s new governance plan has drawn fire from those who say it disenfranchises city voters by stripping the elected school board of power. Hendrix acknowledges that the complaint has resonance in a city where the vast majority of voters and more than 90 percent of public school students are black. Still, he and other supporters of the change say citizens can now hold the mayor accountable at election time. “We have extraordinary problems here,” Hendrix says, “and we need to give someone the authority to do what needs to be done.” Whether the latest arrangement succeeds will depend on many factors, including the relationship the city’s new school leaders forge with educators and parents.
As the century draws to a close, it’s an open question whether lasting improvements are in store for a system whose status as a beacon of excellence has long since slipped into history. “The number-one problem we face,” says Porter, “is not whether you produce some reforms, but whether you can sustain those reforms.” A version of this article appeared in the November 17, 1999 edition of Education Week as No Easy Answers
Dr. Gary G. Kohls — Global Research — 55 years ago (July 2, 1961) an American literary icon, Ernest Hemingway, committed suicide at his beloved vacation retreat in Ketchum, Idaho. He had just flown to Ketchum after being discharged from a psychiatric ward at the Mayo Clinic in Rochester, MN, where he had received a series of electroconvulsive “treatments” (ECT) for a life-long depression that had started after he had experienced the horrors of World War I. In the “War To End All Wars” he had been a non-combatant ambulance driver and stretcher-bearer. One of Hemingway’s wartime duties was to retrieve the mutilated bodies of living and dead humans and the body parts of the dead ones from the Italian sector of the WWI battle zone. In more modern times his MOS (military occupational specialty) might have been called Graves Registration, a job that – in the Vietnam War – had one of the highest incidences of posttraumatic stress disorder (PTSD) that arose in that war’s aftermath. Hemingway, just like many of the combat-induced PTSD victims of every war, was likely haunted for the rest of his life by the horrific images of the wounded and dead, so there was no question that he had what was later to be understood as combat-induced PTSD with depression, panic attacks, nightmares, auditory and/or visual hallucinations and insomnia. Unfortunately for Papa, the psychiatrists at the Mayo Clinic were unaware of the reality of the PTSD phenomenon. They mistakenly thought that he had a mental illness (depression) of unknown etiology. (The diagnosis of PTSD wasn’t validated by the American Psychiatric Association as a Diagnostic and Statistical Manual (DSM) diagnosis until 1980.) Hemingway, a legendary chronic alcoholic who consumed large volumes of hard liquor daily, had also been wounded by shrapnel in WWI, so he probably also had physical pain issues. Therefore, like many other soldier-victims of combat-induced PTSD, he used alcohol to self-medicate his physical pain as well as his psychic pain, anxiety, insomnia, nightmares, failed marriages and the financial stresses related to the alimony payments to his ex-wives. Following his Mayo Clinic misadventure, Hemingway rapidly came to understand that his latest ECT “treatments” had erased his memory and creativity; because those faculties were essential for him to continue his writing career, he felt that he no longer had a reason for living, and he ended his life. There is no record of what psychiatric drugs he had been prescribed over the years, but ECT is typically attempted only after all drug options have failed.
The Parallel Paths of Artistic Geniuses Like Hemingway and Williams (and Michael Jackson and Prince)
53 years after Hemingway’s self-inflicted death, on July 1, 2014, another American icon, actor and comedian Robin Williams, entered the Hazelden psychiatric facility and addiction treatment center – also in my home state of Minnesota. He was treated with a cocktail of drugs for a month and, shortly after his discharge, committed suicide by hanging (August 11, 2014). The cocktail of newly prescribed brain-altering drugs surely was a major factor in his becoming increasingly depressed, losing appetite, losing weight and withdrawing from his loved ones. His discharge medications, which included the antidepressant drug Remeron, the anti-psychotic drug Seroquel (probably prescribed off-label for his insomnia) and an unknown anti-Parkinsonian drug, caused him to be somnolent, despondent, despairing and increasingly depressed.
Remeron, it should be emphasized, is well known to cause suicidal thinking (and attempts) and carries the Food and Drug Administration’s “black box” warning for suicidality. After he returned home, he was said to have slept in his darkened bedroom, up to 20 hours a day, in a probably drug-induced stupor. Remeron, it is helpful to remind readers, was one of the two psych meds (the other was the anti-psychotic drug Haldol) that the infamous Andrea Yates was taking before she irrationally drowned her five children – including her 6-month-old baby Mary – in the family bathtub. The devoutly religious Texas mother was convicted of first-degree murder and sentenced to life imprisonment but – at re-trial – had her conviction changed to “not guilty by reason of insanity” (rather than “not guilty by reason of the intoxicating, insanity-inducing and homicidal effects of psychiatric medications!”). She is now spending the rest of her life in a psychiatric facility, no longer a threat to children. Robin Williams was said to have been diagnosed with Parkinson’s Disease while at Hazelden. The symptoms of Parkinson’s Disease are well known to be caused by antipsychotic drugs. Children who have been given anti-psychotic drugs (most commonly foster care children) are now coming down with Parkinson’s Disease, an illness totally unheard of prior to the formation of the subspecialty of Pediatric Psychiatry.
The Secrets of NIMH (and Hazelden)
30 years ago or so a cartoon movie was released about lab rats that were trying to escape extermination by the National Institute of Mental Health. The movie was titled “The Secret of NIMH”. I tried to watch it a few years ago and was disappointed to discover that it really didn’t expose any of the real secrets of NIMH, its American Psychiatric Association foundations or the psychopharmaceutical industry’s unholy alliance with NIMH. I understand that a remake of the film is planned. I hope some of the real secrets will be revealed in the new film. Robin Williams left no suicide note, and so far Hazelden is mum on what happened behind closed doors during that fateful – and failed – month-long stay.
“What Brain-Altering Drugs Was Williams or Michael Jackson or Prince On?”
Williams’ legendary cocaine and amphetamine use is certainly a factor to consider as a contributing cause of his suicide, for such drugs are notoriously toxic to mitochondria and brain cells. What is also deserving of consideration is the fact that when patients abruptly quit taking an antipsychotic drug, withdrawal symptoms can occur – even if the drug was first prescribed for non-psychotic issues like insomnia. Those withdrawal symptoms can include irrational thinking, loss of impulse control, psychoses, hallucinations, insomnia and mania, any of which can lead a physician to falsely diagnose schizophrenia or so-called bipolar disorder or any number of mental disorders “of unknown cause”. Some of Williams’ closest friends are, logically, wondering what effect the newly prescribed drugs had and whether they motivated Williams to so illogically kill himself.
Hollywood journalists swarmed all over the tragic event two years ago, but characteristically avoided even speculating about the possibility of psychiatric drug-induced suicide, the most logical explanation for the series of events, especially for any thinking person who knows anything about the connections between psychiatric prescription drugs and suicidality, homicidality, aggression, violence, dementia, and irrational thinking and actions (whether while taking the drugs or withdrawing from them). Such informed people have already asked themselves the question: “I wonder what psych drugs Robin (or Hemingway or Michael Jackson or Prince) was taking?” Tragically, the media has been totally unhelpful in discussing that important question or in offering any answers to it. Iatrogenic (doctor-caused or prescription drug-induced) causes of morbidity and mortality are apparently not to be discussed in polite company. It is important to point out that a bottle of Seroquel with 8 pills missing was found in Williams’ bedroom and drug toxicity testing revealed Remeron in Williams’ bloodstream at autopsy. The coroner emphasized that the dose of the legally prescribed drug was at “therapeutic levels”, which is, of course, totally unhelpful information, given that the undesired effects of a drug have no correlation to dosage.
The Taboo Reality: Psych Drugs Can Cause Suicidality
There have been millions of words written about how much everybody was shocked by Williams’ suicide. There have been thousands of flowers placed at any number of temporary shrines “honoring” his legacy. There have been thousands of comments on the internet from amateur arm-chair psychologists spouting obsolete clichés about suicide, mental illness, drug abuse, alcoholism, cocaine addiction, and how wonderful psychoactive prescription drugs have been. And there have been hundreds of dis-informational essays and website commentaries written by professional arm-chair psychiatrists who have financial or career conflicts of interest with Big Pharma, Big Psychiatry, Big Medicine and the rehab industries. Most of those commentaries distract readers from making the connections between suicidality and psych drugs. Some of the comments I have read have preemptively tried to discredit those who are publicly making those connections. Whenever unexpected suicides or accidental drug overdose deaths occur among heavily drugged-up military veterans, active duty soldiers, Hollywood celebrities or other groups of individuals, I search the media – usually in vain – for information that identifies the drugs that are usually involved in such cases. But revealing the drug names, dosages, length of usage or who prescribed them seems to be a taboo subject. One has to read between the lines or wait until the information gets revealed at www.ssristories.org (a Big Pharma whistle-blowing website that should be mandatory reading for everybody who prescribes or consumes psychiatric drugs). Patient confidentiality is usually the reason given for the cover-ups – and why important, potentially teachable moments about these iatrogenic (drug-induced or vaccine-induced) tragedies are lost. Big Pharma, the AMA, the APA, the AAP, the AAFP, the CDC, the FDA, the NIH, the NIMH, Wall Street and most of the patient or disease advocacy groups that sponsor the annual fund-raising and very futile “searches for the cure” all understand that the hidden epidemic of iatrogenic illnesses must be de-emphasized.
And, simultaneously, the altruistic whistle-blowers among us will be black-listed, denigrated and labeled as nuisance conspiracy theorists. The corporate entities mentioned above also know how useful it is if patients (rather than the system) are blamed for causing their own health problems. Typical examples include: "you eat too much", "you don't exercise enough", "you smoke too much", "you don't eat right", "your family history is bad", "you don't take your meds correctly", "you don't come in for your screening tests or routine exams often enough", "you don't get all the vaccinations like you are told to do", etc. Highly unlikely "genetic" causes are energetically promoted as preferable root causes of totally preventable iatrogenic illnesses (because inherited disorders are not preventable and are also untreatable). This reality ensures that researchers can annually demand billions of dollars for research while at the same time short-changing and discrediting simple, cheap, do-it-yourself prevention that doesn't need a doctor.

The confidence of the American public in Big Pharma's drug and vaccine promotions must not be disturbed. Wall Street's rigged stock market does not permit the publication of any information that could destroy investor confidence in the pharmaceutical or vaccine corporations' highly profitable products, even if the science behind the drugs and vaccines is bogus and the unaffordable products are dangerous.

The beauty of an unbiased public inquest – which I advocated for in this column two years ago, and which should have been done in the case of Robin Williams and all the school shooters – would be the subpoena power of a grand jury to open up the previously secretive medical records and force testimony from Williams' treatment team. The public could finally hear information that could make comprehensible the mysterious death of yet another high profile suicide victim and start the process of actually addressing America's suicide and violence epidemics. An inquest would likely reveal that Robin Williams did not have a "mental illness of unknown cause" or "bipolar disorder of unknown cause" or "depression of unknown cause" or "suicidality of unknown cause". An inquest would obtain testimony from feared whistle-blower experts in the fields of medicine, psychiatry and psychopharmaceuticals such as Peter Breggin, MD, Joseph Glenmullen, MD, Grace Jackson, MD, David Healy, MD, Russell Blaylock, MD, Fred Baughman, MD and other well-informed medical specialists who don't own stock in Big Pharma corporations and who know very well how dangerous their drugs can be.

Robin Williams did not have a Mental Illness of Unknown Etiology

Just knowing a little about the life and times of Robin Williams and others on the long list of celebrity victims of psychiatric drugs (like Michael Jackson and Prince, both of whom "died too soon") would easily disprove most of the unscientific theories about their deaths that have been widely published online. Why did many of us psych drug skeptics and psychiatric survivors want an inquest into Robin Williams' suicide? We wanted to know the names of the ingredients in the cocktail of drugs that had been tried on him (and the dosages and length of time they were taken). We wanted to know what side effects he had from the drugs and what his responses were. We wanted to know the reasoning behind the decision to prescribe unproven combinations of powerful drugs to someone whose brain was already compromised by the past use of known brain-damaging drugs.
And we wanted to know, for the sake of past and future victims of these neurotoxic substances, if the prescribing practitioners informed Williams about the dangers of those treatments, particularly the black box suicide warnings for Remeron.

Stress-induced and Drug-induced Mental Ill Health Doesn't Mean One is Mentally Ill

Robin Williams gained fame and fortune as a comic actor, starting with what was to become his trademark manic acting style (stimulant drug-induced mania?) on "Mork and Mindy". Like many other famous persons who attained sudden fame and fortune, Williams spent his millions lavishly and – in retrospect – often foolishly. After his third marriage he found that he could no longer afford his Hollywood lifestyle. But long before his two divorces and serious financial difficulties caused him to decompensate and fall off the sobriety wagon again, Robin Williams had lived in the fast lane, working long exhausting days and weeks and partying long exhausting nights with the help of stimulant drugs like the dependency-inducing drug cocaine (which overcomes sleepiness and fatigue) and artificial sleep-inducing tranquilizers whose mechanism of action resembles long-acting alcohol. Sedative drugs artificially counter the drug-induced mania and drug-induced insomnia that predictably result from psycho-stimulants like cocaine, nicotine, caffeine, Ritalin, Strattera, Prozac, Paxil, Zoloft, Celexa, Wellbutrin, Provigil, amphetamines, and the like.

Williams had acknowledged that he was addicted to both cocaine and alcohol when his famous comedian buddy John Belushi died of an accidental drug overdose (in March 1982) shortly after they had snorted cocaine together. Shortly after Belushi's overdose death, Williams quit both drugs cold turkey, and he remained sober and cocaine-free for the next 20 years. There is no public information about his use of addictive prescription drugs, but it is well-known that many Hollywood personalities like him have close relationships with both prescription-writing physicians and illicit drug pushers, many of whom make house calls.

However, Williams relapsed in 2006 and started abusing drugs and alcohol again, eventually being admitted to a Hazelden drug rehab facility in Oregon. After "taking the cure" he continued his exhausting career making movies, doing comedy tours and engaging in personal appearances in order to "pay the bills and support my family". After two expensive divorces, huge indebtedness and an impending bankruptcy, Williams was forced, in September of 2013, to sell both his $35,000,000 home and his even more expensive ranch in Napa Valley. He moved into a more modest, more affordable home in the San Francisco area, where he lived until his death. But despite solving his near-bankruptcy situation (which would make any sane person temporarily depressed), Williams continued having a hard time paying the bills and making the alimony payments, so he was forced to go back to making movies (which he despised doing because of the rigorous schedule and being away from his family for extended periods of time). And he hated the fact that he was being financially forced to sign a contract to do a "Mrs. Doubtfire" sequel later in 2014. For regular income, he took a job doing a TV comedy series called "The Crazy Ones", but the pressures of working so hard got him drinking again, even using alcohol on the set, which he had never done before.
He was making $165,000 per episode and was counting on continuing the series beyond the first season in order to have a steady income. So when CBS cancelled the show in May 2014, humiliation, sadness, anxiety and insomnia naturally set in, and he decided to go for professional help at the Minnesota Hazelden addiction facility, spending the month of July 2014 as an inpatient there.

The public deserves to know what really happened inside that facility. We certainly deserve to know the full story. There are many painful lessons that can be learned. Those who think that we can't handle the truth are wrong. The psychiatric drug-taking public deserves to know which offending drugs contributed to his pain, anguish, sadness, nervousness, insomnia, sleep deprivation, hopelessness and the seemingly irrational decision to kill himself. And the family, friends and fans of Robin Williams certainly deserve to know the essential facts of the case; if that is not accomplished, the result will just be a blind continuation of America's "mysterious" iatrogenic suicide, violence and dementia epidemics. Ignorance of the well-hidden truths will just allow Big Pharma to keep collecting its ill-gotten gains while it continues deceiving the medical profession – and destroying the memory, creativity, brains and lives of millions of our patients in the process.

For more information on the above very serious issues, check out these websites: www.ssristories.com, www.madinamerica.com, www.mindfreedom.org, http://rxisk.org/, www.breggin.com, www.cchrint.org, www.drugawareness.org, www.psychrights.org, www.quitpaxil.org, www.endofshock.com.

Drug-Induced Mental and/or Neurological Ill Health

It needs to be mentioned that all the so-called "atypical" antipsychotic drugs (like Seroquel, Risperdal, Abilify, Geodon, Zyprexa, Clozaril, Fanapt, Invega, Saphris, etc) can also cause diabetes, obesity, hyperlipidemia, liver cell necrosis, and the metabolic syndrome, as well as neurologic movement disorders that mimic (or actually cause) Parkinson's and Lewy Body disorder (the latter of which Williams' autopsy findings revealed). But it is important to point out that, contrary to what Robin Williams' widow has recently proposed, Lewy Bodies in the brain DO NOT cause suicidality. Rather, the brain lesions of neuro-degenerative disorders such as Parkinson's (and probably also the equally microscopic Lewy Body lesions that can't be diagnosed prior to autopsy) are commonly caused by neurotoxins such as petrochemical solvents (such as toluene, trichloroethylene and benzene), poisons (such as carbon monoxide and cyanide), insecticides (such as Rotenone), herbicides (such as Paraquat), fungicides (such as Maneb), metals (such as copper, mercury, manganese and lead), and brain-altering psychiatric drugs that are known to cause drug-induced dementia. (See the seminal work of practicing psychiatrist Grace E. Jackson, titled "Drug-induced Dementia: A Perfect Crime", for much more.)

Both illicit and prescription psychoactive drugs can indeed cause the death of brain cells, at least partly because of those synthetic drugs' mitochondrial toxicity. The carcasses of the dead and dying cells can be identified as abnormal microscopic deposits of nerve tissue such as can be found in the brain tissue of patients who died with Parkinson's Disease, Dementia with Lewy Bodies or drug-induced dementia (which is commonly mis-diagnosed as "Alzheimer's disease of unknown cause").
Incidentally, drug-induced Parkinsonism can be caused by the neurotoxic effects of the following groups of commonly prescribed drugs: 1) "typical" antipsychotic drugs (such as Thorazine and Haldol), 2) "atypical" antipsychotics (such as Seroquel and Risperdal), 3) pro-motility gastrointestinal drugs (such as Reglan), 4) calcium channel blockers (such as Norvasc and Cardizem), and 5) antiepileptic drugs (such as Valproate).

Shouldn't There be Penalties for Pushers of Legal Brain-altering Substances?

There are penalties for bartenders who serve underage drinkers who go on to have auto accidents while under the influence. There are penalties for street corner drug pushers who supply their junkies with dangerous illicit drugs, and there are penalties for the drug lords who are at the top of the drug supply chain. But shouldn't there also be penalties for legal drug pushers who are supplying medications to their addiction-prone clients without first obtaining from them fully informed consent concerning the dangers of the drugs? Shouldn't there be penalties for legal drug pushers who are prescribing dangerous brain-altering psychiatric drugs in combinations that have never even been tested for safety, even in the animal labs?

The very profitable industries of Big Pharma, Big Psychiatry, Big Medicine and drug rehabilitation are all very interested in de-emphasizing all unwelcome truths about the lethality of their products, and thus they successfully prevent those truths from being aired in the mainstream media. Thus there is a rapid disappearance of interest in the celebrity suicides or lethal drug overdoses by the time the delayed coroner's report reveals what drugs were in the victim's blood and gastric contents. (Note that many coroners are not aware that many psych drugs remain detectable in brain tissue long after they have disappeared from the stomach and bloodstream; therefore many coroners don't bother to test brain tissue samples for drugs.) If blood tests are negative for drugs, it is often erroneously assumed by the uninformed public (and even many medical professionals) that drugs aren't a factor in the aberrant behavior or death of vulnerable drug-taking humans. Drug withdrawal commonly causes patients to become irrational, violent or suicidal – realities that can occur at any time, even after the drug has long disappeared from the blood.

The lessons are numerous and the teachers are available, but they are censored out of our corporate-dominated media system. Those important lessons are there for anybody to learn, but we must first overcome the powers-that-be that know they won't profit from our enlightenment. Spread the word. Robin Williams, Ernest Hemingway, Michael Jackson and Prince would want us to do that.

Dr. Kohls is a retired physician who practiced holistic, non-drug, mental health care for the last decade of his family practice career. He is a past member of MindFreedom International, the International Center for the Study of Psychiatry and Psychology and the International Society for Traumatic Stress Studies. He now writes a weekly column for the Reader Weekly, an alternative newsweekly published in Duluth, Minnesota, USA. Many of Dr. Kohls' columns are archived at http://duluthreader.com/articles/categories/200_Duty_to_Warn, http://www.globalresearch.ca/authors?query=Gary+Kohls+articles&by=&p=&page_id= or at https://www.transcend.org/tms/search/?q=gary+kohls+articles
By Arnold Blumberg At 2:30 am on June 15, 1815, tens of thousands of French soldiers around the town of Beaumont, France, were roused from their bivouacs. After hurriedly cooking breakfast, cleaning weapons, and answering roll call, the myriad battalions, regiments, and divisions formed up ready to take the road to the Belgian frontier three miles to the north. This mass of soldiery assembled into three great columns that snaked along the paved avenues and dirt paths leading to the Belgian city of Charleroi. At 3:30 am the first French troops stepped foot on Belgian soil. As the Gallic host entered Belgium, squadrons of light cavalry leading the advance fanned out over the countryside. Within minutes these marauders clashed with mounted Prussian vedettes monitoring the Franco-Belgian frontier crossing points. Here and there sharp skirmishes and running fights ensued. As the Prussian sentinels were pushed back, French riders attempted to secure the border villages; at some of these they were met with vicious enemy sniper fire. Both French and Prussian cannons boomed intermittently as Prussian infantry garrisoning the villages resisted the French advance. Soon smoke from burning and looted buildings boiled upward into the summer sky as the French troops made their way deeper into Belgium. The critical phase of the Waterloo Campaign had begun. The French troops who marched into Belgium on June 15, 1815, belonged to the 120,000-strong Army of the North commanded by the recently restored emperor, Napoleon Bonaparte. It would be the instrument Napoleon intended to employ for the destruction of the two enemy armies then stationed there: an Anglo-Dutch-Belgian-German force of 106,000 men under English Field Marshal Arthur Wellesley, Duke of Wellington, and the Royal Prussian Army of the Lower Rhine, composed of 117,000 troops overseen by Field Marshal the Prince of Wahlstatt Gebhard Leberecht von Blucher. The decision for a strike into Belgium was the result of the French leader’s desire to seize the strategic initiative by immediately going on the offensive. By clearing the Allies from Belgium, Napoleon could then circle around the 220,000 Austrians preparing to invade France from southern Germany and the 150,000 Russians then gathering in the central Rhine area, thus cutting their line of communication, forcing both foes to retreat eastward away from France. The gist of the emperor’s move into Belgium was clear: he intended to grab Charleroi, getting between the coalition armies, then strike each in turn, anticipating each would fall back on its lines of communication: the English to the west, the Prussians to the east. However, he had to do more than merely push the Allied armies in Belgium back; one or the other had to be wrecked enough to force its parent country out of the war. To achieve his Belgian gambit, Napoleon fashioned the Army of the North, which he would command in person. The most experienced force he had led since 1807, it was largely composed of veteran volunteers, with few raw conscripts in its ranks and no dubious allies marching with it. Leadership up to corps level was generally good, and the emperor had the undivided loyalty of the majority of the enlisted men as well as the field grade officers. 
French General Count Maximilian Foy noted in his journal that "the troops exhibit not patriotism, not enthusiasm, but an actual mania for the Emperor and against his enemies." The Army of the North possessed accumulated experience and achievement in abundance, and its junior and mid-level leadership was outstanding; however, the army was deeply divided. On the one hand the "old sweats" that had refused to serve under the restored Bourbon crown looked with great suspicion at those who had pledged allegiance to King Louis XVIII, half expecting treachery and betrayal from them. Those who had switched from the Royalist side to that of Napoleon on his return in 1815 loathed the superior airs of the other half of the army and secretly longed to see them cut down to size. As one French officer recorded, there was "no mutual confidence, no fraternity of arms, no interchange of generous feelings; pride, selfishness and thirst of prey reigned throughout." If the lower ranks and junior officers in the Army of the North were of good quality and capable of solid military performance, the same cannot be ascribed to its senior leaders. Napoleon in many respects, even at the age of 46, was the same master of war he had been for the past 20 years. His capacity to organize, move, and inspire his troops remained extraordinary. When it came to concentrating before battle and then fighting an engagement, the army's corps commanders were the most important figures. By 1815 so many of Napoleon's finest marshals and generals were dead, retired, or exiled that he had to staff senior positions with what was available. Some of these commanders were talented, while many others were timid and tired. During the Hundred Days Campaign the quality of Napoleon's army wing and corps leaders was uneven. Marshal Michel Ney, who handled the left wing on June 16, was popular with the ordinary soldier and extremely brave in battle. But his tempestuous personality and lackluster performance during the 1814 campaign in northeastern France had shown he was past his prime. Other corps leaders' shortcomings included indecisiveness and a lack of confidence. Such was the case with Jean-Baptiste Drouet, Count d'Erlon, the I Corps commander, and also Count Honore Reille, the II Corps commander. As for Dominique Vandamme, the III Corps commander, he was not terribly bright. In contrast, Count Maurice Gerard, the talented head of IV Corps, and Georges Mouton, Count of Lobau, the VI Corps leader, were capable, as was Count Antoine Drouot, who efficiently handled the Imperial Guard Corps. At the helm of the army's Cavalry Reserve stood Marshal Emmanuel, Marquis de Grouchy. Like Ney, Grouchy would act as an army wing commander when Napoleon was not present. He was a capable commander of mounted forces but when exercising independent command lacked dash and imagination. His veteran subordinate cavalry corps leaders, General of Division François Kellermann and Counts Claude Pajol, Remy Exelmans, and Edouard Milhaud, were all reliable tacticians. The Army of the North lacked cohesion, a result of the men being unfamiliar with their commanders and mistrusting their generals. The latter was due to the return of the Bourbon dynasty to power after Napoleon's abdication in 1814, during which time many generals had taken office under Louis XVIII. This left the majority of the rank and file in the army suspicious of the trustworthiness of these commanders.
As a result, this military force was capable of swings between exuberant morale and great feats of arms and gloomy depression. The troops attributed every mishap and delay to treason, and if put under pressure were prone to panic. The army lacked discipline; it was very able but also unstable. Ultimately, it was held together only by the shared fanatical loyalty to the emperor. The first enemy the Army of the North would encounter in its thrust into Belgium was the Prussians under Blucher. The armed forces fielded by the Kingdom of Prussia for the campaign of 1815 were plagued by low manpower quality, outdated equipment, and overall poor organization, making it the worst army the kingdom employed during the entire Revolutionary and Napoleonic Wars. This was in part brought about by the general lack of national resources due to 25 years of continuous warfare and the largely agrarian and economically underdeveloped nature of the kingdom. A substantial part of the Prussian infantry in 1815 consisted of untrained, badly equipped militia known as the Landwehr, many of whom came from territories only recently occupied by Prussia and whose loyalty to their new masters was doubtful. The Prussian mounted arm was in the midst of a major reorganization when the campaign of 1815 began. Many of the newly raised cavalry regiments lacked training and cohesion and thus were not ready for active service. The artillery arm needed equipment and was understrength in both guns and manpower. The poorly equipped, inexperienced Prussian Army of 1815 was held together and prevailed due to the determination of its officers and enlisted men. At the apex of that leadership stood the army commander, Blucher. The Prussian commander was indomitable, resilient, and optimistic. He was always ready for a fight. Blucher was a veteran of dozens of battles. He harbored a pathological hatred for Napoleon. At age 72, the “Old Hussar” possessed an unlimited capacity to inspire his troops and was a team player. Although a dogged consuming persistence did much to win him battles, Blucher did have drawbacks as a field commander. He tended to wage war by instinct rather than reasoned logic, had a modicum of tactical skill, but in the realm of strategy was completely out of his depth. Fortunately, the field marshal’s military shortcomings were compensated for by the talents of his capable chief of staff, Lt. Gen. Count Neithardt von Gneisenau. Gneisenau joined the Prussian Army in 1786. After the crushing defeat inflicted upon it by Napoleon in the campaign of 1806, he was instrumental in reshaping the outdated army into a modern national patriotic force. Together Gneisenau and Blucher made a formidable command team. The final word on that extraordinary combination may have come from Colonel Baron Carl von Muffling, the Prussian military liaison to Wellington’s Allied army, when he wrote, “Gneisenau really commanded the army…. Blucher merely acted as an example as the bravest in battle.” The four infantry corps that made up Blucher’s army were commanded by Lt. Gen. W. Hans Karl Friedrich Ernst Heinrich von Zieten (I Corps), Maj. Gen. George Dubislav Ludwig von Pirch I (II Corps), Lt. Gen. Johann Adolf Freiherr von Thielmann (III Corps), and General Friedrich Wilhelm Bulow von Dennewitz (IV Corps). Zieten was a tough and effective veteran of the 1813 and 1814 campaigns against the French. Pirch I had served adequately in the German Wars of Liberation against Napoleonic France. 
Thielmann, a Saxon by birth, had fought in the French Revolutionary Wars as an officer in Saxon service; fought for the French in 1809 and at the 1812 Battle of Borodino; changed sides for the 1813 and 1814 campaigns fighting under the Russian flag, then entered Prussian service in early 1815. He was an experienced combat leader. Bulow was a seasoned veteran with victories over the French at the Battles of Gross Beeren and Dennewitz in 1813. Although his corps was not present at Ligny, it was the main Prussian contribution at the Battle of Waterloo two days later. Except for Bulow, Blucher's army did not contain Prussia's tested senior combat leaders. Because it was deemed vital that Gneisenau be paired with Blucher, outstanding Prussian military leaders such as Yorck, Kleist, and Tauenzien, all of whom outranked Gneisenau, could not be assigned to the Army of the Lower Rhine. Hence, the only answer was to appoint younger generals to handle Blucher's corps even at the price of depriving the army of the services of the better, yet senior, officers. Of the four corps heads, only Bulow was senior to Gneisenau, and so his IV Corps was designated as Blucher's reserve in the hope Bulow would not find himself under Gneisenau's orders. In all, the Prussian generals were a reasonably able lot with experience and valor; however, they lacked the élan and tactical flair possessed by the best of their French counterparts. As the spring of 1815 wore on, Wellington's polyglot army gathered around Brussels, and Blucher's I, II, and III Corps spilled into Belgium. Many of the Prussian units, especially the cavalry, never reached full strength. Much of the rapidly expanded army consisted of ill-trained militia armed with barely serviceable weapons: the infantry of I and II Corps consisted of one third Landwehr, that of III Corps one half, and that of IV Corps two thirds. Yet Blucher was not concerned with that; morale and zeal mattered most to him, and he felt his men had both attributes in abundance. To Karl August von Hardenberg, the Prussian chancellor, Blucher wrote in late May, "In our troops reigns a courage that becomes boldness." The Allies planned to defeat Napoleon through their numerical superiority. All their armies were to cross the French frontier between June 27 and July 1. Aside from this broad strategy, Blucher and Wellington had not arranged a concrete plan of operations. At a meeting on May 3, though, they did agree to concentrate their forces on the Quatre Bras-Sombreffe line if attacked so they could support each other. The Prussian field marshal remained confident about the military situation and wanted the Coalition forces to attack the enemy "with the most possible haste." To Hardenberg he said, "Our delay [in attacking the French] can only have the greatest disadvantages." Blucher was soon proven correct after Napoleon launched his assault into Belgium. Within two hours of setting off from their encampments on June 15, the French cavalry made contact with enemy outposts on the road to Charleroi. Prussian screening forces were soon forced to retire as Prussian cannons fired three warning rounds signaling the start of hostilities. As the French advanced during the morning, battalion-sized clashes at the towns of Thuin, Marchienne-au-Pont, and Ham-sur-Heure marked the approximately 20,000-strong center column's progress toward Charleroi. To its left was a mass of 27,000 French infantry and cavalry, to its right another column of 18,000 French foot and horse soldiers.
At 11 am sappers and marines from the Imperial Guard breached Charleroi’s defenses, followed by its occupation by French hussars. As the French light cavalry swarmed into the city the Prussian defenders retreated. Napoleon entered the town soon after. With the capture of Charleroi, the French quickly fanned out toward Gosselies and Gilly to the north and east, respectively. It was not until 9 am that Blucher, at his headquarters in Namur, learned of the French invasion. Faced with a critical situation, the Old Hussar issued orders to concentrate his army at the town of Sombreffe 15 miles northeast of Charleroi. This would be difficult in the face of an advancing enemy since the Prussian Army was spread widely over southern Belgium. Zieten’s 32,533 man I Corps was encamped between Charleroi and Gembloux; Pirch I’s II Corps, numbering 31,000, was situated northeast of Namur; Thielmann’s III Corps of 25,000 troops was below the Meuse River near Dinant; and Bulow’s IV Corps of 30,000 troops was near Liege. Regardless of the delicate situation he was in, Blucher, after sending word of the French invasion to Wellington at Brussels, wrote to his wife, “Bonaparte has engaged my whole outposts. I break up at once and take the field against the enemy. I will accept battle with pleasure.” As the Prussian Army moved to assemble at Sombreffe, Zieten, on instructions from Blucher, drew back toward Sombreffe. On the way he placed division-sized forces supported by cavalry and artillery in defensive positions to slow the French advance. By nightfall the Prussian I Corps had successfully broken contact with the oncoming French and bivouacked between Ligny and Sombreffe. Its casualties for the day numbered 1,200 men; the French lost half that number. By day’s end Blucher had started concentrating his army around Sombreffe as agreed with the Duke of Wellington. I Corps covered the assembly area as II and III Corps raced west. The Prussian IV Corps had been sent orders to proceed to Sombreffe, but those instructions were not received by Bulow in time for him to carry them out. At the end of June 15, Blucher had every reason to believe that the plan formulated on May 3 to combine the Allied armies in Belgium and confront Napoleon in one great battle was on track and would take place the next day. He could not have been more wrong. As June 15 came to a close, Napoleon sent part of the Army of the North under Marshal Ney to secure the Quatre Bras crossroad, while Marshal Grouchy, with the remainder of the army, pursued the Prussians to Ligny. Napoleon’s intent was to cut the link between his two adversaries, the Nivelles-Sombreffe-Namur road, and as he later wrote, “take the initiative of attacking the enemy armies, one by one.” Unknown to Blucher, Wellington, described as “always being inclined to accept a battle than to offer one,” had done little on June 15 to comply with the arrangement he had previously made with Blucher. The French assault at Charleroi had not convinced him that Napoleon intended to drive a wedge between the two Allied armies. He believed the French would attack Mons to his right in an attempt to cut him off from his supplies and escape route to the English Channel. It was not until early on June 16 that Wellington realized Napoleon’s true intentions; regardless, Wellington did not issue orders to move his army to Quatre Bras (eight miles from Sombreffe) until 5 am on June 16. 
The duke’s dawdling prevented his army from concentrating in time to lend direct support to the Prussians at Ligny later that day. Instead, Wellington spent June 16 merely containing Ney’s force at Quatre Bras. Arriving at Sombreffe during the evening of June 15 and aware that Bulow’s command would likely not join the rest of his army the next day, Blucher nevertheless decided to engage the French in battle southwest of Sombreffe at the village of Ligny on the 16th. The field marshal’s decision was influenced by messages from Wellington assuring him that 60,000 men of Wellington’s army would be in a position to support Blucher by the afternoon of the 16th. In reality, the duke’s information regarding his troop positions was extremely inaccurate. Wellington’s soldiers would only arrive at Quatre Bras after 3 pm on June 16, and only in a steady trickle. It would not be until the afternoon of June 17 that the Englishman’s entire force was finally united. Around noon on June 16, Wellington rode from Quatre Bras to meet Blucher. The former promised that he would come to the aid of his ally only if the French did not attack him at Quatre-Bras. Nevertheless, Blucher still expected at least the aid of 20,000 soldiers from the Anglo-Dutch army. In anticipation of receiving this support, the Prussian right wing at Ligny was intentionally left in the air ready to connect with Wellington’s men coming from the west. Meanwhile, thinking only one Prussian corps stood at Sombreffe, Napoleon ordered Grouchy to advance against it on June 16 with the III and IV Infantry and I, II, and IV Cavalry Corps. At the same time, Marshal Ney, leading I and II Infantry, and III Cavalry Corps would drive on Quatre Bras. Napoleon followed with the Imperial Guard and VI Infantry Corps as a reserve, ready to support either Ney or Grouchy as the situation demanded. In the early morning hours of June 16, Napoleon fully expected the Prussians to continue their retreat east from their exposed position at Sombreffe. He therefore decided to throw the majority of the Army of the North against Wellington’s forces in the area of Quatre Bras and then move on Brussels. Then news reached him that instead of retreating the Prussians were reinforcing Zieten’s corps at Sombreffe. At 11 am the emperor saw the large masses of new Prussian forces (Prussian II and III Corps) approaching the area. Napoleon decided to change his plan for June 16. He would frontally assault the Prussian forces then staging at Ligny and have Ney bring most of his command from Quatre Bras to fall upon the Prussian right flank and rear. Exclusive of Ney’s force, Napoleon planned to mass 68,000 men, including 12,500 cavalry and 210 cannons, for the battle. With only Vandamme’s infantry and two reserve cavalry corps on hand facing the Prussians at the time Napoleon made his decision, the battle he anticipated could not begin until the rest of the army’s right wing arrived and deployed for action. This would be around 2 pm at the earliest. As Napoleon revised his strategy, the Prussians concentrated their available forces near Ligny. The terrain there gave a number of advantages to the defenders. First, the undulating countryside contained considerable areas of dead ground in which troops would be concealed. Second, villages like St. Amand, Ligny, Sombreffe, Tongrinne, Boignee, and Baltare contained limestone buildings that could be turned into strongpoints. These sturdy buildings would function as ready-made field fortifications from which raw troops could fight. 
Third, Ligny Brook was a hard-to-cross marshland with only four bridges leading out of the Ligny Valley, all of which were dominated by Prussian held villages. Fourth, the hamlet of Brye, north of Ligny and Ligny Brook, covering 600 yards of front, was built like a fortress and could serve as a staging place for Prussian counterattacks or a last line of defense. Fifth, behind St. Amand and Ligny was a long, gentle, clear slope leading up to Brye. If the French were to seize St. Amand and/or Ligny, they would then have to advance up the slope in open terrain against massed Prussian artillery. One weakness of the Prussian position was its right flank, which rested precariously in the air and on open ground. Another flaw in the deployment was that the Prussian troops were packed in close-ordered formations on rising ground north of Ligny Brook. They proved to be perfect targets for French artillery once the battle opened. Blucher's strategy for the battle was one of aggressive defense. After holding up French attacks with his urban strongpoints, he would send his inexperienced men to retake any lost terrain by vigorous counterattacks. The recovered ground would then be held by fresh troops as long as possible. Blucher hoped this strategy would allow him to maintain his position long enough for Wellington to arrive. To implement his defensive plan, Blucher placed Zieten's I Corps along the line of the Ligny Brook, its left holding Ligny, its center at St. Amand, and its right near the village of Wagnelee. The II Corps was placed in reserve in the rear of I Corps on the Nivelles-Namur road, while III Corps was stationed between Sombreffe and Mazy on the Prussian left. By 3 pm Blucher had established a seven-mile battle line containing 76,000 infantry, 8,000 cavalry, and 224 cannons. The French Army deployed for its assault throughout the hot early afternoon. Vandamme's III Corps, supported by Lt. Gen. Baron Girard's 7th Infantry Division, II Infantry Corps, drew up north of the village of Wangenies just to the west of Fleurus facing northeast toward the Prussians holding St. Amand. Nine squadrons of light cavalry under Lt. Gen. Jean Simon Baron Domon covered its left flank. Vandamme's artillery did not arrive until after the battle began. Gerard's IV Corps deployed at right angles to III Corps facing Ligny, its 24 pieces of artillery drawn up 600 yards from the town. The mile and a half between III and IV Corps was filled by Milhaud's horsemen. Gerard placed Brig. Gen. Baron Etienne Hulot's infantry division, with eight guns, behind his other divisions at right angles facing northeast. Flanking Hulot was Lt. Gen. Baron Maurin's cavalry division with six guns. Exelmans created a line by forming his II Cavalry Corps and 12 cannons to the left of Pajol's reduced I Cavalry Corps, which also controlled 12 guns and two infantry battalions detached from Hulot. Exelmans and Pajol faced the left flank of the Prussians covering the area between Sombreffe and Balatre. The Imperial Guard Corps, with its 96 guns, and Milhaud's IV Cavalry Corps were held in reserve a little to the west of Fleurus. At 2:30 pm a French Guard artillery piece boomed, then again and a third time, signaling the start of the Battle of Ligny. Vandamme commenced the fight by sending forward the division under General of Division Etienne Lefol. His task was to attack St. Amand to draw as many Prussian reserves as possible into the front line of their western wing. This would ensure a large number of Prussians would be trapped around St.
Amand when Ney’s flanking force arrived. Lefol’s regiments, formed in attack columns, rolled forward under the fire of Blucher’s cannons situated north of Ligny Brook. Despite the pounding they took, after 15 minutes the French ejected the three infantry battalions of the Prussian I Corps’ 3rd Brigade defending the village from its houses, walled gardens, and the church. As the retreating enemy fled over Ligny Brook, Lefol’s men followed, but as they debouched from St. Amand they were crushed by 40 Prussian cannons sited north of the town. Meanwhile, French artillery dueled with their Prussian counterpart causing the earth to tremble. With friendly artillery shells passing over their heads, four battalions from the Prussian 1st Brigade, I Corps, counterattacked and drove the French out of St. Amand. Fifteen minutes after the attack on St. Amand began, French artillery poured fire on Ligny held by Maj. Gen. Henckel von Donnersmarck’s 4th Brigade, I Corps. Much of the French fire was, however, directed against the Prussian guns and reserve infantry beyond Ligny. One French eyewitness recorded, “Our artillery did considerable mischief among the great body of Prussian troops that were posted in mass on the heights and slopes.” Since the French forces were on slightly higher terrain, as well as protected by undulating ground, they fared better against their enemy’s return cannon fire. Lieutenant General Baron Marc Pecheux’s infantry division, IV Corps, advanced on Ligny in three columns preceded by a line of skirmishers. Prussian musket fire forced one column to retire, but the column made up of the 30th Line Regiment penetrated into the village only to be driven out due to mounting casualties and lack of support. Another assault on Ligny went in led by clouds of skirmishers and under cover of a barrage of French artillery, which blasted the town and the slopes beyond. Although this too was repulsed, three more attacks were made enabling the French to gain a tenuous presence in the village. On the French right, the cavalry made moves to outflank the eastern margin of the Prussian III Corps, but without infantry support the threats were idle. Hulot could not come to the aid of the cavalry since he was tied up fighting Prussian III Corps units holding Sombreffe and Tongrenelle. The battle escalated when Girard hurled his 5,000-man division at the village of La Haye, threatening the right flank of the Prussians defending St. Amand. Blucher, from his headquarters at the Bussy windmill, sent 2nd Brigade, I Corps, 5th Brigade, and the cavalry of II Corps to counter the French threat. Following a dense line of skirmishers, 2nd Brigade’s two assault columns drove the enemy out of La Haye. Girard rallied his men and, again moving forward, expelled the now exhausted Prussians from La Haye, causing them to retreat over Ligny Brook. The cost to the French for this achievement was high, including the mortal wounding of Girard. Reacting to the loss of La Haye, Blucher threw 2nd Brigade at the town, driving the now disorganized and bloodied French out. At the same time, 5th Brigade occupied Wagnelee despite suffering heavy fire from Lt. Gen. Habert’s division, III Corps. To the west, Domon’s light cavalry contained the opposing horsemen during the infantry clash. Reinforcements from the French right flank in the form of Lt. Gen. Baron Jacques Gervais Subservie’s cavalry division helped achieve this. By 5 pm the Prussian garrison in St. Amand was finally expelled and its mauled battalions retreated to Brye. 
However, the devastating fire of Zieten's 12-pounder batteries prevented the French from debouching from the village. Meanwhile, part of Lt. Gen. Baron Louis Vichery's division, IV Corps, accompanied by two cannons, was thrown into the fight for Ligny. The Prussians responded by reinforcing the town with four battalions from the 3rd Brigade. As increasingly fierce combat raged among the villages below Ligny Brook, Napoleon, at 5:30 pm, prepared to deliver the coup de grace by having Ney attack their western flank with d'Erlon's Corps from Quatre Bras, while Lobau's VI Corps, the Guard, and Milhaud's cuirassiers crashed through the Prussian center. If all went according to plan, the western Prussian forces would then be encircled and destroyed between Vandamme, the Guard, and d'Erlon. "If Ney carries out my orders well, not a single gun of the Prussian army will escape; it is going utterly to be smashed," said Napoleon. As the emperor prepared to deliver his decisive blow, news of an unidentified column was reported heading eastward to Fleurus in the French rear. Then it was learned that the French had abandoned La Haye. Worse, Vandamme wrote that unless he was reinforced he would lose St. Amand. Reacting to the multiple crises, Napoleon ordered VI Corps and part of the Guard back to Fleurus, while sending support to Vandamme. As the French scrambled to cope with the phantom force bearing down on their left flank and rear, Blucher scrounged together battalions from his I and II Corps and flung them at St. Amand. After close quarters fighting the French fled the northern part of the village. Meanwhile, the Prussian 5th Brigade, supported by artillery, left Wagnelee and attacked La Haye but was thrown back. Blucher was seen in the thick of the fighting on his black charger. In the nick of time the Young Guard Division, Guard Corps, arrived and with Girard's infantry rallied and drove back the enemy near La Haye, while Vandamme's men ushered the Prussians out of St. Amand. On the French eastern flank the villages of Boignee and Balatre were barely wrenched from the Prussians in bitter fighting. At Ligny, the close quarters combat amid the burning buildings continued unabated. The Prussian 4th Brigade was replaced with fresh battalions after losing 2,500 men out of the 4,721 who entered the fight. The French countered by pouring Vichery's 2nd Brigade into the inferno. At 6 pm Blucher received the shocking news that Wellington, who had his hands full fighting Ney at Quatre Bras, would not be coming to support the Prussians at Ligny. Undaunted, Blucher resolved to win the battle by himself. He gathered up a few wrecked formations from his second line, as well as the last fresh battalions from II Corps, and shouting to his men, "Fix bayonets and forward!" led them again between Wagnelee and La Haye toward the French. The Prussian tide soon broke upon the immovable three chasseur regiments of the Old Guard Division, which repelled the Prussian assault and then went on to occupy La Haye. At this point in the battle the 22nd and 70th Line Regiments, of Lt. Gen. Baron Habert's division, tramped through St. Amand and deployed to its north and, seeing enemy cavalry nearby, went into squares. Charged by enemy cavalry, the 22nd panicked, broke, and was mercilessly hacked down by the horsemen. The 70th retained its square and maintained its position. It was now 6:30 pm and Napoleon finally discovered the identity of the mystery column approaching from the west. It was d'Erlon's I Corps, 20,000 strong.
Napoleon sent it orders to attack the Prussians at Wagnelee, but it was too late; d'Erlon was marching back to Quatre Bras on Ney's instructions. By 7:30 pm, thick clouds of smoke from burning villages mixed with rain and the sound of thunder hung over the battlefield. Suddenly, several Guard batteries commenced firing from south of St. Amand, while others east of Ligny opened a devastating cannonade. This was followed at 7:45 pm by the advance into Ligny of the Guard infantry, General Claude Etienne Guyot's heavy cavalry division, and Milhaud's riders. These formations moved through Ligny and north and south of it. This avalanche was sent forward by Napoleon when he realized the Prussians had no reserves north of Ligny, Blucher having squandered them in the failed counterattack in the Wagnelee-St. Amand sector. As the Guard passed their emperor they shouted, "No quarter!" As the Guard crashed into Ligny the Prussians buckled but did not break; in fact, they counterattacked but were repulsed. The French infantry soon exited Ligny, reformed their lines, and started up the slopes toward Brye and the heart of the Prussian position. In support rode a massed column of Milhaud's cuirassiers. As the Guard juggernaut neared, the Prussian line dissolved. Seeking to avert the unfolding disaster about to overtake his army, Blucher ordered a counterattack with the I Corps cavalry brigade. The 6th Uhlans Regiment charged the 4th Grenadiers of the Guard but was swept away by French musket fire. Then a second and third wave of Prussian cavalry attacked, meeting the same fate as their Uhlan comrades, also being struck by French cuirassiers. Blucher had charged at the head of the 6th Uhlans and had his horse killed from under him, the animal pinning the field marshal to the ground. Blucher's aide, Lt. Col. Count von Nostitz, was able to cover his leader with a cloak as French cavalry stormed past. With help from some retreating Prussians, the dead mount was pulled off Blucher, and the general was placed on a horse and led to a nearby unit of Prussian infantry, which retired to safety. French forces pushed on from St. Amand and Ligny and attacked Sombreffe, driving back the Prussian 12th Brigade, III Corps, and the remains of the 1st Brigade. As the Prussians attempted to create a defense at Brye, to the east Thielmann tried to take the pressure off of the Prussian west wing by attacking down the Sombreffe-Fleurus highway with a brigade of cavalry supported by a battery of horse artillery. This small effort was soon derailed when French dragoons attacked the Prussians and drove them back. In the gathering darkness, Lt. Gen. Antoine Maurin led his French cavalry division, detached from Gerard's corps, up the Fleurus-Sombreffe road deep into the enemy's position. He was brought to a halt when Thielmann sought to establish a new defense line between Brye and Sombreffe. Grouchy's cavalry pressed forward to occupy Tongrinne, just vacated by Thielmann, but could advance no farther. With no one knowing whether Blucher was alive or dead, command of the army devolved on Gneisenau. The army was ordered to retreat to Wavre 13 miles to the north. It was the only real choice since I and II Corps were already streaming away in that direction and the way to join Wellington was blocked by the French. Later that evening Blucher directed the army to march to join Wellington, who on June 17 pulled out of Quatre Bras.
At 5 am on June 17, 14 hours after the Battle of Ligny began, the Prussian Army, after suffering 16,000 casualties as well as the loss of 21 artillery pieces, moved northward. Although they had captured Ligny, the French, who lost 12,000 killed, wounded, and missing in the battle, had never reached the open ground beyond the village, having been stopped by the Prussian rear guard and the dark of night. On June 18, 1815, Napoleon would fight at the village of Waterloo, and his fate would be decided by the timely arrival of Blucher’s Prussians, whom he had failed to destroy two days earlier.
Romanesque art is the art of Europe from approximately 1000 AD to the rise of the Gothic style in the 12th century, or later depending on region. The preceding period is known as the Pre-Romanesque period. The term was invented by 19th-century art historians, especially for Romanesque architecture, which retained many basic features of Roman architectural style – most notably round-headed arches, but also barrel vaults, apses, and acanthus-leaf decoration – but had also developed many very different characteristics. In Southern France, Spain, and Italy there was an architectural continuity with the Late Antique, but the Romanesque style was the first style to spread across the whole of Catholic Europe, from Sicily to Scandinavia. Romanesque art was also greatly influenced by Byzantine art, especially in painting, and by the anti-classical energy of the decoration of the Insular art of the British Isles. From these elements was forged a highly innovative and coherent style. Outside Romanesque architecture, the art of the period was characterised by a vigorous style in both sculpture and painting. The latter continued to follow essentially Byzantine iconographic models for the most common subjects in churches, which remained Christ in Majesty, the Last Judgment, and scenes from the Life of Christ. In illuminated manuscripts more originality is seen, as new scenes needed to be depicted. The most lavishly decorated manuscripts of this period were bibles and psalters. The same originality applied to the capitals of columns: often carved with complete scenes with several figures. The large wooden crucifix was a German innovation at the very start of the period, as were free-standing statues of the enthroned Madonna. High relief was the dominant sculptural mode of the period. Colours were very striking, and mostly primary. In the 21st century, these colours can only be seen in their original brightness in stained glass and a few well-preserved manuscripts. Stained glass became widely used, although survivals are sadly few. In an invention of the period, the tympanums of important church portals were carved with monumental schemes, often Christ in Majesty or the Last Judgement, but treated with more freedom than painted versions, as there were no equivalent Byzantine models. Compositions usually had little depth and needed to be flexible to be squeezed into the shapes of historiated initials, column capitals, and church tympanums; the tension between a tightly enclosing frame, from which the composition sometimes escapes, is a recurrent theme in Romanesque art. Figures often varied in size in relation to their importance. Landscape backgrounds, if attempted at all, were closer to abstract decorations than realism – as in the trees in the "Morgan Leaf". Portraiture hardly existed. During this period Europe grew steadily more prosperous, and art of the highest quality was no longer confined, as it largely was in the Carolingian and Ottonian periods, to the royal court and a small circle of monasteries. Monasteries continued to be extremely important, especially those of the expansionist new orders of the period, the Cistercian, Cluniac, and Carthusian, which spread across Europe. But city churches, those on pilgrimage routes, and many churches in small towns and villages were elaborately decorated to a very high standard – these are often the structures to have survived when cathedrals and city churches have been rebuilt.
No Romanesque royal palace has really survived. The lay artist was becoming a valued figure – Nicholas of Verdun seems to have been known across the continent. Most masons and goldsmiths were now lay, and lay painters such as Master Hugo seem to have been in the majority, at least of those doing the best work, by the end of the period. The iconography of their church work was no doubt arrived at in consultation with clerical advisors.

Metalwork, enamels, and ivories

Precious objects in these media had a very high status in the period, probably much more so than paintings – the names of more makers of these objects are known than those of contemporary painters, illuminators or architect-masons. Metalwork, including decoration in enamel, became very sophisticated. Many spectacular shrines made to hold relics have survived, of which the best known is the Shrine of the Three Kings at Cologne Cathedral by Nicholas of Verdun and others (c. 1180–1225). The Stavelot Triptych and Reliquary of St. Maurus are other examples of Mosan enamelwork. Large reliquaries and altar frontals were built around a wooden frame, but smaller caskets were all metal and enamel. A few secular pieces, such as mirror cases, jewellery and clasps have survived, but these no doubt under-represent the amount of fine metalwork owned by the nobility. The bronze Gloucester candlestick and the brass font of 1108–1117 now in Liège are superb examples, very different in style, of metal casting. The former is highly intricate and energetic, drawing on manuscript painting, while the font shows the Mosan style at its most classical and majestic. The bronze doors, a triumphal column and other fittings at Hildesheim Cathedral, the Gniezno Doors, and the doors of the Basilica di San Zeno in Verona are other substantial survivals. The aquamanile, a container for water to wash with, appears to have been introduced to Europe in the 11th century. Artisans often gave the pieces fantastic zoomorphic forms; surviving examples are mostly in brass. Many wax impressions from impressive seals survive on charters and documents, although Romanesque coins are generally not of great aesthetic interest. The Cloisters Cross is an unusually large ivory crucifix, with complex carving including many figures of prophets and others, which has been attributed to one of the relatively few artists whose name is known, Master Hugo, who also illuminated manuscripts. Like many pieces it was originally partly coloured. The Lewis chessmen are well-preserved examples of small ivories, of which many pieces or fragments remain from croziers, plaques, pectoral crosses and similar objects. With the fall of the Western Roman Empire, the tradition of carving large works in stone and sculpting figures in bronze died out, as it effectively did (for religious reasons) in the Byzantine (Eastern Roman) world. Some life-size sculpture was evidently done in stucco or plaster, but surviving examples are understandably rare. The best-known surviving large sculptural work of Proto-Romanesque Europe is the life-size wooden Crucifix commissioned by Archbishop Gero of Cologne in about 960–965, apparently the prototype of what became a popular form. These were later set up on a beam below the chancel arch, known in English as a rood, from the twelfth century accompanied by figures of the Virgin Mary and John the Evangelist to the sides. During the 11th and 12th centuries, figurative sculpture strongly revived, and architectural reliefs are a hallmark of the later Romanesque period.
Sources and style

Figurative sculpture was based on two other sources in particular: manuscript illumination and small-scale sculpture in ivory and metal. The extensive friezes sculpted on Armenian and Syriac churches have been proposed as another likely influence. These sources together produced a distinct style which can be recognised across Europe, although the most spectacular sculptural projects are concentrated in South-Western France, Northern Spain and Italy.

Images that occurred in metalwork were frequently embossed. The resulting surface had two main planes and details that were usually incised. This treatment was adapted to stone carving and is seen particularly in the tympanum above the portal, where the imagery of Christ in Majesty with the symbols of the Four Evangelists is drawn directly from the gilt covers of medieval Gospel Books. This style of doorway occurs in many places and continued into the Gothic period. A rare survival in England is the "Prior's Door" at Ely Cathedral. In South-Western France many have survived, with impressive examples at Saint-Pierre, Moissac, Souillac, and La Madeleine, Vézelay – all daughter houses of Cluny, with extensive other sculpture remaining in cloisters and other buildings. Nearby, Autun Cathedral has a Last Judgement of great rarity in that it has uniquely been signed by its creator, Giselbertus.

A feature of the figures in manuscript illumination is that they often occupy confined spaces and are contorted to fit. The custom of artists to make the figure fit the available space lent itself to a facility in designing figures to ornament door posts and lintels and other such architectural surfaces. The robes of painted figures were commonly treated in a flat and decorative style that bore little resemblance to the weight and fall of actual cloth. This feature was also adapted for sculpture. Among the many examples that exist, one of the finest is the figure of the Prophet Jeremiah from the pillar of the portal of the Abbey of Saint-Pierre, Moissac, France, from about 1130.

One of the most significant motifs of Romanesque design, occurring in both figurative and non-figurative sculpture, is the spiral. One of its sources may be Ionic capitals. Scrolling vines were a common motif of both Byzantine and Roman design, and may be seen in mosaic on the vaults of the 4th-century Church of Santa Costanza, Rome. Manuscripts and architectural carvings of the 12th century have very similar scrolling vine motifs. Another source of the spiral is clearly the illuminated manuscripts of the 7th to 9th centuries, particularly Irish manuscripts such as the St. Gall Gospel Book, spread into Europe by the Hiberno-Scottish mission. In these illuminations the use of the spiral has nothing to do with vines or other plant forms; the motif is abstract and mathematical. The style was then picked up in Carolingian art and given a more botanical character. It is in an adaptation of this form that the spiral occurs in the draperies of both sculpture and stained glass windows. Of all the many examples that occur on Romanesque portals, one of the most outstanding is the central figure of Christ at La Madeleine, Vézelay. Another influence from Insular art is the use of engaged and entwined animals, often employed to superb effect in capitals (as at Silos) and sometimes on a column itself (as at Moissac).
Much of the treatment of paired, confronted and entwined animals in Romanesque decoration has similar Insular origins, as do animals whose bodies tail into purely decorative shapes. (Despite the adoption of Hiberno-Saxon traditions into Romanesque styles in England and on the continent, the influence was primarily one-way. Irish art during this period remained isolated, developing a unique amalgam of native Irish and Viking styles which would be slowly extinguished and replaced by mainstream Romanesque style in the early 13th century following the Anglo-Norman invasion of Ireland.)

Most Romanesque sculpture is pictorial and biblical in subject. A great variety of themes are found on capitals, including scenes of the Creation and the Fall of Man, episodes from the life of Christ, and those Old Testament scenes which prefigure his Death and Resurrection, such as Jonah and the Whale and Daniel in the lions' den. Many Nativity scenes occur, the theme of the Three Kings being particularly popular. The cloisters of Santo Domingo de Silos Abbey in Northern Spain and Moissac are fine examples surviving complete, as are the relief sculptures on the many Tournai fonts found in churches in southern England, France and Belgium.

A feature of some Romanesque churches is the extensive sculptural scheme which covers the area surrounding the portal or, in some cases, much of the facade. Angoulême Cathedral in France has a highly elaborate scheme of sculpture set within the broad niches created by the arcading of the facade. In the Spanish region of Catalonia, an elaborate pictorial scheme in low relief surrounds the door of the church of Santa Maria at Ripoll.

The purpose of the sculptural schemes was to convey the message that the Christian believer should recognize wrongdoing, repent and be redeemed. The Last Judgement reminds the believer to repent. The carved or painted Crucifix, displayed prominently within the church, reminds the sinner of redemption. Often the sculpture is alarming in form and in subject matter. These works are found on capitals, corbels and bosses, or entwined in the foliage on door mouldings. They represent forms that are not easily recognizable today. Common motifs include the Sheela na Gig, fearsome demons, the ouroboros or dragon swallowing its tail, and many other mythical creatures with obscure meaning. Spirals and paired motifs may originally have had special significance in oral tradition that has since been lost or rejected by modern scholars. The Seven Deadly Sins, including lust, gluttony and avarice, are also frequently represented. The appearance of many figures with oversized genitals can be equated with carnal sin, as can the numerous figures shown with protruding tongues, which are a feature of the doorway of Lincoln Cathedral. Pulling one's beard was a symbol of masturbation, and pulling one's mouth wide open was also a sign of lewdness. A common theme found on capitals of this period is a tongue poker or beard stroker being beaten by his wife or seized by demons. Demons fighting over the soul of a wrongdoer, such as a miser, is another popular subject.

Late Romanesque sculpture

Gothic architecture is usually considered to begin with the design of the choir at the Abbey of Saint-Denis, north of Paris, by Abbot Suger, consecrated in 1144. The beginning of Gothic sculpture is usually dated a little later, with the carving of the figures around the Royal Portal at Chartres Cathedral, France, in 1150–1155.
The style of sculpture spread rapidly from Chartres, overtaking the new Gothic architecture. In fact, many churches of the late Romanesque period post-date the building at Saint-Denis. The sculptural style, based more upon observation and naturalism than on formalised design, developed rapidly. It is thought that one reason for the rapid development of naturalistic form was a growing awareness of Classical remains in places where they were most numerous, and a deliberate imitation of their style. The consequence is that there are doorways which are Romanesque in form and yet show a naturalism associated with Early Gothic sculpture.

One of these is the Pórtico da Gloria, dating from 1180, at Santiago de Compostela. This portal is internal and is particularly well preserved, even retaining colour on the figures and indicating the gaudy appearance of much architectural decoration which is now perceived as monochrome. Around the doorway are figures who are integrated with the colonnettes that make up the mouldings of the doors. They are three-dimensional, but slightly flattened. They are highly individualised, not only in appearance but also in expression, and bear quite a strong resemblance to those around the north porch of the Abbey of St. Denis, dating from 1170. Beneath the tympanum there is a realistically carved row of figures playing a range of different and easily identifiable musical instruments.

A number of regional schools converged in the early Romanesque illuminated manuscript: the "Channel school" of England and Northern France was heavily influenced by late Anglo-Saxon art, whereas in Southern France the style depended more on Iberian influence, and in Germany and the Low Countries Ottonian styles continued to develop and, along with Byzantine styles, influenced Italy. By the 12th century there had been reciprocal influences among all of these, although naturally regional distinctiveness remained. The typical foci of Romanesque illumination were the Bible, where each book could be prefaced by a large historiated initial, and the Psalter, where major initials were similarly illuminated. In both cases more lavish examples might have cycles of scenes in fully illuminated pages, sometimes with several scenes per page, in compartments. The Bibles in particular often had a large format and might be bound into more than one volume. Examples include the St. Albans Psalter, Hunterian Psalter, Winchester Bible (the "Morgan Leaf"), Fécamp Bible, Stavelot Bible, and Parc Abbey Bible. By the end of the period, lay commercial workshops of artists and scribes were becoming significant, and illumination, and books generally, became more widely available to both laity and clergy.

The large wall surfaces and plain, curving vaults of the Romanesque period lent themselves to mural decoration. Unfortunately, many of these early wall paintings have been destroyed by damp, or the walls have been replastered and painted over. In England, France and the Netherlands such pictures were systematically destroyed or whitewashed in bouts of Reformation iconoclasm. In Denmark, Sweden, and elsewhere many have since been restored. In Catalonia (Spain), there was an early 20th-century campaign (from 1907) to save such murals by removing them and transferring them to safekeeping in Barcelona, resulting in the spectacular collection at the National Art Museum of Catalonia. In other countries they have suffered from war, neglect and changing fashion.
A classic scheme for the full painted decoration of a church, derived from earlier examples often in mosaic, had as its focal point, in the semi-dome of the apse, Christ in Majesty or Christ the Redeemer enthroned within a mandorla and framed by the four winged beasts, symbols of the Four Evangelists, comparing directly with examples from the gilt covers or the illuminations of Gospel Books of the period. If the Virgin Mary was the dedicatee of the church, she might replace Christ here. On the apse walls below would be saints and apostles, perhaps including narrative scenes, for example of the saint to whom the church was dedicated. On the sanctuary arch were figures of apostles, prophets or the twenty-four "elders of the Apocalypse", looking in towards a bust of Christ, or his symbol the Lamb, at the top of the arch. The north wall of the nave would contain narrative scenes from the Old Testament, and the south wall from the New Testament. On the rear west wall would be a Last Judgement, with an enthroned and judging Christ at the top.

One of the most intact schemes to survive is that at Saint-Savin-sur-Gartempe in France. The long barrel vault of the nave provides an excellent surface for fresco and is decorated with scenes of the Old Testament, showing the Creation, the Fall of Man and other stories, including a lively depiction of Noah's Ark complete with a fearsome figurehead and numerous windows through which can be seen Noah and his family on the upper deck, birds on the middle deck, and the pairs of animals on the lower. Another scene shows with great vigour the swamping of Pharaoh's army by the Red Sea. The scheme extends to other parts of the church, with the martyrdom of the local saints shown in the crypt, the Apocalypse in the narthex, and Christ in Majesty. The range of colours employed is limited to light blue-green, yellow ochre, reddish brown and black. Similar paintings exist in Serbia, Spain, Germany, Italy and elsewhere in France.

The now-dispersed paintings from Arlanza in the Province of Burgos, Spain, though from a monastery, are secular in subject matter, showing huge and vigorous mythical beasts above a frieze in black and white with other creatures. They give a rare idea of what decorated Romanesque palaces would have contained.

Other visual arts

Romanesque embroidery is best known from the Bayeux Tapestry, but many more closely worked pieces of Opus Anglicanum ("English work" – considered the finest in the West) and other styles have survived, mostly as church vestments.

The oldest-known fragments of medieval pictorial stained glass appear to date from the 10th century. The earliest intact figures are five prophet windows at Augsburg, dating from the late 11th century. The figures, though stiff and formalised, demonstrate considerable proficiency in design, both pictorially and in the functional use of the glass, indicating that their maker was well accustomed to the medium. At Le Mans, Canterbury and Chartres Cathedrals, and at Saint-Denis, a number of panels from the 12th century have survived. At Canterbury these include a figure of Adam digging, and another of his son Seth, from a series of Ancestors of Christ. The Adam is a highly naturalistic and lively portrayal, while in the figure of Seth the robes have been used to great decorative effect, similar to the best stone carving of the period.
Glass craftsmen were slower than architects to change their style, and much glass from at least the first part of the 13th century can be considered essentially Romanesque. Especially fine are the large figures of 1200 from Strasbourg Cathedral (some now removed to the museum) and of about 1220 from Saint Kunibert's Church in Cologne. Most of the magnificent stained glass of France, including the famous windows of Chartres, dates from the 13th century. Far fewer large windows remain intact from the 12th century. One such is the Crucifixion of Poitiers, a remarkable composition which rises through three stages: the lowest with a quatrefoil depicting the Martyrdom of St Peter, the largest central stage dominated by the Crucifixion, and the upper stage showing the Ascension of Christ in a mandorla. The figure of the crucified Christ is already showing the Gothic curve. The window is described by George Seddon as being of "unforgettable beauty". Many detached fragments are in museums, and a window at Twycross Church in England is made up of important French panels rescued from the French Revolution. Glass was both expensive and fairly flexible (in that it could be added to or re-arranged) and seems to have often been re-used when churches were rebuilt in the Gothic style – the earliest datable English glass, a panel in York Minster from a Tree of Jesse probably of before 1154, has been recycled in this way.

Notes and references
- Some (probably) 9th-century, near life-size stucco figures were discovered relatively recently behind a wall in Santa Maria in Valle, Cividale del Friuli, in Northern Italy. Atroshenko and Collins, p. 142.
- G. Schiller, Iconography of Christian Art, Vol. II, 1972 (English trans. from German), Lund Humphries, London, pp. 140–142 for early crosses, p. 145 for roods, ISBN 0-85331-324-5.
- V. I. Atroshenko and Judith Collins, The Origins of the Romanesque, pp. 144–150, Lund Humphries, London, 1985, ISBN 0-85331-487-X.
- Howe, Jeffery. "Romanesque Architecture (slides)". A digital archive of architecture. Boston College. Retrieved 2007-09-28.
- Helen Gardner, Art through the Ages.
- René Huyghe, Larousse Encyclopedia of Byzantine and Medieval Art.
- Roger A. Stalley, "Irish Art in the Romanesque and Gothic Periods". In Treasures of Irish Art 1500 B.C. to 1500 A.D., New York: Metropolitan Museum of Art/Alfred A. Knopf, 1977.
- "Satan in the Groin". beyond-the-pale. Retrieved 2007-09-28.
- James Hall, A History of Ideas and Images in Italian Art, p. 154, 1983, John Murray, London, ISBN 0-7195-3971-4.
- Rolf Toman, Romanesque, Könemann, 1997, ISBN 3-89508-447-6.
- George Seddon in Lee, Seddon and Stephens, Stained Glass.
- Church website, archived 2008-07-08 at the Wayback Machine.
- Legner, Anton (ed.). Ornamenta Ecclesiae, Kunst und Künstler der Romanik. Catalogue of an exhibition in the Schnütgen Museum, Köln, 1985. 3 vols.
- Conrad Rudolph, ed., A Companion to Medieval Art: Romanesque and Gothic in Northern Europe, 2nd ed. (2016).
- Metropolitan Museum Timeline Essay.
- Corpus of Romanesque Sculpture in Britain and Ireland (crsbi.ac.uk), an electronic archive of medieval British and Irish Romanesque stone sculpture.
- Romanes.com, Romanesque Art in France.
- Círculo Románico: Visigothic, Mozarabic and Romanesque art in all of Europe.
- Romanesque Sculpture group on Flickr.
Shevchenko Scientific Society

From 1873, it functioned as the administrative body of the union printing house. In 1892 the NTSh became a scientific society that quickly integrated into the Western and Central European scientific space and developed into a regional and national scientific center under the leadership of Mykhailo Hrushevskyi. During the interwar period, the Society functioned at the regional level, while remaining the key factor in the development of Ukrainian culture in Lviv and the center around which scholars of all-Ukrainian caliber gathered. In 1939, after the Soviet occupation of Lviv, the Society ceased its activities, though it resumed its work for some time during the German occupation.

When the oppression of the Ukrainian language began in the Russian Empire, the Ukrainian intelligentsia decided to move its cultural and educational activities to Galicia, which was part of the Habsburg monarchy and had a more liberal regime. With the support of patrons, funds were raised to set up a Ukrainian printing house in Lviv, and a local scientific society was to take care of it. As a result of this initiative, the Shevchenko Society was founded; its charter was approved in 1874 by the Austrian governor of Galicia. The Society's founders (for legal reasons, they could only be Austrian citizens), although receiving financial assistance, did not fulfill their statutory obligations and focused primarily on activities related to the printing house.

It was not until the late 1880s that another attempt was made to unite the Ukrainian elites of Kyiv and Galicia by creating a scientific society. Oleksandr Barvinskyi, Volodymyr Antonovych and Oleksandr Konyskyi decided to reform the society and establish the Shevchenko Scientific Society (Ukrainian: Naukove Tovarystvo im. Shevchenka, NTSh). In 1892 the Society was reformed; it was now organized on the model of the European academies of sciences. The founders aimed to obtain the status of a Ukrainian Academy of Sciences from the Vienna authorities, following the example of the Polish Academy of Sciences in Krakow. The NTSh began to publish a scientific periodical ("Notes of the NTSh") and positioned itself as an institution loyal to the Austrian state. The Society received subsidies from the state and local authorities, and the amount of these subsidies increased steadily until 1914.

After Oleksandr Barvinskyi (1893–1897), the Society was headed by Mykhailo Hrushevskyi (1897–1913). The young historian proved to be a capable and energetic organizer. In a short time he managed to make the Society more visible and better known in the city. In 1898 a celebration was organized on the occasion of the centenary of Ivan Kotlyarevskyi's Eneyida. The Society organized a soiree, inviting various public organizations and the most prominent figures of the Ukrainian cultural intelligentsia to participate. Among the guests were representatives of Russian-controlled Ukraine, including Mykola Lysenko, the author of the opera Natalka Poltavka, which was performed on the stage. Later, a "scientific academy" (solemn meeting) was held in the National House. In the same year, another important event took place, which ultimately registered the NTSh on the city map: the Society bought a house on ul. Czarneckiego (now vul. Vynnychenka) 26.

At the time of its founding, the Shevchenko Society was given a stockroom in the Prosvita Society building (pl. Rynok 10). In addition, the Society rented a room on ul. Akademicka (now prosp.
Shevchenka 8, where the restored NTSh bookstore is now located). The printing house was originally situated in the courtyard of a bank; later it was moved to the courtyard at ul. Akademicka 8, while the printing house administration often changed its address (Купчинський, 2013, 146). The new building of the Society was important because it made it possible to gather all the NTSh institutions under one roof. An office of the Society was established, headed by its secretary, Volodymyr Hnatiuk. The office became a meeting place for the Society's members. In 1904 the NTSh opened its own binding workshop.

Hrushevskyi focused mostly on the development of publishing projects and activities, and the Society was quite successful in this area. This gave it good grounds to appeal to the Galician Sejm and the Ministry of Religion and Education in Vienna for increased subsidies. There was an opinion among the NTSh members that the Society, in comparison with other academies of sciences, such as the Krakow Academy of Sciences, was no less productive, although its funding was significantly smaller. The development of the Society's publishing activities helped to attract new members from among the Ukrainian elites. Young scholars, including those from the universities of Lviv, Chernivtsi, and Vienna, as well as amateur researchers and collectors of ethnographic artifacts and folklore (frequently Greek Catholic priests from eastern Galicia and the Ruthenian regions of Hungary) (Rohde, 2019), became members, as did prominent representatives of the Ukrainian elite from Russian-controlled Ukraine (St. Petersburg) or France, such as the anthropologist Fedir Vovk.

An important aspect of the Society's activities was fundraising for the academic house, undertaken on Hrushevskyi's initiative. Apart from Ivan Franko and Mykhailo Hrushevskyi, the organizing committee included Yevhen Chykalenko, a philanthropist from Kyiv who supported Hrushevskyi's projects financially. In addition to his own donations, Chykalenko, using his personal connections, organized fundraising campaigns. He stipulated that this academic house should also accept students from Russian-controlled Ukraine. Although these donations were not sufficient, the construction started; Chykalenko claimed that donations would begin to come in once the project was completed. The construction costs contributed to a serious financial crisis in the NTSh, which was overcome only in 1907, after some branches of the Society moved into the newly built building. Attempts to obtain public funding were unsuccessful (Rohde, 2020, Galizische Erbschaften?). The house first functioned as a dormitory, with the first inhabitants moving in in late 1906 (DALO 292/1/8). Among them were also emigrants from tsarist Russia. This contributed to the formation of a scientific community, as many of these emigrants became members of the NTSh and began working in the administration of the Society's museum or library.

Despite the resistance of local elites, Hrushevskyi sought to keep the NTSh away from Galician politics; nevertheless, the Society established regular cooperation with well-known local figures. This went beyond the participation of Hrushevskyi and Franko in the creation of the UNDP and their cooperation in raising funds for the financial support of the Society.
The main motive of the NTSh's activity was the desire of the students and professors of Lviv University to have their own independent Ukrainian university, so the Society's members took part in almost all parliamentary sittings on this issue. Despite his initial position, Hrushevskyi, through the Literary-Scientific Bulletin (Літературно-науковий вісник), regularly interfered in party and political affairs and caused numerous conflicts. In the NTSh milieu, his leadership style was also considered authoritarian. The Society became more and more non-public and, despite the efforts of both its individual members and the public, refused to hold regular popular science events. The consequence of this position was the founding of the Petro Mohyla Ukrainian Scientific Lectures Society. After Hrushevskyi resigned as chairman of the Society in 1913, steps were taken to establish internal and external scientific communication.

In the period from September 1914, after the outbreak of the First World War, until 1916, the Society's activities took place mainly in Vienna. This was due not only to the Russian occupation of Lviv but also to the fact that almost all members of the Society performed other duties at that time, engaged in political and party activities, serving in the military or in the Union for the Liberation of Ukraine, and living in different cities. In Vienna, they established numerous public organizations, such as the Ukrainian Cultural Council, which not only came up with the idea of organizing Ukrainian schools but also promoted popular science events in the capital of the empire. Some members of the NTSh, such as Hrushevskyi and Okhrymovych, were in captivity. Many scholars returned to Lviv only in late 1916 and early 1917 and found that the Society had suffered significant losses that needed to be repaired.

In the interwar period there was a partial change in the Society's activities. After the Natural History Museum and a bacteriological laboratory were founded, research in the field of the natural sciences was revived. At the same time, there was a tendency to strengthen cooperation with the public of the city and the region. One manifestation of this tendency was the activity of the Ukrainian University in Lviv (1921–1925, the Secret Ukrainian University), in which the Society actively participated (Дудка, Головач, 2018). The NTSh not only made its premises available to the university; many of its members held key positions there. In addition, both the Society's museums and its library actively cooperated with the public. Cultural events were regularly held in the museums: for example, in 1935 a photo exhibition, "Our Motherland in Photo," was organized, arousing considerable interest. This cooperation with the public was the Society's response to the challenges of the period preceding the Second World War. At the same time, it required certain sacrifices, as the funding of public organizations in the Polish Republic decreased significantly compared to 1914.

The range of the Society's periodicals was quite wide; however, in comparison with the pre-war period these editions were published much less often, and professional journals appeared very irregularly. In addition to specialized publications, such as the journal of the NTSh physiographic commission, the Society launched a large publishing project, the "Ukrainian General Encyclopaedia" (publisher Ivan Rakovskyi, 1930–1935). The project was aimed at promoting science (Savenko, 2016, 167–176).
After the Soviet occupation, all public organizations were closed; later, during the Nazi rule, they resumed their activities for some time, which ceased again in 1944. In the period from 1946 to 1950, NKVD commissions exported a significant amount of archival materials to Moscow and Kyiv (Сварник, 2005, 11–12). Almost two thirds of the former library stock is now kept in the Lviv Vasyl Stefanyk National Scientific Library; the rest is considered lost (Svarnyk 2014, 54). Ivan Franko's library and his artistic heritage, which passed into the possession of the Society after his death, are owned by the Taras Shevchenko Institute of Literature of the NASU. In the diaspora, NTSh cells were established (in France, the USA, Australia, and Canada). In 1989, the NTSh resumed its activities in Lviv, when an organization of the same name was created. Today the Society has branches all over Ukraine. The NTSh World Council coordinates the activities of its centers around the world.

Stories and buildings

The building at ul. Czarneckiego 26 also functioned as a residential building. The Society's management responsibilities included its maintenance and lease; revenues from rent made it possible to repay loans. For some time, the artist Ivan Trush rented a room there for his studio (ЦДІАЛ, фонд 309, op. 1, file 565).

The purchase of the building at ul. Czarneckiego 26 for the needs of the Society, as is usually believed, became possible thanks to a generous donation from Petro Pelekhin. Some also believed that funds previously meant to finance the medical faculty of a future Ukrainian university were used here. On the one hand, it is clear that this was an unattainable goal in the late 19th century. On the other hand, from a purely legal point of view, this use of the collected donations was illegal, as the documents stated that these funds could not be spent for other purposes. Thus the purchase of the building, a central moment in the history of the Society, was, in terms of law, an illegal act. This legal dispute lasted until the 1930s and ended after the death of Serhiy Shelukhin, who took care of Pelekhin's legacy (Rohde, 2020, Galizische Erbschaften?).

The Society's library was located in the Prosvita building and in the People's House. In 1899 it was transferred to a room at ul. Czarneckiego 26. In 1907–1914 the library functioned in the "Academic House", and later, after the purchase of the building at ul. Czarneckiego 26, it was moved there and expanded.

The Society's museum had a similar history, as for a long time there were no suitable premises to accommodate it. Subsequently, the Society began to use the premises of the "Academic House." In 1914 the museum moved as well. As the move was somewhat delayed, some of the collection was still in the Academic House when the First World War broke out. Both the Austro-Hungarian and the Russian troops used the house as barracks, and this caused considerable damage, as the property and equipment were partially destroyed, primarily by the Russians. After these damages were discovered, the officials in charge sent the most valuable holdings to Vienna as a precaution against a re-occupation of Lviv. During the interwar period, the library and museum were completely moved to the premises at ul. Czarneckiego 24. Both institutions were originally intended for internal use, their collections being used for research; later, however, they became available to the public.
The bookstore was located at ul. Czarneckiego 26; at first, the Society's own printed publications were sold there. After August Demel was hired as a bookseller in early 1905, the bookstore became a specialized scientific one and was given separate premises on ul. Teatyńska (now vul. M. Kryvonosa) (Kulchytska 2009). In 1908 the bookstore was housed in the Prosvita building in the city center and operated there until its closure by the Soviet occupation administration.

The purchase of the building at ul. Czarneckiego 24 took place in 1913, after several years of difficult negotiations with the owners, who had initially refused to sell the building to Ukrainians. The money was raised thanks to significant support from the benefactor Vasyl Symyrenko and the Ministry of Culture and Education. The Ministry's subsidy of 100 thousand crowns significantly exceeded not only the amount of the traditional annual subsidies for the NTSh but also the annual budget of the Krakow Academy. The granting of this subsidy testifies to the NTSh's close connection with the local and state policy of Austria-Hungary. After the tragic death of Adam Kotsko during the student riots of 1910, the Ruthenian Club at the Reichsrat's (parliament's) House of Deputies, represented by Oleksandr Kolessa and Teofil Okunevsky, declared its readiness to appease the protesters. Some state-initiated measures to resolve the situation were proposed, in particular financial support for professors, as well as the allocation of budget funds for scholarships and subsidies for cultural and scientific societies. The governor of Galicia, whose assessment of the situation was crucial for the Ministry of Education, viewed the proposals critically. The idea of turning the NTSh into a state-run academy of sciences was rejected immediately, as was the project to create an independent Ukrainian university. However, the governor approved a generous grant for the NTSh museum. He stressed that this was possible only if the museum was transferred out of the "Academic House", since the latter, as a "house of Ruthenian students, was a center of radical university youth." The NTSh fulfilled this requirement, and after the house at ul. Czarneckiego 24 was purchased and reconstructed, the museum finally moved there.

Buildings:
- Vul. Vynnychenka, 24 – research institutions building (former residential)
- Vul. Vynnychenka, 26 – residential building
- Pl. Rynok, 10 – former Lubomirski Palace / Prosvita building
- Prosp. Shevchenka, 8 – Kyiv cinema building
- Vul. Teatralna, 22 – The House of Officers (former People's House)
- Kotsiubynskoho str., 21

Chairmen of the Society:
- Kornyliy/Kornylo Sushkevych (1840–1885, chairman in 1873–1885)
- Sydir Hromnytskyi (1850–1937, chairman in 1885–1887 and 1889–1891)
- Demyan Hladylovych (1845–1892, chairman in 1887–1889 and 1891–1892)
- Yuliyan Tselevych (1843–1892, chairman in 1892)
- Volodymyr Shukhevych (1849–1915) was the interim chairman of the Society after the death of Tselevych and until the election of Barvinskyi
- Oleksandr Barvinskyi (1847–1926, chairman in 1893–1897)
- Mykhailo Hrushevskyi (1866–1934, chairman in 1897–1913)
- Stepan Tomashivskyi (1875–1930) was Hrushevskyi's deputy and headed the Society after his resignation
- Volodymyr Okhrymovych (1870–1931) managed the local affairs of the Society in 1914, as the oldest Committee member, after Tomashivskyi joined the army; Okhrymovych was later deported by the Russian military
- Vasyl Shchurat (1871–1948) was asked in writing in 1915 to manage the Society's affairs until the Committee returned to Lviv; in 1919–1923 he headed the Society as an elected president
- Kyrylo Studynskyi (1868–1941, chairman in 1923–1932)
- Volodymyr Levytskyi (1872–1956, chairman in 1932–1935)
- Ivan Rakovskyi (1874–1949, chairman in 1935–1939)

Photo: The NTSh building. The reverse is signed: "Камениці Н.Т ім Шевченка Чарнецького 26 в р. 1926 р. Світлив Екушевич [?]" (roughly: "Buildings of the Shevchenko Scientific Society, Czarneckiego 26, in the year 1926. Photographed by Ekushevych [?]"). Source: Vasyl Stefanyk National Scientific Library in Lviv.
This is a work in progress. Constructive feedback is welcome.

This online publication is a collection of essays on design, political economy, and the relationship between human economic systems and natural systems. The philosophical underpinning of these writings is consequentialism: the view that what makes a decision good is the consequences that flow from that decision. As applied to the design of human systems of economy, politics, and infrastructure, this means that we should create policies and institutions with incentive structures that naturally lead to the greatest outcomes and maximize the number of people whose lives are defined by happiness and wellbeing. This is very much in keeping with the philosophy of the founders of the United States of America and the preamble to the Constitution.

In 1742, coal mining yielded its first five million tons worldwide. The steam engine was perfected by James Watt seventeen years later, in 1759. That very same year Adam Smith published his "Theory of Moral Sentiments," the work that established the capitalist understanding that individual self-interest, as aggregated through the activity of free markets, can be a powerful force for social progress. There is certainly some truth to the claim that markets can be a powerful force for good in the world. But, as we have seen time and time again, they can also do great harm if there are not sufficient checks on the behavior of corporations or limits placed on the powerful.

Early capitalism thrived on three important feedstocks: 1) colonialism, which supplied land and raw materials; 2) chattel slavery and coercive labor, which kept profits high while adding value to products; and 3) access to energy-dense fossil fuel resources. The "capital" in capitalism is deeply rooted in these unsustainably and unjustly obtained resources that were extracted from the commonwealth of nature, and which should be the equal inheritance of all living things. Instead, this capital was extracted and hoarded, and the profits accrued to private interests holding positions of power. Thus the first industrial age was born into a perfect storm and was guided by a political economy founded on consumption, greed, exploitation of labor, and the combustion of non-renewable carbon-based fuels. Our current economic system is petrocultural, colonial, dominionist, and exploitative by virtue of the era in which it was designed. This is not a value judgement, but rather a simple statement of fact. To every season there is a political economy. A quarter of a millennium later, it's time for a new political economy for an era of environmental stewardship and economic democracy.

The American economy is not the greatest economy. But it could be. This site is a collection of thoughts about how to make our economy stronger by designing a truly fair and just society through policies that fulfill the underlying promises of the American experiment in liberty and justice. It is informed by synthesizing disparate socioeconomic theories and seeking a new kind of economic system that can be most effective in correcting the social and environmental issues that threaten social progress. While it is focused on America, the ideas can be applied to any nation, and the success of such an economic transformation will be far greater if it is applied globally.
A new economic system for the 21st century will make the American dream a reality for every person, while maintaining the beneficial features of our current economy that reward industriousness, innovation, and entrepreneurship. We pay lip service to meritocracy, but our current system closes the door on it for most people. This new economic system will bring true egalitarian democracy by laying the foundation for a new paradigm of social wealth that can lead to the carbon drawdown the environmental movement is seeking, while also increasing the standard of living and the free time for all people to enjoy their one precious lifetime on this beautiful spaceship Earth.

The contemporary political discourse is desperately seeking an economic blueprint—a design for a peaceful and orderly transition from where we are now (winner-take-all corporate capitalism) to a socially just and environmentally sustainable economic model that is highly desirable to the general public. To succeed politically, this new economy cannot be about asceticism or sacrifice. In fact, this new economy should be designed to bring riches to a vast number of people who could never dream of such abundance within our current system of capitalism, which is itself riddled throughout with artificial scarcity, nepotism, happenstance of birth, racism, and class tribalism that limit the opportunities available to Black, Indigenous, and People of Color, and those born into poverty, to participate in wealth-building endeavors.

There are many proposals out there to rein in corporate power and regulate winner-take-all capitalism. But there are limits to the efficacy of policy reforms that are a patchwork of repairs layered over our current socioeconomic system, which at its core relies on endless raw resources and a large class of extremely poor and disenfranchised "essential workers" who can be called upon to provide inexpensive labor. What is needed is a new political economy that removes the countless daily discretionary opportunities for the implicit bias of individuals to create accumulating structural inequity: a system that limits the power of luck and birth fortune and instead rewards merit. We need a new political economy that incentivizes human behavior towards acts that will help to heal a dying planet and regenerate natural systems. We must take a new look at how wealth is created at the most basic level, and how it can be equitably shared, by design.

The term "greatest economy" can be read in two ways. In one sense it means an economic powerhouse that increases quality of life and expands opportunities for everyone, while generating wealth and reinvesting in the future. In the second sense it means the most frugal, the most efficient—that which fulfills our needs while expending the least amount of resources and energy. These two readings of the term "greatest economy" may on the surface seem to be in contrast, yet they are fundamentally intertwined within a closed system like the planet Earth. They are the yin and the yang that give rise to regeneration.

One of the fundamental contentions of my argument in these writings is that our economy functions better for everyone, even the most well-off, when it is just and equitable by design. By designing our social contract to be just and equitable, we will make our society and our politics more democratic and more productive.
According to Ganesh Sitaraman, Theodore Roosevelt wrote that "there can be no real political democracy without something approaching an economic democracy." Economic democracy can be defined as relative economic equity. It can be recognized by a low Gini coefficient, a measure that reflects the disparity of income within a society. A Gini coefficient of zero expresses perfect equality, where all values are the same (for example, where everyone has the same income). A Gini coefficient of one (or 100%) expresses maximal inequality among values (e.g., one person has all the income and all others have none) (see http://www.fao.org/docs/up/easypol/329/gini_index_040en.pdf). A short computational sketch of this measure appears below.

Since 1979, the Gini coefficient of the United States has risen from 34% to 42%. We have become roughly 24% more unequal under a system of neoliberal capitalism. The District of Columbia stands at 54%; the states of New York and Louisiana are at 52% and 49%, respectively. Those with capital and access to capital have built more wealth and have not been taxed on it. Those with debt are taking on more debt and being taxed at higher rates. We've not checked, nor have we balanced, our greed. Instead we have rigged the system for the greedy. The kind of resource hoarding that we have designed into our current economic system of winner-take-all capitalism is anathema to any system that can perpetuate itself in the natural world without collapsing. At some point every logistic curve meets its constraint. The parasite cannot continue to grow exponentially within its host.

In the image above, the rectangular areas correspond to the total after-tax income of equal groupings of American households. The large rectangle at the top left is the average after-tax annual income of the wealthiest 11,600 households (some households at the top of that group keep more than $100 million each year, but the average is $36.4 million). Keep in mind that this represents income only. As the 2021 piece in ProPublica makes abundantly clear, the wealthiest Americans often do not even take income, as a way to avoid taxes. On the lower right, where the image gets denser, are the after-tax incomes of the majority of Americans. There are 32,800 people (2.83 persons per household) in every one of the rectangles, even the ones so tiny you can't see them.

If you look closely you will see a cyan square. That square is the absolute middle grouping of 11,600 households (there are the same number of households above and below). That cyan square represents the 11,600 households that live on $40,465 after federal taxes each year. Households to the right of and below that tiny cyan square make less than that. They are the lower half. The households to the left and above that small square make above-average income.

Within that upper half of the population, 7.4% of households reported their self-identified race as Black to the United States census. If income distribution were colorblind in practice, then 13.4% of upper-half households would be Black, because that is the percentage of the overall population. Instead, in the upper half of the socioeconomic ladder there are half as many Black households as we would expect within a colorblind society. The same disparity can be seen in Hispanic, Latinx, and Native American populations. When you look at the households in the top 5% of incomes (those making more than $400,000), the disparities are even greater, with only 4% self-identifying as Black. I point this out only to say that it is possible to do better.
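To make the Gini coefficient concrete, here is a minimal sketch, in Python, of how it can be computed from a list of household incomes. This is an illustration only; the income figures are hypothetical placeholders and are not drawn from the census data discussed above.

```python
# Minimal sketch of a Gini coefficient calculation (hypothetical incomes,
# not the census data discussed in the text).

def gini(incomes):
    """Return the Gini coefficient (0 = perfect equality, ~1 = maximal inequality).

    Uses the mean-absolute-difference definition: the average gap between
    every pair of incomes, divided by twice the mean income.
    """
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # For sorted data, the sum of all pairwise gaps reduces to a single pass:
    # sum over i of (2*i - n + 1) * x_i, avoiding an O(n^2) double loop.
    weighted = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return weighted / (n * total)

if __name__ == "__main__":
    equal = [40_000] * 5                                    # everyone earns the same
    skewed = [10_000, 20_000, 30_000, 40_000, 36_400_000]   # one household far above the rest
    print(f"Gini, equal incomes:  {gini(equal):.2f}")   # 0.00
    print(f"Gini, skewed incomes: {gini(skewed):.2f}")  # about 0.80
```

Running the sketch shows how a single extreme income pushes the coefficient toward 1, which is the same dynamic the national figures above describe at scale.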
I'm not implying that any of this is the fault of any white person, or that it is the result of any conspiracy by any group of white people. I'm not casting blame. Structural inequities are just that: structural. We can decide to change our structures once we are aware of their failings.

There are a lot of poor people in the richest country on earth. If you look closely you can see a dark blue overlay in the bottom right corner. That's the 38.1 million people who live in poverty. For most of them every day is a struggle to survive. Twelve million are children. More than half a million people will sleep on the street tonight. It's not necessary. There is plenty of wealth to go around in America.

When you look at the image above, imagine that the tiny squares you can't even see (they are so small in the bottom right corner) are each just large enough for a person to stand shoulder to shoulder with her neighbor. When you get to the top left rectangle, each person is standing more than two football fields apart. They would feel completely alone. They couldn't see the next person without binoculars. This is America in a way that we rarely get to see it (we've designed poverty to be hidden).

This radical inequality is the root of so many of our problems, including partisanship, crime, incarceration, homelessness, entrenched racism, antisemitism and xenophobia (looking for others to blame for your misfortune), poor health, substance abuse, poor-quality education, and the politicization of issues like public health. In her book Doughnut Economics, Kate Raworth runs through the research that verifies that more unequal nations "tend to have more teenage pregnancy, mental illness, drug abuse, obesity, prisoners, school dropouts, and community breakdown, along with lower life expectancy, lower status for women and lower levels of trust." This makes sense, because all of those social ills could be mitigated through investment in education, healthcare, treatment centers, community projects, infrastructure, and social programs that rely on public spending by government agencies. The hoarding of wealth by the top 1% of income earners limits our ability to fund such programs.

Inequality also impacts democratic elections when money is considered speech under the Constitution, hinders our ability to act collectively to address environmental issues, and promotes conspicuous consumption. By separating classes of our society to such polar extremes, political divisions also become magnified. When access to capital is equated with free speech, a kind of absolute power will inevitably corrupt the system, leading to kleptocracy and oligarchy.

The above image is a tree diagram of after-tax income during one year. It does not address wealth, which is accumulated income year after year. The households in the top groupings in the upper left of the image were most likely born there and will most likely die there. They get to keep piling on year after year, using more and more of their growing capital to accumulate more capital, while those at the bottom struggle every day to get out of debt. There is not much mobility across this chart. A similar diagram of wealth (the link does not break down the top 1% into finer detail) is even more skewed, and it would need to show the negative wealth (the absolute debt burden) of the lower 10%. If we all took a vote, would the households taking home $41,000 or less every year (the majority of Americans) be in favor of designing a more equitable economic system?
The bottom half of our nation's income earners includes the Americans we have come to call "essential workers" during the coronavirus pandemic—the farmworkers, meatpackers, medical workers, sanitation workers, retail workers, first responders, postal workers, and bus drivers. According to the Federal Reserve, "Among people who were working in February, almost 40 percent of those in households making less than $40,000 a year had lost a job in March." Can we instead design a socioeconomic system that provides a sense of security and a living wage to these foundational members of our society? Half of the population experiences almost daily stress related to money and basic survival. The intergenerational trauma brought on by unnecessary poverty and extreme inequality has negative impacts on all tiers of society, entrenches class immobility, and reinforces what Isabel Wilkerson calls a modern caste system.

Studies of inequality in educational outcomes have pointed to early childhood education as a defining factor. Children whose parents struggle to make ends meet do not have access to the kind of pre-kindergarten learning experiences available to children from more affluent households. Those who are fortunate enough to live in the top 10% are mostly able to live lives secluded from reminders of poverty. They are able to provide opportunity and education to their children. But the global coronavirus pandemic has shattered this illusion of insulation. We can see now how interdependent we all are upon one another.

Part of the design of our present socioeconomic system is inherited from earlier systems. It is born of a fetishism of the aristocratic class, and of the related normalization of nepotism and cronyism in life and politics. In the modern era this design feature has invaded the zeitgeist of nearly all political commentary, with a media class and political class ensconced within a top-1% information echo chamber with a megaphone of unprecedented proportion. The divide between class perceptions has fueled conspiracy theories and allowed demagoguing politicians to create expanding layers of rhetorical wedges between people, dividing and conquering, redirecting blame for personal misfortunes brought on by the structural failings of capitalism onto traditional scapegoats: BIPOC and immigrants.

The divide between class perceptions also leads to the use of words like "criminal" and "inmate" to dehumanize people of lower classes who are in many cases themselves victims of our present system of injustice, which locks people away for nonviolent infractions and deports people for minor drug possession—people with families who then suffer generational trauma—while failing to hold criminals of higher class accountable when they commit fraud, wage theft, bribery, or other crimes with far more serious social victims.

"Crime is a problem of a diseased society, which neglects its marginalized people. Policing is not the solution to crime."
– Rep. Alexandria Ocasio-Cortez

The above words ring true to many of us. We feel deep down that if only we could remove the social conditions that give rise to acts of desperation and avarice, we could do a far more efficient job of reducing crime rates than is possible through further expansion of the police state. At some point there is a limit to the draconian military tactics that law enforcement can take to stem the social unrest that results from massive income and wealth inequality. We may be running up against that limit today in America.

Design can change the world.
It can also rig the system. There is not much that we've left to chance over the years. We've designed it all. Those making design decisions, however, have not always had at heart the interests of the working class or of mother nature. More often than not they were redesigning one small part of an overly complex system already full of loopholes. The result is that the design of our socioeconomic systems over the past 60 years has intentionally rigged our economy to siphon income up to those who already hold the vast majority of the nation's wealth. Many economists, like Thomas Piketty, make a convincing case through data that ever-expanding inequality is a fundamental design feature of capitalism—a feature that must be transcended through progressive taxation, public investment, pro-labor policy, and regulation. He writes that "extremely high levels" of wealth inequality are "incompatible with the meritocratic values and principles of social justice fundamental to modern democratic societies."

Our present system has also been designed to exploit nature as a resource to the point where we will soon witness major ecosystem collapse and the most dire effects of climate change if we maintain our rate of increase of consumption and pollution. It has led us to the point where we have locked in 1.5 degrees Celsius of global warming even if we were to stop burning fossil fuels this decade.

This site is a collection of thoughts on inequality and on the incompatibility between modern capitalism and environmental sustainability, with some sketches of potential solutions. In thinking through new systems, I try to maintain the best parts of capitalism and leverage the power of the marketplace of goods, services, and ideas, while eradicating poverty and all of its adjacent social problems. It's my hope that my ideas will find resonance across the political divide. They are non-partisan. Still, I have no illusion that Congress would enact a new Terrametric monetary standard anytime soon. These are most likely policy proposals for a decade or more in the future.

In these writings you'll get to see what your after-tax income would be under two new systems of taxation. You'll learn about a new system of wealth creation, terrametrism, that can replace the one we currently have. The new system replaces the broken foundation of capitalism with a revaluation of value, to align the social wealth of human economies with our sustainable stewardship of the planet. I invite you to add your own thoughts and comments, and I hope to collaborate with you on the design of this new system of political economy for a woke and climate-conscious world. The eventual goal is publication of these and similar writings in a book that can help provide a blueprint to a truly sustainable and equitable future, after I've been able to respond to constructive criticisms and arguments that point out the weaknesses and unanticipated consequences of these ideas. If you have such criticism, please let me know!

Robert Ferry is a LEED-accredited architect and the co-founder, with Elizabeth Monoian, of the Land Art Generator Initiative, a non-profit that is inspiring the world about the beauty and greatness of a post-carbon tomorrow.
What's the Purpose of a Syllabus?

Many students will recognize the syllabus as a reference guide for a particular course. It provides them with a compendium of information that they will consult throughout the course, including: logistical information, prerequisites, the instructor's contact information, course policies, due dates and requirements, a list of resources, and grading criteria. It outlines clearly what a student must do to be successful in the course.

The most effective syllabi not only act as a reference guide for students, but also function as an invitation to learning (Bain, What The Best College Teachers Do, 2004, p. 75). They set the tone for the course as they communicate with students about what they can expect from you, why they should take a course, and what they'll have the opportunity to learn and learn to do while engaging in it. In this way, the syllabus acts as a "promise" as much as it is a contract.

Constructing a Syllabus: A Checklist

The syllabus checklist below outlines the important sections of effective, learner-centered syllabi. If you are new to syllabus design or looking for suggestions on how to revise your syllabus, you may wish to consider using our syllabus template. This template includes elements of effective syllabi, as well as recommended language related to University policies and resources for students. Content should be customized to fit the course, but instructors are welcome to copy any language from this document that they find suitable (this is particularly recommended for the "Resources for Students" and the "University-Wide Policies" sections). Note that this template was adapted based on suggestions developed by the Inclusive Teaching and Learning Fellows (2017), and also includes updated Fall 2021 information from the provost and campus partners.

Basic Course Information
- Department, Course Number, and Section(s)
- Class Meeting Time(s) and Location(s)
- Consider adding a description of your Mode(s) of Instruction: In-person, Hyflex, Hybrid, etc.
- Preferred contact information (email address, office phone number)
- Office location; phone
- Contact information for AIs and/or TAs or other course support staff
- Let your students know the process for attending office hours: will they be in-person? online? If online, is there a permanent Zoom link for your office hours this semester?
- Note how students should expect to hear from you in an emergency. Make it clear where they should go for updates and announcements.

Course Description and Course Goals
- Provide a course description consistent with that which appears in the course listings, as well as any prerequisites for taking the course.
- You may also provide more detailed information about the course that will help students feel "invited" into the learning experience. You might answer the following questions: How will taking the course prepare students for future learning and/or professional work? How will the learning they will engage in during this course connect to their lives outside of the course? How will the course prepare students to be engaged citizens of the world and their local communities?
- Consider listing 4-6 student-centered course goals or learning objectives. Objectives generally answer the question: What should your students learn or be able to do as a result of participating successfully in your course? Identify modes of thinking and transferrable skills when possible. The best constructed goals are specific, measurable, and attainable.
Texts, Materials, and Supplies
- List required and non-required texts including: title, author, ISBN #, edition, and where each text can be purchased, borrowed from, or found (e.g. Canvas course page).
- List all required materials or equipment (e.g. lab notebooks, specific calculators, safety equipment, supplies) and where to find these items.
- Include information about any required field trips or class events that have an additional cost or that will occur outside of regular class time.
- Note how students should plan to access any digital course content.
- Consider a statement indicating free or reduced-cost options that exist for obtaining course materials. Further, encourage students to speak with you if they experience logistical challenges in obtaining materials or participating in required experiences such as field trips or off-campus meetings.
Grading
- Provide a statement of your grading approach or philosophy that explains why you grade the way you do and offers some detail about how you will assess student work.
- Provide a grading scale (e.g. 90-100 A) and a breakdown of how much each individual assignment or group of assignments is worth in terms of the overall grade. Make it clear to students whether you are using a points system or percentages.
- Indicate your policy on late work, missed exams, and regrading. Regrading is especially important to clarify if you have AIs or TAs who will be grading in the course.
- Provide a statement on academic integrity. This might include pertinent definitions (e.g. plagiarism), information about when collaboration is authorized, information about what appropriate collaboration looks like for various activities or assignments, and expectations for where and when content from the course is to be shared or not shared. Also consider including information about the consequences of an academic integrity infraction and links to further information about school academic integrity policies.
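Because the Grading section above asks you to publish both a scale and a weighting scheme, a small worked example can help verify that the numbers you print in the syllabus add up. The sketch below is only an illustration (the component names, weights, and letter-grade cutoffs are hypothetical placeholders, not values recommended by the template), but it shows how a percentage-based breakdown translates into a final course grade.

# Minimal sketch (hypothetical weights and cutoffs): turning a weighted
# grade breakdown into a final percentage and letter grade.

WEIGHTS = {            # must sum to 1.0, i.e. 100% of the course grade
    "exams": 0.45,
    "homework": 0.30,
    "participation": 0.10,
    "final_project": 0.15,
}

CUTOFFS = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]  # e.g. "90-100 A"


def final_grade(scores: dict[str, float]) -> tuple[float, str]:
    """Combine per-component percentages (0-100) into a course percentage and letter."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must total 100%"
    percent = sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)
    letter = next(grade for cutoff, grade in CUTOFFS if percent >= cutoff)
    return round(percent, 1), letter


if __name__ == "__main__":
    # 0.45*84 + 0.30*90 + 0.10*100 + 0.15*70 = 85.3, which falls in the "B" band.
    print(final_grade({"exams": 84, "homework": 90, "participation": 100, "final_project": 70}))

Whatever scheme you choose, publishing the exact weights and cutoffs, and being able to reproduce the arithmetic on request, is what makes a points-versus-percentages policy transparent when a student questions a grade.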
Assignments & Homework
- Describe each graded component in enough detail that students reading it will have a general understanding of the amount and type of work required. Include information about the assignment’s purpose. Example: Exams: There are three in-class exams that will allow you to demonstrate your learning on each of the three course units. The exam format will be short answer and essay questions, and they will cover material from each respective unit. In addition, the Unit 3 exam will contain a cumulative essay portion. I will provide you with a study guide before each exam, but students who do well do not wait until getting the guide to begin studying.
- Describe what students will be required to do to prepare for class and/or complete weekly homework. Include information here about “best practices” for maximizing their learning (e.g. attending study sessions, taking good notes).
Attendance, Participation, and Classroom Climate
- Describe your attendance policy. Particular attention should be paid to describing how illness/quarantine will be handled.
- Describe the function of classroom participation within the context of your course as well as your expectations for how students should participate. Explain whether participation is required and how it will be assessed. Example: Discussion and participation are a major emphasis in this course. This means that it is your responsibility to come to class ready and willing to take part in group knowledge building. Your in-class participation grade for this class will be primarily based upon the small group work and activities that we do in class. This grade will also reflect your level of investment in classroom discussion and how often you bring required materials to class. I will provide you with a provisional participation grade at three checkpoints during the semester.
- Consider describing what students should do if they or their loved ones get sick and they are unable to fully participate in the class.
- Explain your policy for students using technology in the classroom.
- Consider including ground rules for appropriate classroom interactions, as well as a clear statement of expectations that classroom interactions will remain civil, respectful, and supportive. You may wish to draw language from the Standing Committee on Facilitating Inclusive Classrooms’ Inclusive Learning Environment Statement.
- Encourage students to speak with you, the department chair, or their advisors about any concerns they have about classroom dynamics and/or classroom climate.
Other Sections that You Might Consider Including
- If applicable: Ground Rules for Online Discussion & Zoom/Canvas Netiquette: What rules will you establish for appropriate participation in Zoom discussion? What elements of netiquette should students follow in live or face-to-face settings?
- Technical Requirements and Support Available: What kinds of technology and technology access will students need to participate successfully in your course? What additional EdTech tools will they need to learn? Where should students go for tech support?
- Course Website/Canvas Usage Description: How will students use your course website or Canvas course shell? What will students do on your website or in your Canvas course? Where should they expect to find readings, assignment descriptions, discussion threads, grades, etc.?
- For Remote Students: Description of Successful Online Learners: What are the characteristics of successful remote learners? What steps can students take to ensure that they make the most out of their courses if they are participating remotely?
COVID-19 Health and Safety Protocols
NOTE: We are waiting on an updated version of this from the Provost’s Office for Fall 2022.
Exceptions to course attendance policies, expectations, and requirements because of a COVID-19 diagnosis, symptoms consistent with COVID-19, or exposure to a person with a confirmed or suspected COVID-19 diagnosis that requires quarantine or isolation will be made in collaboration between the student and instructor. In these cases, please notify your instructor as soon as possible to discuss appropriate accommodations. While on campus, it is imperative that students follow all public health guidelines established to reduce the risk of COVID-19 transmission within our community. The full set of University protocols can be found at https://covid19.wustl.edu/health-safety/. This includes:
- Completing a self-screening using the WashU COVID-19 Screening app every day before coming to campus or leaving your residence hall room. If you do not receive a green check and pass the screening, you are not permitted to come to campus or leave your residence hall room. You must contact the COVID Call Center (314-362-5056) or the Habif Health and Wellness Center (314) 935-6666 immediately.
Note: In addition to the symptoms listed in the screening tool, everyone should also pay attention to symptoms that are new or different for them, including things like headache and congestion, particularly in combination with diarrhea. These can also be signs of COVID-19. Call the COVID Call Center or Habif to report these symptoms.
- Complying with universal masking. All individuals on campus must wear disposable masks or cloth face coverings while occupying indoor public settings, including: multi-person offices, hallways, stairwells, elevators, meeting rooms, classrooms and restrooms. Masks are encouraged but not required for outdoor activities, particularly at large events or in crowded settings. Students with disabilities for whom masked instructors or classmates create a communication barrier are encouraged to contact Disability Resources (www.disability.wustl.edu) or talk to their instructor for assistance in determining reasonable adjustments. Adjustments may involve amplification devices, captioning, or clear masks, but will not allow for the disregard of mask policies.
- Maintaining physical distancing as needed. While distancing requirements have been removed for vaccinated students, those who are not fully vaccinated are strongly encouraged, for their own health, to maintain a distance of 6 ft from others in the classroom. If you are not able to be vaccinated or have conditions that may put you at increased risk of failed immunity, and classroom activities would bring you into frequent proximity to other students, contact your instructor to discuss alternatives.
- Practicing healthy personal hygiene, including frequent handwashing with soap and warm water for at least 20 seconds and/or using hand sanitizer with at least 60% alcohol.
University-Wide Policies
Note that the Provost’s Office strongly recommends that all policies listed below be included in university syllabi.
Reporting Sexual Assault or Harassment
If a student discusses or discloses an instance of sexual assault, sex discrimination, sexual harassment, dating violence, domestic violence or stalking, or if a faculty member otherwise observes or becomes aware of such an allegation, the faculty member will keep the information as private as possible, but as a faculty member of Washington University, they are required to immediately report it to the Department Chair, Dean, or to Ms. Cynthia Copeland, the Associate Title IX Coordinator, at (314) 935-3411, firstname.lastname@example.org. Additionally, you can report incidents or complaints to the Office of Student Conduct and Community Standards or by contacting WUPD at (314) 935-5555 or your local law enforcement agency. See: Title IX.
Disability Resources
WashU supports the right of all enrolled students to an equitable educational opportunity, and strives to create an inclusive learning environment. In the event the physical or online environment results in barriers to the inclusion of a student due to a disability, they should notify the instructor as soon as possible. Disabled students requiring adjustments to equitably complete expectations in this course should contact WashU’s Disability Resources (DR) and engage in a process for determining and communicating reasonable accommodations. Because accommodations are not applied retroactively, DR recommends initiating requests prior to, or at the beginning of, the academic term to avoid delays in accessing accommodations once classes begin.
Once established, responsibility for disability-related accommodations and access is shared by Disability Resources, faculty, and the student. Disability Resources: http://www.disability.wustl.edu/; (314) 935-5970
Statement on Military Service Leave
Washington University recognizes that students serving in the U.S. Armed Forces and their family members may encounter situations where military service forces them to withdraw from a course of study, sometimes with little notice. Students may contact the Office of Military and Veteran Services at (314) 935-2609 or email@example.com and their academic dean for guidance and assistance. See: https://veterans.wustl.edu/policies/policy-for-military-students/.
Preferred Names and Pronouns
In order to affirm each person’s gender identity and lived experiences, it is important that we ask and check in with others about pronouns. This simple effort can make a profound difference in a person’s experience of safety, respect, and support. See: https://students.wustl.edu/pronouns-information/, https://registrar.wustl.edu/student-records/ssn-name-changes/preferred-name/.
Emergency Preparedness
Before an emergency, familiarize yourself with the building(s) that you frequent. Know the layout, including exit locations, stairwells and the Emergency Assembly Point (EAP). Review the “Quick Guide for Emergencies” that is found near the door in many classrooms for specific emergency information and instructions. For additional information and EAP maps, visit emergency.wustl.edu. To ensure that you receive emergency notifications, make sure your information and cell phone number are updated in SIS, and/or download the WUSTL app and enable notifications.
To report an emergency:
Danforth Campus: (314) 935-5555
School of Medicine Campus: (314) 362-4357
North/West/South and Off Campus: 911 then (314) 935-5555
Academic Integrity
Effective learning, teaching and research all depend upon the ability of members of the academic community to trust one another and to trust the integrity of work that is submitted for academic credit or conducted in the wider arena of scholarly research. Such an atmosphere of mutual trust fosters the free exchange of ideas and enables all members of the community to achieve their highest potential. In all academic work, the ideas and contributions of others must be appropriately acknowledged, and work that is presented as original must be, in fact, original. Faculty, students and administrative staff all share the responsibility of ensuring the honesty and fairness of the intellectual environment at Washington University in St. Louis. For additional details on the university-wide Undergraduate Academic Integrity policy, please see: https://wustl.edu/about/compliance-policies/academic-policies/undergraduate-student-academic-integrity-policy/. Instructors are encouraged to include in their syllabus a link to school-specific information on Academic Integrity policies and procedures.
Turnitin
(Note that this should only be included if you intend to use Turnitin in your course.) In taking this course, students may be expected to submit papers and assignments through Turnitin for detection of potential plagiarism and other academic integrity concerns. If students do not have an account with Turnitin and/or do not utilize Turnitin when submitting their papers and assignments, the instructor may upload their paper or assignment to Turnitin for processing and review.
Resources for Students
The syllabus can be a place for students to find support for academic and non-academic challenges that can impact their learning. Resources for students that can be highlighted in the syllabus include those listed below.
Confidential Resources for Instances of Sexual Assault, Sex Discrimination, Sexual Harassment, Dating Violence, Domestic Violence, or Stalking
The University is committed to offering reasonable academic accommodations (e.g. a no-contact order, course changes) to students who are victims of relationship or sexual violence, regardless of whether they seek criminal or disciplinary action. If a student needs to explore options for medical care, protections, or reporting, or would like to receive individual counseling services, there are free, confidential support resources and professional counseling services available through the Relationship and Sexual Violence Prevention (RSVP) Center. If you need to request such accommodations, please contact RSVP to schedule an appointment with a confidential and licensed counselor. Although information shared with counselors is confidential, requests for accommodations will be coordinated with the appropriate University administrators and faculty. The RSVP Center is located in Seigle Hall, Suite 435, and can be reached at firstname.lastname@example.org or (314) 935-3445. For after-hours emergency response services, call (314) 935-6666 or (314) 935-5555 and ask to speak with an RSVP Counselor on call. See: RSVP Center.
Bias Report and Support System (BRSS)
The University has a process through which students, faculty, staff, and community members who have experienced or witnessed incidents of bias, prejudice, or discrimination against a student can report their experiences to the University’s Bias Report and Support System (BRSS) team. To report an instance of bias, visit https://students.wustl.edu/bias-report-support-system/.
Mental Health Services
Mental Health Services’ professional staff members work with students to resolve personal and interpersonal difficulties, many of which can affect a student’s academic experience. These include conflicts with or worry about friends or family, concerns about eating or drinking patterns, feelings of anxiety or depression, and thoughts of suicide. See: https://students.wustl.edu/mental-health-services/.
The Division of Student Affairs also offers a telehealth program to students called TimelyCare. While students are encouraged to visit the Habif Health and Wellness Center during business hours, this additional service also provides after-hours access to medical care and 24/7 access to mental telehealth care across the United States, with no cost at the time of your visit. Students who pay the Health and Wellness fee are eligible for this service. Additionally, see the mental health services offered through the RSVP Center listed above.
WashU Cares
WashU Cares specializes in providing referrals and resources, both on and off campus, for mental health, medical, financial, and academic needs through supportive case management. WashU Cares also receives reports on students who may need help connecting to resources or whom a campus partner is concerned about. If you are concerned about a student or yourself, you can file a report here: https://washucares.wustl.edu/.
The Writing Center
The Writing Center offers free writing support to all Washington University undergraduate and graduate students.
Staff members will work with students on any kind of writing project, including essays, writing assignments, personal statements, theses, and dissertations. They can help at any stage of the process, including brainstorming, developing and clarifying an argument, organizing evidence, or improving style. Instead of simply editing or proofreading papers, the tutors will ask questions and have a conversation with the writer about their ideas and reasoning, allowing for a higher-order revision of the work. They will also spend some time looking at sentence-level patterns to teach students to edit their own work. The Center is located in Mallinckrodt and is open Sunday through Thursday from 11:00 am to 9:00 pm and Friday from 11:00 am to 5:00 pm. Students are seen primarily by appointment, but walk-ins will be accepted as the schedule allows. Both in-person and online appointments are available. To make an appointment, go to writingcenter.wustl.edu.
Engineering Communications Center
The Engineering Communications Center offers students in the McKelvey School of Engineering one-on-one help with oral presentations, writing assignments, and other communications projects. They are located in Urbauer Hall, Rm. 104. To schedule an appointment, please email the ECC faculty at email@example.com.
The Learning Center
The Learning Center provides support programs, including course-specific mentoring and academic skills coaching (study and test-taking strategies, time management, etc.), that enhance undergraduate students’ academic progress. Contact them at firstname.lastname@example.org or visit ctl.wustl.edu/learningcenter to find out what support they may offer for your classes.
Center for Diversity and Inclusion (CDI)
The Center for Diversity and Inclusion (CDI) supports and advocates for undergraduate, graduate, and professional school students from underrepresented and/or marginalized populations, collaborates with campus and community partners, and promotes dialogue and social change to cultivate and foster a supportive campus climate for students of all backgrounds, cultures, and identities. See: https://diversityinclusion.wustl.edu/.
Civic and Community Engagement
Students play an essential role in a vibrant and functioning democracy! In addition to the November 2022 midterm elections, state and local elections will take place throughout the year and have a direct impact on our communities. You can register to vote, request an absentee ballot, confirm your polling location, and get Election Day reminders at http://wustl.turbovote.org for any of the 50 states and Washington D.C. WashU students are considered Missouri residents, and eligible student voters can register to vote in the state of Missouri or their home state. The deadline to register to vote in Missouri in this year’s midterm election is Wednesday, October 12, 2022. The election will take place on Tuesday, November 8, 2022. If you are ineligible to vote, you can participate by encouraging your friends to register and vote, engaging your peers in local issues, and taking part in other civic and community engagement activities. For more resources on voting and other civic and community engagement opportunities, please visit http://washuvotes.wustl.edu and http://gephardtinstitute.wustl.edu.
Course Schedule
Include dates you plan to cover specific topics (with reading assignments), the due dates for major assignments, and the due date for the final exam. Consult relevant academic calendars and keep in mind religious holidays and significant campus events.
The Office of Religious, Spiritual and Ethical Life maintains a calendar of many religious holidays observed by the WashU community. Listed below are dates of some of the major religious holidays or obligations in the Fall 2022/Spring 2023 semesters that may pose potential conflicts for observant students.
The Jewish holidays that may pose potential scheduling conflicts begin at sundown on the first day listed and end at nightfall of the last day shown:
September 25-27 Rosh Hashanah
October 4-5 Yom Kippur
October 9-11 Sukkot Opening Days
October 16-17 Shemini Atzeret
October 17-18 Simchat Torah
April 5-7 Passover Opening Days
April 10-13 Passover Closing Days
May 25-27 Shavuot
Additionally, the Sabbath/Shabbat is celebrated each Friday at sundown through Saturday at nightfall.
Baha’i students may require observance on the following days:
October 26-27 Twin Holy Days
May 24 Declaration of the Bab
The dates this fall that may present a conflict for Hindu students are:
October 5 Dussehra
October 24-28 Diwali (also celebrated by Jains and Sikhs)
Muslim students may require observance on the following days:
March 22-April 20 (approximately) Ramadan
April 21-22 Eid al-Fitr
Universal Design for Learning
One important consideration when preparing a syllabus is making sure that it is clear and easy to read for all students. Instructors should consider following Universal Design for Learning (UDL) guidelines for accessible texts by: using a clear, easy-to-read font style, avoiding italics, organizing the document clearly and with headings, considering color contrast when adding colored text or images (a short contrast-check sketch appears at the end of this page), and adding alt-text to digital copies (CAST UDL Syllabus). Instructors may also wish to consider where their syllabus will “live.” Frequently, the syllabus is distributed on the first day of class, but instructors may also wish to add the syllabus to the course Canvas page or course website as well. Having the syllabus available digitally makes it easier to update in response to unforeseeable circumstances (e.g. a snow day) or necessary changes (e.g. students are struggling with a particular concept and the class must review rather than moving on). While it’s important to be responsive to student needs, students may also feel disoriented if too many changes to the syllabus occur in a single course. It is critical to help students understand the reason for any change that is made to the syllabus mid-semester. Finally, instructors should carefully consider how they will introduce the syllabus to students. While it may be tempting to read your syllabus to students on the first day, there are many other strategies that can be employed that may be more effective at helping students understand the course and setting the right tone for the rest of the semester. Some popular strategies include creating a “syllabus quiz,” asking students to identify information in the syllabus in small groups, and using the allotted syllabus time for individual reading and reflection followed by large group discussion that clarifies questions and concerns.
*This checklist has been revised August 10, 2021.
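As a companion to the color-contrast advice above, the short sketch below applies the standard WCAG 2.x contrast-ratio formula to two hex colors; the 4.5:1 threshold is the WCAG AA minimum for normal body text, and the example colors are arbitrary placeholders rather than recommendations.

# Minimal sketch: checking WCAG 2.x color contrast for syllabus text.
# The example colors are arbitrary; substitute your own text/background values.

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of an sRGB color written as '#RRGGBB'."""
    linear = []
    for i in (0, 2, 4):
        c = int(hex_color.lstrip("#")[i:i + 2], 16) / 255
        # Linearize each sRGB channel per the WCAG definition.
        linear.append(c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4)
    r, g, b = linear
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(foreground: str, background: str) -> float:
    """WCAG contrast ratio, from 1:1 (identical colors) to 21:1 (black on white)."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)


if __name__ == "__main__":
    ratio = contrast_ratio("#767676", "#FFFFFF")  # mid-grey text on a white page
    print(f"{ratio:.2f}:1", "passes" if ratio >= 4.5 else "fails", "WCAG AA for body text")

Black text on a white background comes out at 21:1, so plain syllabus text has plenty of headroom; the check is mainly useful when decorative colors are introduced for headings or emphasis.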
By William F.B. Vodrey The Cleveland Civil War Roundtable Copyright © 2008, All Rights Reserved Editor’s note: This article is adapted from the presentation William Vodrey made to the Cleveland Civil War Roundtable in February 2006. Michael Shaara’s Pulitzer Prize-winning 1974 novel The Killer Angels and the movie Gettysburg reintroduced a new generation to a long-obscure hero of the battle, Joshua Lawrence Chamberlain. Chamberlain, then colonel of the 20th Maine infantry regiment, saved the Union left with a desperate bayonet charge down Little Round Top on July 2, 1863. Chamberlain’s reputation was also boosted by Ken Burns’s PBS series The Civil War. He was a genuine hero, much deserving of our study, admiration and respect. Had he not been where he was, when he was, the Confederacy might well have won the Civil War. But who was Chamberlain, really? It’s easy to run out of adjectives in describing him, just as it’s easy to make him sound too good to be true: courageous, learned, selfless, resolute, thoughtful, articulate, modest. But even Jeff Daniels’s excellent portrayal of him in Gettysburg doesn’t convey the full picture of Joshua Lawrence Chamberlain. Bruce Catton called him a “hawk-nosed theologian turned soldier.” James M. McPherson wrote, “A man of letters and peace, he became an outstanding warrior.” Geoffrey C. Ward, author of the book accompanying Ken Burns’s series, wrote, “I confess that [I began further research of Chamberlain] with some trepidation, concerned that our admiring portrait of him might somehow have been overdrawn, that a persistent biographer would have turned up flaws in a character that had seemed to us astonishingly consistent. I needn’t have worried. Chamberlain is just as impressive as we thought he was – and more interesting.” Joshua Lawrence Chamberlain was born September 8, 1828 in Brewer, Maine, to Joshua Chamberlain Jr. and Sarah “Sally” Brastow. He was called “Lawrence” by his parents and the four siblings who came along over the years: Horace, Sarah, John, and Thomas; the latter two would later serve under his command in the 20th Maine. Chamberlain’s ancestors had come from Massachusetts to Maine in the late 1700s; a female ancestor had been falsely accused of witchcraft and died in a Cambridge jail in September 1692. Chamberlain came from a distinguished military background, although he modestly would have been the first to deny it: his great-grandfather served in the colonial and Revolutionary wars, his grandfather was a colonel in the War of 1812, and his father acted as second-in-command of Maine forces in the so-called Aroostook War against New Brunswick in 1839. In his youth, Chamberlain read, farmed, hunted, and sailed the family sloop off Bangor, Maine. His mother wanted him to be a clergyman, but his father wanted him to go to West Point and become a soldier. His mother won, but only in the short term. Chamberlain was educated at a military academy in Ellsworth, Maine, and graduated Phi Beta Kappa from Bowdoin College in Brunswick in 1852. In 1855, he earned a Bachelor’s of Divinity at the Bangor Theological Seminary. On December 7, 1855, he married Frances Caroline Adams, daughter of Ashur and Emily Adams of Boston, and a distant cousin of President John Quincy Adams. Chamberlain and his wife “Fannie,” as she was called, shared a love that would endure despite the strains of war and Chamberlain’s own long and devoted public service, which sometimes left Frances feeling neglected. 
They had two children who lived past infancy, Grace Dupee, born in 1856, and Harold Wyllys, born in 1858. Unfortunately, they also lost an unnamed infant son just a few days after birth in October 1857; as well as a daughter, Emily Steele, who died only a few months old in 1860; and another infant daughter, Gertrude Loraine, who was born and died in 1865. The year of his wedding, Chamberlain was appointed instructor in natural and revealed religion at Bowdoin College. He succeeded Calvin Stowe, whose wife Harriet Beecher Stowe wrote Uncle Tom’s Cabin while Chamberlain was a student at the college. It was the beginning of a distinguished lifelong teaching career: from 1856 to 1862 he was professor of rhetoric; from 1857 to 1861 instructor in modern languages, from 1861 to 1865 (in title, if not in actual duties) professor of modern languages. In 1862, Chamberlain was granted a two-year leave of absence for study abroad. Despite protests from the faculty (which didn’t want to lose so fine a teacher on the battlefield), he instead enlisted as lieutenant colonel of the 20th Maine Infantry regiment. Israel Washburn was the Governor of Maine at the time, and before he signed Chamberlain’s commission, he was warned about the young professor by other jealous claimants to the post: Chamberlain was “no fighter,” one man wrote; another contemptuously said that Chamberlain was “nothing at all.” As McPherson later wrote, Chamberlain “was not the only college professor in the Union army, but he was surely the only man in either army who could read seven languages: Greek, Latin, Hebrew, Arabic, Syriac, French, and German.” Certainly he was more of a scholar than a soldier then. In May 1863, however, Chamberlain became colonel of the 20th Maine upon the promotion of its colonel, Adelbert Ames. Chamberlain taught himself to be a soldier from both books and hard experience; his courage and fortitude soon became legendary. Chamberlain took part in 24 engagements in the Civil War, among them Antietam, Fredericksburg (at which he and his men, stranded overnight on the battlefield, were compelled to pile the bodies of their dead comrades before them as shields against the Confederate guns), Chancellorsville, Gettysburg, Spotsylvania, Cold Harbor, Petersburg, and Five Forks. Over the course of the war, troops under Chamberlain’s command took 2,700 prisoners and seized eight Confederate battle flags. He was wounded six times, and narrowly escaped capture three times – once, at Five Forks, thanks to a badly-faded uniform coat and a quickly improvised Virginia drawl: “Surrender? What’s the matter with you? What do you take me for? Don’t you see these Yanks right onto us? Come along with me and let’s break ’em.” His would-be captors were then themselves promptly captured. After Antietam he saw President Lincoln visit the battlefield, and wrote, “We could see the deep sadness in [Lincoln’s] face, and feel the burden on his heart thinking of his great commission to save this people and knowing that he could do this no otherwise than as he had been doing – by and through . . . these men.” There is virtual unanimity among historians that Chamberlain’s finest wartime hour was in the late afternoon on the second day of Gettysburg, July 2, 1863. Chamberlain was awarded the Congressional Medal of Honor “for daring heroism and great tenacity” for his regiment’s defense of Little Round Top, although he would not accept the medal until 1893, thirty years later. Lt. Col. Joseph B. 
Mitchell later wrote in his book on Civil War Medal of Honor winners, If, on the afternoon of July 2, 1863 a less capable officer had been in command of the 20th Maine, the Battle of Gettysburg would probably have been a Southern victory. Of all the Congressional Medals of Honor awarded in the history of our country, that won by Joshua Lawrence Chamberlain is particularly outstanding. Confederate and Union troops contesting Little Round Top, Shelby Foote agreed, “fought as if the outcome of the battle, and with it the war, depended on their valor: as indeed perhaps it did, since whoever had possession of this craggy height on the Union left would dominate the whole fishhook position.” Chamberlain’s 20th Maine, along with the 83rd Pennsylvania, the 44th New York, and the 16th Michigan were part of Col. Strong Vincent’s brigade, and were rushed to the crest of Little Round Top when Brigadier General Gouverneur K. Warren, General Meade’s chief of engineers, noticed that the high ground was unguarded against a Confederate advance. Geoffrey C. Ward describes the scene: “As Chamberlain and his two brothers, Tom and John, rode abreast together toward the hill, a Confederate shell narrowly missed them. ‘Boys,’ the colonel said, ‘another such shot might make it hard for Mother. Tom, go to the rear of the regiment and see that it is well closed up! John, pass up ahead and look out a place for our wounded.'” The fighting was fast and furious, and the Confederates charged up the hill repeatedly. Some forty thousand rounds were fired on that slope in less than an hour and a half; saplings halfway up the hill were gnawed in two by bullets. Chamberlain later wrote, “The facts were that, being ordered to hold that ground – the extreme left flank of the Union position – [‘at all hazards’] and finding myself unable to hold it by the mere defensive, after [more than an hour’s fighting, and after] more than a third of my men had fallen, and my ammunition was exhausted, as well as all we could snatch from the cartridge boxes of the fallen – friend and foe – upon the field, and having at that moment right upon me a third desperate onset of the enemy with more than three times my numbers, I saw no way to hold the position but to make a counter-charge with the bayonet, and to place myself at the head of it.” Chamberlain added in a masterpiece of understatement, “It happened that we were successful.” Another Maine soldier there that day, Theodore Gerrish, remembered it vividly: “The order is given, ‘Fix bayonets!’ and the steel shanks of the bayonets rattle upon the rifle barrels. ‘Charge bayonets! Charge!’ Every man understood in a moment that the movement was our only salvation, but there is a limit to human endurance and… for a brief moment the order was not obeyed, and the little line seemed to quail under the fearful fire that was being poured upon it… [then] with one wild yell of anguish wrung from its tortured heart, the regiment charged.” “…I remember that, as we struck the enemy’s onrushing lines, I was confronted by an officer, also in front of his line, who fired one shot of his revolver at my head within six feet of me. 
When, in an instant, the point of my sabre was at his throat, he quickly presented me with both his pistol and his sword, which I have preserved as memorials of my narrow escape… We cleared the enemy entirely away from the left flank of our lines, extended and secured the commanding heights still to our left, and brought back from our charge twice as many prisoners [from the 15th and 47th Alabama] as the entire number of men in our own ranks.” Col. William C. Oates of the 15th Alabama admitted, “When the signal was given we ran like a herd of wild cattle.” Chamberlain and “his men saved Little Round Top and the Army of the Potomac from defeat… Great events sometimes turn on comparatively small affairs.” Another Confederate soldier simply said, “We were never whipped before, and [we] never wanted to meet the 20th Maine again.” General James C. Rice, Chamberlain’s immediate superior at Gettysburg, wrote in his official report, “For the brilliant success of the second day’s struggle, history will give credit to the bravery and unflinching fortitude of [the 20th Maine] more than to any equal body of men upon the field – conduct, which as an eyewitness, I do not hesitate to say, had its inspiration and great success from the moral power and personal heroism of Colonel Chamberlain. Promotion is but a partial reward for his magnificent gallantry on the hard-won field of Gettysburg.” Chamberlain himself was far more modest, writing many years later, “It seems to me I did no more than should have been expected of me, and what it was my duty to do under the sudden and great responsibilities which fell upon me there.” McPherson has written that the novel The Killer Angels does “an ironic injustice to Chamberlain. Shaara’s novel ends with Lee’s retreat from Gettysburg, and thus ends most readers’ knowledge of Chamberlain. Yet he went on to become one of the most remarkable soldiers of the Civil War – indeed, in all of American history.” His skills recognized by Grant and others, Chamberlain rose to command of the 1st Brigade, 1st Division, Fifth Corps. On June 18, 1864, in the fighting before Petersburg, he was wounded so badly that it was thought he would die. A ricocheting minie ball went through his left thigh, smashing both hips, severing arteries, and piercing his bladder. Chamberlain stayed on his feet, rallying his men as he leaned on his sword, waiting until the troops had passed out of sight before sinking to the ground. Told of the extent of Chamberlain’s wounds, which had proved fatal to many another soldier before, General Ulysses S. Grant promoted him to brigadier general on the field, the first soldier to be so honored, and one of only two in the entire war. In his memoirs, Grant wrote, “Colonel J.L. Chamberlain, of the 20th Maine, was wounded on the 18th [of June]. He was gallantly leading his brigade at the time, as he had been in the habit of doing in all the engagements in which he had previously been engaged. He had several times been recommended for a brigadier-generalcy… On this occasion, however, I promoted him on the spot, and forwarded a copy of my order to the War Department, asking that my act might be confirmed without delay. This was done, and at last a gallant and meritorious officer received partial justice at the hands of his Government, which he had served so faithfully and so well.” Perhaps Grant was thinking of Chamberlain when he later remarked, “You can never tell what makes a general. 
Our war, and all wars, are surprises in that respect.” The New York newspapers reported Chamberlain’s death. However, he astounded everybody by not only surviving, but by taking to the field again just five weeks after being shot, still not completely healed. When his initial term of enlistment expired in 1864, Chamberlain was urged by his wife, family and friends to go home, but he would hear none of it. “I owe the country three years service. It is a time when every man should stand by his guns. And I am not scared or hurt enough yet to be willing to face to the rear, when other men are marching to the front. . . . And I am so confident of the sincerity of my motives that I can trust my own life & the welfare of my family in the hands of Providence.” On March 29, 1865, in fighting along the Quaker Road near Five Forks, Chamberlain was shot again. Chamberlain had been so prominent in his leadership in the face of danger that Sheridan himself exclaimed, “By God, that’s what I want to see! General officers at the front!” However, within hours, as Ward writes, “a minie ball pierced [Chamberlain’s] horse’s neck, tore though his left arm, then smashed into his chest just beneath his heart. A folded sheaf of orders and a pocket mirror backed with brass saved his life, but the ball still had enough force to spin round his torso, rip through the seam of his coat, and knock from his saddle the aide riding next to him. Chamberlain slumped into temporary unconsciousness. But when he came to and saw that his men had started to buckle under the intense Rebel fire, he insisted on riding up and down the lines, waving his sword and urging his men to hold. They did, while cheering their bloodied commander – whose courage so impressed the Confederates that they began to cheer him, too.” The newspapers again reported Chamberlain’s death. As McPherson writes, “[He] went Mark Twain one better: he twice had the pleasure of reading his own obituary.” Chamberlain’s distinguished conduct in attacking Lee’s right flank, despite his two cracked ribs and a bruised arm, earned him a brevetted rank of major general of volunteers. In the final campaign of the war in the East leading up to Lee’s surrender at Appomattox, Chamberlain commanded two brigades of the First Division of the Fifth Corps, and was personally selected by Grant to receive the Confederate surrender. The event has passed into legend, of course, not the least because of Chamberlain’s gallantry. Lee and Grant were elsewhere by then; Chamberlain faced Confederate General John B. Gordon at the head of the Army of Northern Virginia. Chamberlain ordered his men to salute their defeated countrymen, and Gordon would forever remember that Chamberlain saw to it that the Army of the Potomac “gave [them] a soldierly salute… a token of respect from Americans to Americans… [in a gesture of] mutual salutation and farewell… honor answering honor.” Bruce Catton noted that not everyone approved of the gesture at the time: Chamberlain “scandalized fire-eating patriots but gratified future generations” by ordering the salute. With the rest of the Army, Chamberlain mourned the death of President Lincoln, but he discouraged talk of revenge against the South for John Wilkes Booth’s crime. Chamberlain led the Fifth Corps in the Grand Review on May 23, 1865, a bright, clear day in Washington, and sat with President Andrew Johnson and other dignitaries in a reviewing stand opposite the White House. He wrote many years later, “It [was] the Army of the Potomac. 
After years of tragic history and dear-bought glories, gathering again on the banks of the river from which it took its departure and its name;… having kept the faith, having fought the good fight, now standing up to receive its benediction and dismissal, and bid farewell to comradeship so strangely dear… What far dreams drift over the spirit, of the days when we questioned what life should be, and answered for ourselves what we would be!” On June 16, 1866, Chamberlain was mustered out. Due to his fragile health, he declined an offer of a colonelcy in the regular Army and a command on the Rio Grande. He was by then a celebrated war hero, probably the most famous man from Maine after Hannibal Hamlin, Lincoln’s first Vice President. He decided to enter politics and, in November 1866, was elected Governor of Maine by the largest majority in the state’s history. He was reelected three times (Maine governors in those days served one-year terms), facing down political rivals and rebellious legislators with equal determination. Had the political winds blown just slightly differently, Chamberlain would likely have been a U.S. Senator and perhaps even, in time, President of the United States – and wouldn’t that have been something? We could do much worse, then and now. After his fourth term as Governor, Chamberlain returned to his beloved Bowdoin College, serving as president from 1871 to 1883. His sole defeat in the less bloody but no less heartfelt struggles of academia came when he insisted that students take part in military drill. Some students complained that drill took time away from their studies, and dissatisfaction with the new requirement spread quickly. A brewing boycott was quelled when Chamberlain threatened to expel any student who didn’t take part in drill, but drill was eventually made voluntary and then dropped altogether. “Joshua Lawrence Chamberlain,” Ward wrote, had at last “been beaten by an army of unruly schoolboys.” From 1874 to 1879, he was also professor of mental and moral philosophy and a lecturer on political science and public law, continuing to lecture on these subjects until 1885. In time, he would teach every subject in the school’s curriculum besides mathematics. During the winter of 1878-79, Maine was wracked by political controversy. The Democratic and Greenback Labor parties, led by Gov. Alonzo Garcelon, teamed up to seize control of the state legislature in a hotly-disputed election. There was a flurry of accusations of voting fraud. The state’s Republicans formed a rival legislature. Chamberlain was still a major general of the Maine militia, and he ordered the offices of the governor and his council sealed, their records secured. “Each side accused him of favoring the other. He paid no attention. Partisan newspapers demanded his arrest, even his assassination. Finally an armed and ugly crowd stormed into the capitol, threatening to shoot him. Chamberlain met them in the rotunda. ‘Men,’ he called out, ‘you wished to kill me, I hear. Killing is no new thing to me. I have offered myself to be killed many times, when I no more deserved it than I do now…. It is for me to see that the laws of this state are put into effect, without fraud, without force, but with calm thought and sincere purpose. I am here for that, and I shall do it. If anybody wants to kill me for it, here I am. Let him kill!’ Chamberlain opened his coat and waited. A Civil War veteran pushed to the front of the crowd. 
‘By God, old general,’ he shouted, ‘the first man that dares to lay a hand on you, I’ll kill him on the spot.’ The mob melted away.” Chamberlain kept the peace until the Maine Supreme Court ruled that the Republicans had fairly won the election, and the crisis passed. In later life, Chamberlain didn’t slow down much. He spoke for Maine at the Centennial Exposition in Philadelphia in 1876. He was one of the U.S. commissioners at the Universal Exposition in Paris in 1878, and wrote a widely-praised report on European methods of education. From 1884 to 1889, he kept himself busy with railroad and industrial investments in Florida; he found the warm weather there was better for his still-fragile health. ”There are great opportunities to get health and wealth [here],” he wrote to his sister Sarah, “and also to do good, and help other people.” In 1900, he was appointed by President McKinley to be Surveyor of the Port of Portland, Maine, a post he held until his death. Chamberlain was a prolific and talented writer. His The Passing of Armies: An Account Of The Final Campaign Of The Army Of The Potomac is a detailed description of the final exhausting days of the war, when the Army of the Potomac broke through Lee’s lines around Petersburg and Lee tried to escape to the southwest. Although Chamberlain’s writing may be a bit flowery for modern readers, the book is still an interesting account of the final death agonies of Lee’s army, and the growing exhilaration of the Federal troops, particularly of the Fifth Corps, giving chase. Chamberlain also stoutly defends Gen. Gouverneur K. Warren against those (Gen. Phil Sheridan among them) who criticized Warren’s conduct at the Battle of Five Forks. And when silence falls at last at Appomattox, you can easily imagine you are there. Chamberlain also wrote a definitive history of Maine, faithfully attended reunions of the 20th Maine and gave many, many speeches to veterans organizations around the country on his own experiences and the need to remember those who died in the Civil War. He helped survey the Gettysburg battlefield soon after the war, and he attended both the 25th and 50th anniversaries of the battle in 1888 and 1913, overwhelmed by memories of his men’s sacrifices. It was, he wrote, “a radiant fellowship of the fallen.” His wife Fannie died on October 18, 1905, just two months before their fiftieth wedding anniversary. Time took its toll on him and his comrades-in-arms, as it does to us all. Chamberlain said at a 1901 Memorial Day parade, “On each returning Memorial Day your thinning ranks, your feeble step, your greyer faces are tokens that would make me wholly sad, were it not for something undying in your eyes. And you, strong as your hearts are, do not wholly master the feeling that all is declining that made your worth, and the only struggle you can make now is against fast-coming oblivion. You hold together by the power of things you will not forget; though a shadow comes out of the cloud chilling you with the notion that these things and you are doomed to be forgotten.” Chamberlain, however, never forgot why he and all the men in blue fought. He wrote, “Slavery and freedom cannot live together. Had slavery been kept out of the fight, the Union would have gone down. But the enemies of the country were so misguided as to rest their cause upon it, and that was the destruction of it and of them. 
We did not go into that fight to strike at slavery directly; we were not thinking to solve that problem, but God, in His providence, in His justice, in His mercy, in His great covenant with our fathers, set slavery at the forefront, and it was swept aside as with a whirlwind, when the mighty pageant of the people passed on to its triumph.” He also wrote soon after the war, “There is a phrase abroad which obscures the legal and the moral questions involved in the issue, – indeed, which distorts and falsifies history: ‘The War Between the States.’ There are here no States outside of the Union. Resolving themselves out of it does not release them. Even were they successful in entrenching themselves in this attitude, they would only relapse into territories of the United States. Indeed, several of the States so resolving were never in their own right either States or Colonies; but their territories were purchased by the common treasury of the Union, and were admitted as States out of its grace and generosity… There was no war between the States. It was a war in the name of certain States to destroy the political existence of the United States.” Chamberlain’s wound from Petersburg never really healed; he lived in continual pain for the rest of his life, and for many years had a silver tube in his gut to drain the wound. Bruce Catton wrote that Chamberlain “somehow carried the wound around with him for the better part of half a century, building a military career on what a modern Army doctor would probably consider total disability.” Doctors attending him at his death on February 24, 1914 directly attributed his passing to infection of the wound, four months shy of fifty years since he was wounded at Petersburg, thus making Chamberlain “almost certainly the last Civil War soldier to die of wounds received in action,” as Catton would note. It was the eve of another great war that would change America forever. Chamberlain was buried with full military honors, and you may find his grave beside that of his wife, in Pine Grove Cemetery, near the Bowdoin campus in Brunswick. Geoffrey Ward writes that late in Chamberlain’s life, “when an author asked him for a first-person account of the action that had won him the Medal of Honor, Chamberlain declined, not wishing to appear immodest. ‘It would be impossible for you to say anything… that would savor of boasting,’ the writer responded [at once], ‘for your record as a brave soldier is so well known that self praise would necessarily fall far below what those who remember the dark days know to be true of you.'” Perhaps General Charles Griffin of the Fifth Corps said it best in describing Chamberlain: “You yourself, General, a youthful subordinate when I first took command of this division, now through so many deep experiences risen to be its tested, trusted, and beloved commander, – you are an example of what experiences of loyalty and fortitude, of change and constancy, have marked the career of this honored division…. You have written a deathless page on the records of your country’s history, and… your character and your valor have entered into her life for all the future.” But I think I should let Chamberlain himself have the last word, for by his life and in his service he proved its fundamental truth: “War is for the participants a test of character: it makes bad men worse and good men better.” Bibliography – Books Boritt, Gabor S., ed. Why the Confederacy Lost (Gettysburg Civil War Institute Books) (Oxford University Press, N.Y. 
1992)
Bowen, John Battlefields of the Civil War (Chartwell Books, London 1986)
Carroll, Les The Angel of Marye’s Heights: Sergeant Richard Kirkland’s Extraordinary Deed at Fredericksburg (Palmetto Bookworks, Columbia, S.C. 1994)
Catton, Bruce A Stillness at Appomattox (Army of the Potomac, Vol. 3) (Doubleday & Co., N.Y. 1953)
Catton, Bruce American Heritage Picture History of the Civil War (American Heritage Publishing Co., N.Y. repr. 1982)
Catton, Bruce Glory Road: The Bloody Route from Fredericksburg to Gettysburg (Doubleday & Co., N.Y. 1954)
Catton, Bruce Never Call Retreat (Centennial History of the Civil War, Vol. III, Doubleday & Co., N.Y. 1965)
Chamberlain, Joshua Lawrence The Passing of Armies: An Account Of The Final Campaign Of The Army Of The Potomac (G.P. Putnam’s Sons, N.Y. 1915; Bantam Books repr. 1993)
Clark, Charles E. Maine: A History (W.W. Norton & Co., N.Y. 1977)
Coddington, Edward B. The Gettysburg Campaign: A Study in Command (Scribner’s, N.Y. 1968)
Commager, Henry Steele, ed. The Blue and the Gray: The Story of the Civil War as Told By the Participants (Bobbs-Merrill Co., N.Y. 1950)
Foote, Shelby The Civil War: A Narrative (Random House, N.Y. 1963)
Golay, Michael To Gettysburg And Beyond: The Parallel Lives Of Joshua Chamberlain And Edward Porter Alexander (Crown Publishers, N.Y. 1994)
Johnson, Allen and Dumas Malone, eds. Dictionary of American Biography Volumes 1 – 10, Supplements, Index (Charles Scribner’s Sons, N.Y. 1929)
McPherson, James M. Battle Chronicles of the Civil War 1863 and 1865 (MacMillan Publishing Co., N.Y. 1989)
McPherson, James M. Battle Cry of Freedom: The Civil War Era (Oxford History of the United States) (Oxford University Press, N.Y. 1988)
McPherson, James M. and Mort Kunstler Gettysburg: The Paintings of Mort Kunstler (Turner Publishing Co., Atlanta 1993)
Mitchell, Joseph B. The Badge of Gallantry: Recollections of Civil War Congressional Medal of Honor Winners (MacMillan Publishing Co., N.Y. 1968)
Racine, Philip N., ed. “Unspoiled Heart”: The Journal of Charles Mattocks of the 17th Maine (Voices of the Civil War) (University of Tennessee Press, Knoxville 1994)
Reeder, Red The Northern Generals (Duell, Sloan & Pearce, N.Y. 1964)
Shaara, Jeff Gods and Generals (Ballantine Books, N.Y. 1996)
Shaara, Michael The Killer Angels (Ballantine Books, N.Y. 1974)
Trudeau, Noah Andre Bloody Roads South: The Wilderness to Cold Harbor, May-June 1864 (Little, Brown, N.Y. 1989)
Trudeau, Noah Andre Out of the Storm: The End of the Civil War, April-June 1865 (Little, Brown, N.Y. 1994)
Trulock, Alice Rains In the Hands of Providence: Joshua L. Chamberlain and the American Civil War (University of North Carolina Press, 1992)
Wallace, Willard M. Soul of the Lion: A Biography of General Joshua L. Chamberlain (Stan Clark Military Books, Gettysburg 1960; repr. 1991)
Ward, Geoffrey C., with Ric Burns and Ken Burns The Civil War: An Illustrated History (Alfred A. Knopf, N.Y. 1994)
Wheeler, Richard Witness to Gettysburg (Stackpole Military History Series) (Harper & Row Publishers, N.Y. 1987)
Wood, W.B. and Major Edmonds Military History of the Civil War, 1861-1865 (Jack Russell Publishers, N.Y. 1959)
Bibliography – Periodicals and Other Media
Hansen, Liane “Jeff Shaara Discusses Writing Gods and Generals” Weekend Edition/Sunday (National Public Radio transcript, June 30, 1996)
Hennessy, Thomas A. “One Hundred Years Ago – Gettysburg,” Pittsburgh Post-Gazette, p. 35 (July 1, 1963)
Unknown author, “Survivor,” American Heritage (December 1978)
Ward, Geoffrey C. “Hero of the 20th,” American Heritage (November 1992)
What is ReGap?
ReGap is an acronym for “Reducing the Educational Gap for migrants and refugees in EU countries with highly relevant e-learning resources offering strong social belonging”. The ReGap project is co-funded by the Erasmus+ programme and aims to extend high-quality, culturally sensitive, open-access e-learning resources to adult migrants and refugees of both genders in EU countries. Building on findings from the USA and our earlier Erasmus+ project (Advenus), we know that reducing the education gap for migrants and refugees in European countries will improve their opportunities for employment and social belonging.
The group we intend to reach is not in any way uniform, and it has proven difficult to reach everyone, which is why we are suggesting some changes to teaching methods. This requires online learning activities that are culturally and gender sensitive and that support in-person learning activities in the context of each European country. The ReGap project will continue to use online learning as a basis, and this will be uniform for all European countries. However, there are differences within Europe that we need to address. In-person learning in the individual countries, with information specific to the country in which learners are staying, will do this. This makes the project far-reaching while keeping the information accurate and useful.
We also know that a significant barrier to engagement in online learning is that online resources are not deemed relevant and fail to engage with learners’ needs for knowledge about employment, health, social security, schooling and justice in the new country. To counteract this barrier, we suggest implementing the findings from a recent Stanford study. That research demonstrated that social-identity threat, the fear of being seen as less competent because of one’s social identity, can impair a person’s working memory and academic performance and lead participants to drop out of MOOCs. The researchers removed this threat by creating a sense of belonging with an online activity at the beginning of the course, with highly successful results. We wish to include their activities and, in addition, to run online groups for discussion and participation at set times to further increase the sense of belonging.
The ReGap project will develop online learning activities that enhance the contextual knowledge of migrants and refugees across key topics and their sense of social belonging. The ReGap project is a follow-up of the LIBE project and the Advenus project.
ReGap learning materials
ReGap consists of six courses plus an introduction. Each course consists of an online course as well as activities you can do “face to face”, for instance in a classroom. Below, you will find a short description of the courses and links to the material developed for the face-to-face activities. To access the online courses, please go to the Advenus-ReGap website. The courses are available in five languages: English, Portuguese, Italian, Macedonian and Norwegian. All kinds of feedback are warmly welcome; please contact us here. Good luck!
Are you an educator or instructor? Please click here for a Booklet for Educators and a Demo Video.
Introduction
The Introduction course gives an overview of the different courses with examples, and explains how to use and navigate the online platform. You will also find special information for educators.
Link to the Introduction course (please click on preferred language): English – Italian – Portuguese – Macedonian – Norwegian.

Employment

The course on Employment aims at presenting the different kinds of contracts, building the lexicon related to different kinds of jobs, explaining the sections and key words of a CV/job ad, presenting people's rights at the workplace, explaining where to go to find a job (job centres/agencies), showing how to use public transport/cars to get to work, building the lexicon and communicative expressions needed to ask for information on the street, and explaining the norms on driving licences. The course is made up of six sections:
Introductory video on employment
Finding a job
Different kinds of employment contracts
Losing your job
Going to work
Do you remember?
Link to the Employment course (please click on preferred language): English – Italian – Portuguese – Macedonian – Norwegian.

Face-to-face activities

Activity: Employment – Finding a job
Section: Finding a job
Target group: Migrants and refugees in search of a job
Objective: Enhancing the lexicon on finding a job; communicative expressions in formal settings related to finding a job (describing your own skills, understanding the requested documents, etc.)
Tools: Videos on employment centres in the host country's language (e.g. for Italy, Samira at the employment centre); or photos showing interviews at the employment centre of your country; or leaflets about the services provided by the employment centre of your country. Examples (for Italy): Video (https://www.youtube.com/watch?v=VsEBjpQTlyY); Leaflet (http://romalabor.cittametropolitanaroma.gov.it/sites/default/files/Pieghevole%2007.06.18.pdf); Cards to be distributed among students (see tables 1-2-3 below)
Duration: 45 minutes.
Background: Before showing your students the tool you have selected, briefly recall the information about the CV sections and how to find a job in your country.
Step 1 – brainstorming: Both the video and the photo/leaflet are meant as tools for brainstorming and for eliciting students' knowledge of the topic (lexicon about employment, communicative expressions to be used in an interview, but also possible previous experiences in an employment centre of the host country). If you choose the video, ask your students to watch it at least twice (once for global comprehension and a second time to take notes on the most relevant information). If you instead choose the photo/leaflet, let the students work in pairs and discuss the image/information included in the selected tool.
Step 2 – wrap-up in plenary: In plenary, discuss with your students the meaning of the selected tool and their previous knowledge of the topic. This phase is crucial in order to understand how to set up the role-play activity, according to the students' language level and their possible experiences in an employment centre. If students have low language skills and/or have never experienced an interview at the employment centre, present the most important words related to the topic and explain how the employment centre works. You can prepare printed versions of the words and dedicate a couple of minutes to each, reading the word aloud or using it in a sentence. See table 1.
Step 3 – setting up the role-play activity: Divide your students into pairs, possibly with the same level of language knowledge. One student will be the interviewer at the employment centre; the other will be a person who is looking for a job.
For students with low language skills: provide them with some basic guidelines on how to carry out the role-play (e.g. repeat the useful words, add a list of useful expressions, indicate briefly how to interact with each other, and list the basic issues that have to be addressed in the role-play, such as the personal data and main skills/qualifications of the interviewee). See table 2. For students with high language skills: students will develop the role-play autonomously, but they will nevertheless be asked to plan it in a structured way (e.g. starting with the interviewee's personal data and the documents necessary to apply for jobs, then talking about previous jobs, skills acquired during education/previous jobs, etc.). See table 3.
Step 4 – role-play plenary session: Ask each pair of students to perform their role-play in plenary (students should not read from notes!). Take notes about good communicative strategies/expressions and about relevant mistakes, but please do not interrupt or show a judgmental facial expression. Encourage weaker students to have a go first, followed by the others. Give them time to start (count silently to ten before talking again).
Step 5 – feedback: Provide your students with general feedback on the strengths and weaknesses that you observed during the role-play; you can do this orally or using the blackboard. Do not focus on the student but on the mistake. Explain why there is a problem and how to solve it. If possible, ask for the collaboration of other students.
Table 1 [pdf file] Table 2 [pdf file] Table 3 [pdf file]

Health from cradle to grave

The overall aim of the Health course is to make the participants reflect upon the topic of health: what health is, how we can influence it, and what to do when your health fails you. Participants are introduced to the different kinds of settings where they can get help in the country they reside in, such as hospitals, GPs and emergency rooms. There are references to national health care programmes, family health and vaccinations, as well as to the different ways in which participants can maintain their health and prevent illness, including mental health, diet, activities and social wellbeing, approaching the topic of health in a holistic way. The course consists of five sections:
Introduction
Where to go for help
Family health
Stay healthy
Do you remember?
Link to the Health course (please click on preferred language): English – Italian – Portuguese – Macedonian – Norwegian.

Face-to-face activities

Activity: Learning and practicing health-related keywords
In this activity, the learners learn and practice keywords from the Health Glossary by picking and explaining words from a box in various ways. They can work in pairs or in one (bigger) group.
Preparations
Print the table with words (PDF document below) in as many copies as needed (single-page print), use scissors to cut out every row (word and explanation), and fold each piece of paper along the dotted line. You now have the health word on one side and the explanation on the other. Please add other relevant words if suitable (there are some empty rows as well). Put the words in a box/bowl/hat or similar.
Activity
Difficulty Level 1 (easiest)
Remove the most difficult words from the box.
One of the learners picks a word from the box and reads it out loud.
The other tries to explain the meaning of the word. If needed, the person who has the keyword and explanation can help.
Repeat points 2 and 3 with a new participant.
Difficulty Level 2 (more difficult)
One of the learners picks a word from the box and reads the explanation.
The other tries to guess or find the correct keyword. If needed, the person who has the keyword and explanation can help.
Repeat points 2 and 3 with a new participant.
This activity can be varied in a number of ways, for instance by including role play.
Table with words [PDF document]

Social Security and Welfare

The aim of the course is to develop, consolidate and secure knowledge in the area of social security and welfare, essential for the inclusion of migrants, refugees and asylum seekers in society. The course will develop and expand migrants', refugees' and asylum seekers' knowledge and skills in the area of social security and welfare. Using visual media and examples in the resources and delivery will help break the language barrier in acquiring knowledge through e-learning. The outcomes and content are carefully selected based on the key needs of the target group with respect to the knowledge and skills needed for successful integration in the host countries. The course consists of five sections:
Introduction
What is social security and welfare
Difference between refugees, migrants and asylum seekers
Social security and welfare for refugees, migrants and asylum seekers
Do you remember?
Link to the Social Security and Welfare course (please click on preferred language): English – Italian – Portuguese – Macedonian – Norwegian.

Face-to-face activities

Activity: What do you know about social security and welfare?
Participants will be welcomed to the activity, with a brief introduction of the participants and the objective of the activity. 10 minutes
Firstly, they will have a relaxed conversation with the facilitator(s) as a warm-up. They will be asked what they know about social security and welfare programmes. 10 minutes
After that, participants will be introduced to the topic of social security and welfare by the facilitator ...

Partners & Advisory board

Inland Norway University of Applied Sciences

Inland Norway University of Applied Sciences (INN University) was established on 01.01.2017 as a merger between the former Lillehammer University College (first established in 1970) and Hedmark University College (itself established through a merger in 1994). The merger was approved by the Cabinet of Norway in 2016 and took effect from 1 January 2017. INN University operates on six campuses in south-eastern Norway and has approximately 13,000 students and 950 employees. The Centre for Lifelong Learning (CLL) has 20 employees. CLL offers open courses and study programmes, commissioned teaching, conferences and seminars. CLL includes a production unit which makes learning materials (video, audiovisual presentations, web pages, games and other interactive productions, etc.) and e-learning solutions (including LMS and MOOCs). The centre also undertakes research and evaluation in these areas.

Associate Professor Brit Svoen is project manager at the Centre for Lifelong Learning, INN University, and a member of the research group “Media, Technology and Lifelong Learning”. Her background is in Informatics and Media Education, and she has extensive experience in developing audiovisual learning resources, as well as online and campus-based programmes. Before Svoen joined INN University (the former Lillehammer University College), she worked for 10 years in the business sector with ICT and multimedia and for 5 years as an assistant professor.
Brit Svoen is the coordinator for the ReGap research project, and was also coordinator for the previous Advenus project and Lillehammer University College's project manager for the LIBE project.

Professor Stephen Dobson is a guest professor at the Centre for Lifelong Learning, INN University, and Dean of Education at Victoria University of Wellington, New Zealand. Dobson was born in Zambia (1963), grew up in England and has previously lived for many years in Norway. Prior to entering higher education he worked for thirteen years with refugees as a community worker. His research and teaching interests include assessment, professional development, refugee studies, bildung, inclusion and classroom studies. He has published one collection of poetry. Dobson is fluent in Scandinavian languages and a member of the Teacher Education Expert Standing Committee for the Australian Institute for Teaching and School Leadership (AITSL). Stephen Dobson is the Chief Scientific Officer for the ReGap project, as he also was for the Advenus project.

Linda Tangen Bjørge has been a Higher Executive Officer at INN University since 2016. She has a degree in Nursing, with further education and a background in Emergency Medicine, and completed an MSc in International Environmental Health at Leeds Beckett University, England, in 2002. She has carried out international field work in disaster areas with the World Food Program and has worked with refugees as an Information Officer and Acting CEO. She will contribute through her experience with the target group, as well as her experience from the research team in the previous Erasmus+ project “Advenus” at CLL.

Lars Teppan Johansen is a project manager with a focus on graphic design, web, video, interactive and rich media. Lars began working at INN University in October 2007 and has a diverse background in ICT, developing websites, video, photo, audio, animations, print and interactive media. Lars is keen to adopt new technology into educational models.

John Torstad is office manager at the Centre for Lifelong Learning, INN University. His background is in the tourism and travel industry and in developing and coordinating new courses at LUC for adults needing further education. John has more than 20 years of experience as a project manager and office manager at the Centre for Lifelong Learning. His responsibility in this project is accounting and reporting.

Yngve Nordkvelle, Professor in Education, has been a professor at INN University since 1999. He has published on international education, distance education and media education. His most recent project has been to edit an anthology on international perspectives on Digital Storytelling. Nordkvelle is the chief editor of Seminar.net, an international e-journal about media, technology and lifelong learning, and is the former editor of the Norwegian journal for Higher Education (UNIPED). He has led several expert committees, has served as a convenor of Network 6 in EERA, and has been a visiting scholar at several prestigious international universities. Nordkvelle will in particular contribute to this project with his expertise in the design and production of learning resources.

LUMSA University, Italy

LUMSA University was founded in Rome in 1939 and is characterized by its openness to the idea of universal human citizenship.
LUMSA is one of the most important non-state universities of central Italy, with about 9000 students and 800 teachers and professors; it has three Faculties situated in neighbouring locations, and other branches operating in Palermo and Taranto. The university is located in the historic centre of Rome, in one of the most beautiful and historically rich areas of the whole city. It thus provides its students with the opportunity to avail themselves of the advantages that Rome has to offer. In particular, LUMSA strives to promote an overall education of the person and, for this reason, the university devotes special care to its students and their professional and human education, through constant guidance and tutoring services and procedures designed to give full expression to their right to be engaged in study. LUMSA University offers four main subject areas of teaching and research activities: Economics, Humanities, Languages and Law.

Professor Gabriella Agrusti, PhD, teaches Multimedia Learning, Educational Research Methods and Assessment in Education at LUMSA University (Italy). She is a member of the joint management committee that runs the IEA-ICCS 2016 study on civic and citizenship education in 28 countries around the world. She was senior scientific advisor for the coordination of the Lifelong Learning Programme KA3 – ICT Multilateral projects, with the LIBE – Supporting Lifelong Learning with Inquiry-based Education project (24 months). The LIBE project aimed to design and develop Open Educational Resources for the reconstruction of transversal basic skills (literacy, numeracy, problem solving) in young low educational achievers in Europe. Recent publications include:
G. Agrusti, Marta Pinto, João Caramelo, Susana Coimbra, Stephen Dobson, Brit Svoen, Alex Pulovassilis, George Magoulas, Bernard Veldkamp, Maaike Heitnik, Francesco Agrusti, LIBE e-booklet for educators and teachers, http://libeproject.it/wp-content/uploads/2015/11/LIBE-eBooklet.pdf, DOI: 10.13140/RG.2.1.2383.592512/2015
G. Agrusti, F. Corradi, “Teachers’ perceptions and conceptualizations of low educational achievers: A self-fulfilling prophecy of disengagement for future NEETs”, The Qualitative Report, 20(8), 2015, pp. 1313-1328 (ISSN 1052-0147). Retrieved from http://www.nova.edu/ssss/QR/QR20/8/agrusti8.pdf

Valeria Damiani has a PhD in Education, is currently a research fellow at Roma Tre University and is a member of the research group for the Erasmus+ projects at LUMSA University, Rome. Her research interests include citizenship education, education for global citizenship, teaching and assessing key/transversal competences, and e-learning. Recent publications include: Cittadinanza e identità. Educazione alla cittadinanza globale e identità multiple in studenti di terza media (2016); Large-scale assessments and educational policies in Italy (2016); Searching for quality in open educational resources (OERs): an Italian case study (2016, with Gabriella Agrusti).

Elisa Muscillo is a psychologist, psychotherapist, expert in forensic psychiatry and child development, and a PhD student in educational sciences. She has always been interested in educational psychology, particularly in risk factors and linguistic skills.

Vincenzo Schirripa is a research fellow at LUMSA University (Italy). He teaches History of Childhood and Educational Institutions and Children’s Literature. His early studies concern the history of the scout movement, non-violent education and pacifism in contemporary Italy.
He also has experience as a freelance trainer of social workers and teachers on citizenship education and social issues. Relevant publications include:
2006. Schirripa, Giovani sulla frontiera. Guide e scout cattolici nell’Italia repubblicana (1943-1974), Studium, Roma 2006.
2007. Schirripa, Borgo di Dio. La Sicilia di Danilo Dolci (1952-1956), Franco Angeli, Milano 2010.
2008. Baglio, V. Schirripa, “Tutti a Comiso”. La lotta contro gli euromissili in Italia, 1981-1983, in “Italia contemporanea”, 276, 2014, pp. 448-475.
Valeria Caricaterra teaches Intercultural Education at LUMSA University (Italy). Her early studies concern citizenship education, teaching and assessing competences, and special educational needs. Recent publications include:
“Insegnare per competenze e formazione dei docenti” in Rivista Lasalliana, n° 4/2017
“Valutazione e inclusione: ecco perché sono due facce della stessa medaglia” in Tuttoscuola.com, area Cantiere della didattica, 04/04/2017, https://www.tuttoscuola.com/valutazione-inclusione-perche-due-facce-della-stessa-medaglia/
“L’inclusione è una questione di stile … educativo!” in Tuttoscuola.com, area Cantiere della didattica, 22/02/2017, http://www.tuttoscuola.com/linclusione-e-una-questione-di-stile/
“Il territorio a più dimensioni” in “Geograficamente Laboratorio permanente di ricerca-azione per lo sviluppo del pensiero geografico e del rapporto Ricerca-Didattica”, area di raccordo, approfondimenti, 1/10/2015, http://aiig.it/area-di-raccordo/
Giulia Vertecchi, PhD in urban history, has been ...

Coordinator: Brit Svoen, Centre for Lifelong Learning, Inland Norway University of Applied Sciences
Chief Scientific Officer: Stephen Dobson, Centre for Lifelong Learning, Inland Norway University of Applied Sciences
<urn:uuid:a56d08bc-3b8e-44fe-bb23-e00a4629b497>
CC-MAIN-2022-33
https://www.regap-edu.net/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570793.14/warc/CC-MAIN-20220808092125-20220808122125-00699.warc.gz
en
0.929866
4,820
2.78125
3
For centuries, maps have provided critical pieces of information. In the past, cartographers used to visit far-flung places around the globe to be able to accurately place where they were. Nowadays, computers and specialized programs like Geographic Information Mapping System create the maps that you probably rely on every day without even thinking about it. Beyond just looking for the quickest route to work or the nearest local Starbucks, maps help us to understand the world, how its population is dispersed, and even how animals migrate. If you've ever wondered what the world looks like from different perspectives, then start scrolling this amazing collection of maps and infographs. Greenland Compared To South America World maps have a tendency to be deceiving. They make some continents or islands look small, and others look gigantic by comparison. Greenland, when placed in the Northern Hemisphere, always looks massive. Yet, when placed side by side with South America, it is clear that South America could completely swallow the island. It is a whopping 8.2 times bigger. The Actual Population Dispersion Of The United States The United States has a population size of just over 300 million. You would think that Americans would be evenly dispersed throughout the States, but this is not the case. If you look at the 'red' spots on the map, you will notice that most of the population is concentrated on the West and East coasts. The interior is not a hot spot for people to move to. The Midwest is excellent for farming, and if you're interested in leading a quiet life raising livestock and growing crops then that appears to be the place to be. Texas Vs. Africa As the saying goes, "Everything is bigger in Texas," and that is true for things like trucks, belt buckles, and steak. When it comes to the actual area of Texas when compared to the continent of Africa, there is a clear winner. Texas is the size of an African country, but it cannot compete with the entire continent. The continent itself engulfs Texas, as it is 45 times bigger. The continent of Africa is not shown to scale on most maps, which is why a misconception that it is so small exists. Staring up at the Milky Way and watching the stars dance across the sky is a fabulous nighttime activity. Large cities like New York City and Los Angeles make it difficult to see the stars. For the stargazing lovers, the States with the least amount of light pollution are the ones in the middle of the country. Head to North Dakota or Wyoming and enjoy the night sky. If you're on the coasts, you may have trouble seeing the stars twinkle. Welcome To Middle America To understand this map, you need to know what the orange and the red sections mean. Essentially, the population size in red is equal to the population size in orange. That means almost the same amount of people live on the coasts, as almost all of middle America. If you're looking to buy some land and start a homestead, then head to the Midwest. Air Traffic Control Hundreds of airports dot the continental United States, and instead of being in their own individual zones, they belong to this unique set of borders. Instead of being named after the states, they surround, they are named after the major city that they service. There are 21 zones in total and within those, each airport is responsible for the airspace above it. That airspace has a radius of about five miles. 
Understanding Land Use The United States is fortunate to have a wide variety of ecosystems and biomes that allow for different sorts of industries and land uses. Each part of the country has a designated area for sectors like agriculture, defense, and forestry, and the list goes on and on. If you examine this map closely, you will see exactly where flowers for florists come from and where biodiesel is allowed to be produced. Most of the country is dedicated to farming in the center, with heavier industry along the coasts. The Mongolian Empire Throughout history, empires have risen and fallen. In 1279, the Mongolian Empire was the biggest empire by immediate landmass that the world had ever seen, and it retains this title to this day. While empires like the Roman and the Greek were also large, they were dispersed and not a cohesive landmass like the Mongolian one. Genghis Khan was the mastermind behind much of this empire. The Island Nation Of New Zealand By now, we know that the United States of America is a massive country that dwarfs smaller nations. New Zealand, found in Oceania, has been superimposed over America to once more prove how big it is. New Zealand, the country responsible for the magnificent landscapes in The Lord of the Rings sits comfortably inside the borders of America. In fact, it is 3,558 percent smaller, which puts it around the same size as the United Kingdom. Where The Imperial System Is Used There are two systems of measurement used around the globe: the imperial system and the metric system. Most countries, with a few minor exceptions and one very big one, utilize the metric system of meters, liters, and grams. The United States uses the imperial system which involves miles, yards, and gallons. What's interesting, is that the British are responsible for the introduction of the imperial system, and it continues to be a holdover from the colonial period. Liberia, in West Africa, and Myanmar, in Southeast Asia are the only other countries that use it. Here's Where American Forests Are Forestry is a huge industry, and in order to make it sustainable, trees cut down need to be replaced. The United States is home to almost one-tenth of the globe's forests, which makes it a major contributor of timber on the world market. Forestry is a booming industry, and while heavy logging does contribute to the decline of forests, tree-planting programs have actually helped abate this. City mayors and local councilors consistently institute tree planting programs, which is keeping America green. Railroad To Nowhere Catching the train, and watching the landscape pass by, is one of the most relaxing modes of transportation out there. During the Industrial Revolution in America, companies began to build railroads in the belief that every city and town would be connected. By 1893, this dream came to an end. It was known as 'the Panic', and it was a time of economic turmoil which saw many businesses and families go bankrupt. Due to that, there are railroads to nowhere all over the United States. The Great Flamingo Migration Bright pink flamingoes can be found in the wild, but only in very specific coastal regions in the world. Flamingoes don't enjoy cold climates, which means their migratory patterns bring them to the sunny shores of the Southern Hemisphere. The pink on this map shows that flamingoes enjoy hanging out in South America, Africa, and parts of Asia. Next time you see a flamingo perched on one leg, think about how far they have flown to get there. 
The State Of California California, known for its size, the plethora of celebs, and sunny beaches could be a country itself based on its size. The state itself has a population of over 39 million, which is more than the population of all of Canada. To put this into even more perspective, California is bigger than Italy by about 25 percent. Italy does win in population size though, as it has more than 60 million citizens living within its borders. Highways To Everywhere While railroads might have faded into the past in the United States, highways have not. The United States' Federal Highway Administration (FHWA) is the administrative body responsible for expanding the already impressive amount of miles of highway in existence. To date, there are 157,724 miles of highway running throughout the continental United States. These highways connect each state, town, and county to one another and require a lot of maintenance. Oddly enough, maintenance is not done on the federal level and is instead left to state authorities. That's why some highways are better maintained than others. Australia And The United States Showdown Australia is both an island and a continent, which puts it in a very special position. This island nation, found in the Pacific Ocean, is home to the largest coral reef in the world, and some of the most poisonous plants and spiders on the planet. In terms of size, it gives the United States a run for its money. In terms of size, Australia is 27 percent smaller than the United States, which is illustrated on this map. It may be a smaller country, but it also has a much smaller population to support. America needs all the space it has to support its growing population. The Population Of LA County With such a huge population, some of the counties in the United States are stuffed to the brim with people. Los Angeles County, found in the state of California has a population size that is bigger than the state population for the majority of America. Stuffed within that tiny county are over 10 million people who call the place home! The county population is similar to the state populations in Georgia, Pennsylvania, and New York, just to name a few. Finding decent housing here must cost some serious dollars! Cuba Compared To Hudson Bay The island nation of Cuba has been a popular holiday destination for decades, and with its recent opening to the United States, Americans have begun to flock there. Cuba might look like it is fairly big on a world map, but when compared to Hudson Bay in North America, it looks smaller than a lot of the provinces surrounding the Bay. Cuba is about one-twelfth the size of Hudson Bay, and when it's placed smack dab in the center of it, it looks like it is sitting underneath the water. If Cuba starts sinking like the Maldives, it will become nothing more than a sand bar, but that's not going to happen anytime soon. Book a vacation, and go lounge on its sunny shores. Sizing And Population Density A lot of the states within America are sparsely populated, especially those in middle America and in the Pacific Northwest. It's not because people don't want to live here, but for the Pacific area it is quite expensive, and in the Midwest, there are not as many job opportunities as in the cities. Based on population density, this is how the states would look if they were turned into a map. As you can see, most of the population continues to be along the coasts and along the border with Canada. 
If you feel like moving and getting away from densely populated centers, then this map will help you. Land Mass Of China And America The most populous country in the world is China. The country in East Asia has held this title for many years, and luckily, it does have a large landmass to support the ever-growing population. However, the United States does give it a run for its money in terms of land size. When superimposed over each other, the United States just squeaks by and is a tad bigger in terms of land size. The two populous countries are almost identical in size, but America is 9,833,517 sq km and China is 9,596,961 sq km. If this were a competition, the United States would be the land size winner. Favorite Coffee Chains The United States is full of little bistros and cafes with their own carefully made brews. Some serve up delicious cold brews, while others focus on fancy coffees full of frothy designs on top that baristas dream up. When it comes to coffee chains though, there are three brands that reign supreme. For most of the states, Starbucks, which originated in Seattle, Washington, is the go-to for most coffee lovers. On the Eastern seaboard, Dunkin' Donuts is a contender for second place, with most of the east coast states preferring it. Caribou Coffee gets an honorable mention, as Minnesota is the only state that prefers that chain. India And The United States Behind China, India is the country with the second-largest population. Its borders contain roughly 1 billion people, dispersed all throughout the country. When looking at a world map, India does not appear to be that big, but in reality, it is the seventh biggest country in the world. With India superimposed over the United States, we are once more reminded of the size of America. India may be large and have a massive population, but it is still dwarfed by the North American giant. It is about a third the size of America, which is still very big. World maps need to start doing it a bit more justice and showing its scale. Countries With Lower Populations While we have seen a lot of maps featuring the United States and its population size and density, there are still dozens of other countries that deserve some attention. This map shows all of the countries that have populations less than 100 million, which is quite a few. One of the reasons countries like New Zealand, Canada, and Australia have significantly smaller populations despite large land sizes is the climate itself. The harsh arctic climate in Northern Canada makes it very difficult for people to live there, and the arid desert climate in parts of Australia is the same. That means that countries with pleasant climates tend to have bigger populations. China Meet Russia When you stare at a world map, one country immediately stands out for its size - Russia. By land size, Russia is the largest country in the world. However, like other countries on here, it has areas that are sparsely populated due to the harsh winter climate. To put it into perspective, just look at how big Russia is after taking a look at China superimposed over it. China is not as small as maps make it look in comparison to Russia, although it is still substantially smaller. If you're measuring by land size alone, China is a whopping 44 percent smaller than Russia. All Of Antarctica Most maps show Antarctica as this long-snaking continent that lives at the bottom of the world.
Unless you are looking at a globe, it is hard to conceptualize that Antarctica is actually a rounder continent instead of one that is long and skinny. Antarctica does not have a permanent population the way the other continents do, as it is made up of scientists, researchers, and support staff who work on a rotational basis. To understand just how big Antarctica is, you need to compare it to all of North America. It might not be as big, but if it was a country, Canada would be 40 percent smaller than it. Noise Pollution In America Anybody who has spent time in a city knows that it is all about the hustle and bustle. Cars honk, nightclubs blare music, and sirens go off. This cacophony of sound all contributes to what is known as noise pollution. This map shows lit-up areas, which are all focused in major cities like Los Angeles, New York, and Miami. Each of these cities has massive populations, which in turn leads to more noise. If you enjoy the quiet, then you will need to head to the center of the country. We hear Maine is a good choice. In order for a country to get bigger, it needs to have a steadily growing population. For example, Japan has an aging population and little immigration which has led to a decline in population size. This map shows where all of the spikes or population growth are happening in the United States. A lot of these spikes are concentrated in the major cities, which regularly experience an influx of immigrants. Along the Canadian border, there aren't really any spikes, and that might be due to the fact that there aren't major cities in middle America along the border. Montana And Mongolia Montana and Mongolia share a few similarities. Both are sparsely populated, have huge swathes of land for livestock to roam on, and a thriving agricultural sector. When it comes to land size that is where the two begin to differ, amongst many other aspects. In terms of land size, Montana is 4.1 times smaller. What is odd, is that mapmakers like to make Mongolia look like a massive country in Asia, when that is not actually the case. Road Tripping Through Springfield For some odd reason, Springfield is one of the most popular names for towns in the United States. In 25 states alone, there are a grand total of 33 towns which all have the name of Springfield. Due to this, road trippers have begun to chart the best route that allows them to visit all 33. If you have been looking to embark on an adventure, then heading to one of the many Springfields would make for an excellent road trip. This map shows the route which allows you to visit the majority of the Springfields. There are a few townships that also share the name. Poland Versus Texas The Eastern European country of Poland is known for its scrumptious perogies and beautiful architecture that makes you feel as if you have gone back in time. In yet another edition of Texas versus a country, Poland does not exactly come out on top. Poland itself is large, but Texas is larger. In this instance, everything is bigger in Texas. Although, we are not sure if the food in Texas is better. Poland might win in a cuisine competition. Countries Inside The United States It can be a lot of fun to make your own maps, especially when it comes to seeing just how many countries can fit inside the continental United States. One Redditor decided to see how many random foreign countries from Europe and Asia could fill the continental US' borders. As you can see, a ton of countries can fit inside America. 
Even though America is big, and densely populated in some states, there is still a lot of it that is rather unpopulated. North America At Scale Lots of people go to school to learn about geography or the art of mapmaking through applications like Geographic Information Mapping System. For anyone who has put the time in to learn about size comparisons, they know that maps are not always scaled correctly. The United States tends to be given a spot of prominence and enlarged in maps when really, South America and Africa are much larger. When looking at maps, always keep scale in the back of your mind. Cities With 100,000+ People Cities like New York City, Miami, and Seoul in South Korea have populations that number in the millions. Many other cities don't come anywhere close to sharing a population that size. To put it into perspective, take a look at this map, which features black dots for cities with 100,000+ people in them. You will notice, that there are not that many cities that actually have large populations. In some cases, the majority of the populations live outside of urban centers, which might account for the smaller populations. The Majority Of The Globe's Population By now, we know which countries are the most populous in the world: China and India. They also happen to be on the same continent, which means much of the world's population is located within the Asia-Pacific region. The highlighted circle on this map is where the majority of the world's population resides and it is all in Asia. If you thought the United States was densely populated, then think again. It only accounts for 5 percent of the global population. The Number One Sport Over the centuries different sports have risen in popularity. In ancient Greek times, sports like javelin throwing and running were the norm. Now in the 21st century, there is one sport that continues to dominate around the globe. If you guessed soccer or football as it is known in many other countries, then you are correct! Soccer is played around the globe, and hooligans continue to attend games and watch them on television whenever they are broadcast. Wild Fires Around The Globe Climate change is real, and many countries around the world are experiencing the effects of it. One of the effects is a rise in wildfires, which have begun to sweep through countries like Australia, the United States, and Canada. In an effort to raise awareness, the Earth Observatory Team run by NASA released a map of where all the active fires were burning. This image from 2020 highlights the importance of stemming climate change, and actively doing what we can to slow it down. What Side Of The Road To Drive On Anyone who has traveled outside of North America to the United Kingdom or a Commonwealth country is well acquainted with driving on the left side of the road. Left-side driving stems from the past when automobiles were first invented. In the United Kingdom, they have continued to drive on the left side, and their former colonies that are part of the Commonwealth have continued with this as well. When in doubt about which side to drive on, always look at traffic, and refer to this handy map. Populations Across The Map While landmass helps to determine the population size that it can support, there are a number of other factors that come into play. For example, populations tend to grow in areas that have a climate that isn't harsh, access to resources like fresh water and arable land, and ports that can be used for trade. 
Looking at this map, which shows how the world's population is dispersed, you can see that these factors are important. Much of the population is concentrated along the coasts of continents, while regions that experience arctic temperatures are sparsely populated. Nobody enjoys being cold! A World Map To Scale With so many maps not drawn to scale, it is time to look at one that accurately represents what countries' landmasses actually look like. This map, drawn with the Hobo-Dyer Equal Area Projection, shows just how big continents and countries are in relation to one another. Africa is much bigger than it is normally shown, which is important as it is becoming an economic powerhouse and is undergoing massive population growth. The United States is also much smaller than it normally appears, which is important because while it is economically important, it is not the largest by landmass. Just How Big Is Australia? The land down under, or Australia, has a total area of 7.7 million square kilometers, which makes it one of the bigger countries and continents on the map. It can be hard to put into perspective just how big that is without comparing it to other countries. This map shows how countries like Thailand, South Africa, and many more comfortably fit within its borders. There is even room to spare for a few smaller countries, like Luxembourg, if we wanted to add a few more. If you want to live in a country with a hot climate, a ton of wildlife, and very few people, then Australia is the place to be. Understanding Mercator's Projection Mercator's Projection is by no means a new way of designing maps. In 1569, the cartographer Gerardus Mercator introduced his mode of mapping, which projects the globe onto a cylinder wrapped around the equator and draws meridians and parallels as straight, perpendicular lines. This method of mapping continues to be used, especially in navigation charts. The problem with this method is that it distorts size: to keep compass bearings straight, it stretches landmasses more and more as you move away from the equator, which is why high-latitude places such as Greenland look far bigger than they really are (see the short sketch at the end of this article). How Big Is Japan? East Asia is primarily composed of South Korea, China, and Japan. Japan tends to be shown as a small island nation off the east coast of Asia, but that is not exactly accurate. Mercator's Projection makes Japan look small next to the higher-latitude landmasses it is usually compared with, but this should put it into perspective. Japan, when compared to the east coast of the United States, is actually quite large. On top of that, it contains a population of over 126 million people. That is about a third of the United States' population, which is impressive considering how much smaller Japan is in terms of landmass. Where Canadians Live There are a lot of stereotypes surrounding Canada. Canadians do not live in igloos, ride moose to school, or drizzle maple syrup on top of everything. That being said, most Canadians do tend to live close to the border with the United States or along the coasts. This red line showcases where most of the Canadian population lives. It is concentrated in the east, in the provinces of Ontario and Quebec. Part of this is due to the fact that Toronto and Montreal are two of the biggest cities in Canada and are in these provinces. With 20 million people living in that area, the rest of Canada is largely untouched. Migratory Routes In North America While humans may choose not to live in Northern Canada, many animals continue to roam around the landscape.
In the Northwest Territories, Nunavut, and the Yukon, herds of caribou follow strict migratory paths each year. That is true for many other animals from birds like Canadian geese to the great big water mammals such as beluga whales. Every animal that migrates follows a route, typically from north to south, so that it can enjoy warmer temperatures in the frigid winter months. This map primarily focused on mammals that migrate, but insects like monarch butterflies are also known to follow specific paths. Countries That Share Borders By examining a map, it is obvious what countries share borders. The Topologist's Map of the World shares a slightly different view though. Instead of staring at a traditional map of the world, the Topologist's Map places countries that have borders that touch by continent. Each continent is separated by a blue line, and the small curved rectangles that circle the continents are islands that don't border them directly but are in the same vicinity. Conceptually, this map is very easy to understand, although it does not accurately show landmass to scale which is a limitation. This is just another way of looking at the world The Rivers Of America Countries which are rich in water are able to irrigate land used for farming, provide clean water to their populations, and even export water as a commodity. America is fortunate to be home to a whopping 250,000 rivers, which traverse over 3.5 million miles of land. Everywhere you turn, it feels like there is a river or a stream trickling by. With so many rivers in the continental United States, it should come as no surprise that there is one over 2,540 miles long. The Missouri River wins the title for the river spanning the greatest distance in America. The Mighty Mississippi River The Missouri River might have won the title for the longest river, but it is not the river that contains the largest volume of water. That title goes to the Mississippi River. The red section on this map shows all of the rivers that feed into the Mississippi. It is estimated that around 7,000 rivers that traverse the United States eventually lead into the Mississippi River and that is what turns it into the raging river we know and love. You will see that the Mississippi River eventually flows into the ocean, and straight into the Gulf of Mexico. Reaching 1 Billion With over 7 billion people on this planet, it is hard to conceptualize that most of that population is situated in Asia, Africa, and South America. To detail this, take a look at this color-coded map. Each color shows how the world's populations combine to reach 1 billion people in specific regions. For example, it would take North America, South America, and Australia to reach 1 billion people. India and China have one billion people apiece, with Africa, Europe combined with the Middle East, parts of Asia, and Oceania making up the rest. That is a lot of people for Mother Earth to support. To The Moon And Back It's finally time to go off-planet and take a look at the moon. Space exploration is still in its infancy, but slowly but surely we are getting closer to exploring other planets and what the universe has to offer. The moon is something every person on this planet sees, but how big is it really? With the United States of America superimposed over the surface of the moon, you can see that it takes up almost the entire side of it. If Asia had been superimposed, it would have covered almost the entire landmass of the moon. 
The moon is pretty small when you think about it in those terms.
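The Mercator distortion described under "Understanding Mercator's Projection", and the size illusions it creates (Greenland versus South America, for instance), can be made concrete with a few lines of arithmetic. The sketch below is purely illustrative and not tied to any mapping library; the helper names, the rounded area figures and the rough "centre" latitudes for Greenland and South America are assumptions chosen only for this example.

```python
import math

def mercator_y(lat_deg: float) -> float:
    """Northing on a Mercator map of a unit-radius globe for a given latitude."""
    lat = math.radians(lat_deg)
    return math.log(math.tan(math.pi / 4 + lat / 2))

def area_inflation(lat_deg: float) -> float:
    """Approximate factor by which Mercator inflates apparent area at a latitude."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

# Spacing between parallels grows toward the poles: compare 0-10 degrees with 60-70 degrees.
near_equator = mercator_y(10) - mercator_y(0)
near_pole = mercator_y(70) - mercator_y(60)
print(f"10 deg of latitude at the equator: {near_equator:.3f} map units")
print(f"10 deg of latitude at 60-70N:      {near_pole:.3f} map units "
      f"({near_pole / near_equator:.1f}x taller)")

# Greenland (centred near 72N) versus South America (centred near 15S).
for place, lat in [("Greenland", 72.0), ("South America", -15.0)]:
    print(f"{place}: apparent area inflated about {area_inflation(lat):.0f}x on a Mercator map")

# True size ratio, independent of any projection (rounded areas in millions of km^2).
areas_mkm2 = {"Greenland": 2.17, "South America": 17.84}
print(f"South America is {areas_mkm2['South America'] / areas_mkm2['Greenland']:.1f}x "
      f"larger than Greenland in reality")
```

Run as a plain Python script, this reproduces the roughly 8x true size gap quoted earlier while showing why Greenland can still look comparable to South America on a Mercator map.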
<urn:uuid:fc9d41a1-414d-4dbe-b179-2c8268c55a75>
CC-MAIN-2022-33
https://www.travlerz.com/en/40-maps-that-show-how-the-world-really-is?utm_source=network&utm_medium=rec&utm_campaign=kueez
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573540.20/warc/CC-MAIN-20220819005802-20220819035802-00499.warc.gz
en
0.959951
5,876
3.09375
3
Marine Drugs: Implication and Future Studies
R. Arthur James

Natural product compounds are the source of numerous therapeutic agents. Recent progress in discovering drugs from natural product sources has resulted in compounds that are being developed to treat cancer, resistant bacteria and viruses, and immunosuppressive disorders. Many of these compounds were discovered by applying recent advances in understanding the genetics of secondary metabolism in microorganisms, exploring the marine environment and applying new screening technologies. Microbes have made a phenomenal and unique contribution to the health and well-being of people throughout the world. In addition to producing many primary metabolites, such as amino acids, vitamins and nucleotides, they are capable of making secondary metabolites, which constitute half of the pharmaceuticals on the market today (and provide agriculture with many essential products). A growing number of marine microorganisms are the sources of novel and potentially life-saving bioactive secondary metabolites. Here, we discuss some of these novel antibacterial, antiviral and anticancer compounds isolated from marine-derived microbes, their possible roles in disease eradication, and the commercial exploitation of these compounds for drug development using many approaches.

Received: June 03, 2010; Accepted: July 24, 2010; Published: November 16, 2010

The oceans cover over 70% of the earth's surface and contain an extraordinary diversity of life. Our interest in understanding the function of marine ecosystems has accelerated in recent years with growing recognition of their importance in human life. Marine microbes have defined the chemistry of the oceans and atmosphere over evolutionary time. Thousands of different species of bacteria, fungi and viruses exist in marine ecosystems, comprising complex microbial food webs. These microorganisms play highly diverse ecological and biochemical roles in the most varied ecosystems, and each drop of water taken from the ocean will contain microbial species unknown to humans in a 9:1 ratio (Colwell, 2002). The ocean represents a rich resource for ever more novel compounds with great potential as pharmaceuticals, nutritional supplements, cosmetics, agrichemicals and enzymes, where each of these marine bioproducts has a strong potential market value (Faulkner, 2002). Many structurally and pharmacologically important substances with novel antimicrobial, antitumor and anti-inflammatory properties have been isolated (Bhadury and Wright, 2004). In many cases, natural products provide compounds as clinical/marketed drugs, or as biochemical tools that demonstrate the role of specific pathways in disease and the potential for finding drugs. In the areas of cancer and infectious disease, 60 and 75%, respectively, of new drugs originate from natural sources. Raja et al. (2010) reported that new antibiotics active against resistant bacteria are required. Bacteria have lived on earth for several billion years. During this time, they have encountered a range of naturally occurring antibiotics; to survive, bacteria developed antibiotic resistance mechanisms (Hoskeri et al., 2010). Natural products with industrial/human applications can be produced from the primary or secondary metabolism of living organisms such as microorganisms. Among them, 50-60% are produced by plants (alkaloids, flavonoids, terpenoids, steroids, carbohydrates, etc.) and 5% have a microbial origin.
Furthermore, of the 22,500 biologically active compounds that have been obtained so far from microbes, 45% are produced by actinomycetes, 38% by fungi and 17% by unicellular bacteria (Berdy, 2005). The increasing role of microorganisms in the production of antibiotics and other drugs for the treatment of serious diseases has been dramatic. However, the development of resistance in microbes and tumor cells has become a major problem and requires much research effort to combat. Several reviews explore the development of marine compounds as drugs. There have been reviews on aspects of the chemistry and bioactivity of compounds from microbes, soft corals, cyanobacteria and microalgae, cyanobacteria and macroalgae, sponges, echinoderms, ascidians, fish, the sponge genus Halichondria and terpenes from the soft coral genus Sinularia, and specific types of bioactivity associated with marine natural products have been reviewed in articles on anticancer drugs, agents for treating tuberculosis, malaria, osteoporosis and Alzheimer's disease, treatments for neurological disorders, anti-inflammatory agents and anti-HIV compounds (Blunt et al., 2007). Secondary metabolites, especially drugs, have exerted a major impact on the control of infectious diseases and other medical conditions and on the development of the pharmaceutical industry. Their use has contributed to an increase in the average life expectancy in the USA, which rose from 47 years in 1900 to 74 years (in men) and 80 years (in women) in 2000 (Lederberg, 2000). As a highly promising source of new natural products that have not been observed in terrestrial microorganisms, marine bacteria are attracting growing interest for the discovery of bioactive substances with new types of structure. The achievements have been well reviewed, and many new antibiotics have been obtained from microorganisms. With drug-resistant strains of microbes appearing more frequently, the biopharmaceutical industry has to move towards novel molecules in its development of new drugs. The oceans provide us with an opportunity to discover many new compounds, with over 13,000 molecules described already and 3,000 of them having active properties. Marine organisms have long been recognized as a source of novel metabolites with applications in human disease therapy.

HISTORY OF ANTIBIOTICS

Back in 1928, Alexander Fleming began the microbial drug era when he discovered, in a Petri dish seeded with Staphylococcus aureus, that a compound produced by a mold killed the bacteria. The mold, identified as Penicillium notatum, produced an active agent that was named penicillin. Later, penicillin was isolated as a yellow powder and used as a potent antibacterial compound during World War II. By using Fleming's method, other naturally occurring substances, such as chloramphenicol and streptomycin, were isolated. Naturally occurring antibiotics are produced by fermentation, an old technique that can be traced back almost 8000 years, initially for beverage and food production (Balaban and DellAcqua, 2005).

REASONS FOR DEVELOPING NEW ANTIBIOTICS FROM MARINE SOURCES

The WHO has predicted that between 2000 and 2020, nearly 1 billion people will become infected with Mycobacterium tuberculosis (TB). Sexually transmitted diseases have also increased during these decades, especially in young people (aged 15-24 years). HIV/AIDS has infected more than 40 million people in the world.
Together with other diseases such as tuberculosis and malaria, HIV/AIDS accounts for over 300 million illnesses and more than 5 million deaths each year. Additional evolving pathogens include the Ebola virus, which causes the viral hemorrhagic fever syndrome with a resultant mortality rate of 88%. It is estimated that this bacterium causes infection in more than 70,000 patients a year in the USA (Balaban and DellAcqua, 2005). The Infectious Disease Society of America (IDSA) reported in 2004 that in US hospitals alone, around 2 million people acquire bacterial infections each year. Staphylococcus aureus is responsible for half of the hospital-associated infections and takes the lives of approximately 100,000 patients each year in the USA alone (Hancock, 2007). New antibiotics that are active against resistant bacteria are required. The problem is not just antibiotic resistance but also multidrug resistance. In 2004, more than 70% of pathogenic bacteria were estimated to be resistant to at least one of the currently available antibiotics (Cragg and Newman, 2001). Among them, Pseudomonas aeruginosa accounts for almost 80% of these opportunistic infections. These infections represent a serious problem in patients hospitalized with cancer, cystic fibrosis and burns, causing death in 50% of cases. Other infections caused by Pseudomonas species include endocarditis, pneumonia and infections of the urinary tract, central nervous system, wounds, eyes, ears, skin and musculoskeletal system. This bacterium is another example of a naturally multidrug-resistant microorganism (Balaban and DellAcqua, 2005). Several viruses responsible for human epidemics have made a transition from animal hosts to humans and are now transmitted from human to human. In addition, the major viral causes of respiratory infections include respiratory syncytial virus, human parainfluenza viruses 1 and 3, influenza viruses A and B, as well as some adenoviruses. These diseases are highly destructive in economic and social as well as in human terms and cause approximately 17 million deaths per year and innumerable serious illnesses, besides affecting the economic growth, development and prosperity of human societies (Morse, 1997).

METABOLITES FROM MARINE MICROORGANISMS

Marine organisms comprise approximately half of the total biodiversity on the earth, and the marine ecosystem is the greatest source for the discovery of useful therapeutics. Sessile marine invertebrates such as sponges, bryozoans and tunicates, which mostly lack morphological defense structures, have developed the largest number of marine-derived secondary metabolites, including some of the most interesting drug candidates.
[Table 1: Potential antimicrobial/anticancer compounds from marine]
In recent years, a significant number of novel metabolites with potent pharmacological properties have been discovered in marine organisms. Although there are only a few marine-derived products currently on the market, several marine natural products are now in the clinical pipeline, with more undergoing development (Rawat et al., 2006). Similar work has been conducted targeting uncultivable microbes of marine sediments and sponges using metagenomic-based techniques to develop recombinant secondary metabolites (Moreira et al., 2004). Marine bacteria are emerging as an exciting resource for the discovery of new classes of therapeutics. Promising anticancer clinical candidates like salinosporamide A and bryostatin only hint at the incredible wealth of drug leads hidden just beneath the ocean surface.
Salinosporamide A, isolated from a marine bacterium, is currently in several phase I clinical trials for the treatment of drug-resistant multiple myeloma and three other types of cancer (Ahn et al.). Microbes generally lack an active means of defense and have therefore developed chemical warfare to protect themselves from attack. In addition, many invertebrates (including sponges, tunicates, bivalves, etc.) are filter feeders, resulting in high concentrations of marine viruses and bacteria in their systems. For their survival, potent antivirals and antibacterials had to be developed to combat opportunistic infectious organisms (Table 1). It is hoped that many of these chemicals can serve as the basis for future generations of antimicrobials usable in humans.
MARINE NATURAL PRODUCTS AS THE NEW SOURCE OF LEAD COMPOUNDS
In the past, natural products have been a strong source of novel drug products, or have served as models for drugs that made it to market (Cragg et al., 2006). The strong showing of drug discovery from natural products can be attributed to their diverse structures, intricate carbon skeletons and the ease with which human bodies accept these molecules with minimal manipulation. The current trend within drug development is to find new precursor molecules among synthetic molecules, as this is more cost-effective: the techniques used with natural products involve complex screening procedures that are time-consuming and expensive. In addition, a biological response from the mixture containing the compound may not be attributable to the chemical entity in question but to another substance within the extract interfering with the screening procedure. Modern pharmaceutical shelves house a variety of compounds; however, only a limited number of products on store shelves are derived from a marine source. Historically, the first two compounds to reach the market from a marine source were Ara-A (Vidarabine®, Vidarabin®, Thilo®) and Ara-C (Cytarabine, Alexan®, Udicil®) (Patrzykat and Douglas, 2003). These compounds were isolated by Bergmann and Feeney (1951) and are still prescribed today. Ara-A is an antiviral compound isolated from a sponge; Ara-C is isolated from the same sponge (Cryptotethya crypta) and has anti-leukemic properties. Natural products are becoming popular again, as marine organisms, both multicellular and single-celled, are an excellent resource in which to find novel chemical entities. Further, many chemical compounds isolated from marine organisms have great potential as antimicrobial or cytotoxic agents, because marine organisms rely on antimicrobial and cytotoxic molecules as their innate defense mechanisms (Fig. 1a-e). Over 3,000 new substances have been identified from marine organisms in the past three decades, giving researchers a large pool of novel molecules from which to develop new compounds (Florida Atlantic University, http://www.science.fau.edu/drugs.htm).
Fig. 1: Chemical structures of metabolites from marine sources: (a) convolutamines (bryozoans) and a non-halogenated sesquiterpene (molluscs), (b) 3-heptacosoxypropane-1,2-diol (sponges), (c) kalkitoxin (cyanobacteria), (d) lornemides A (actinomycetes) and aigialomycin D (fungi) and (e) IB-96212 (bacteria)
For example, if properly developed, marine bacteria could provide the drugs needed to sustain us for the next 100 years in our battle against drug-resistant infectious diseases.
Over the past century, the therapeutic use of bacterial natural products such as actinomycin D, daunorubicin, mitomycin, tetracycline and vancomycin has had a profound impact on human health, saving millions of lives. Between 1997 and 2008, 659 marine bacterial compounds were described. Marine fungi have also proved to be a rich source of bioactive natural products. Most of these microorganisms grow in unique and extreme habitats and therefore have the capability to produce unique and unusual secondary metabolites. To date, more than 272 new compounds have been isolated from marine fungi, and the number continues to increase (Tziveleka et al., 2003). According to the World Health Organization, 100 million people in developing countries are affected by infectious diseases (Lee et al., 2009).
NEW DRUGS FROM ENGINEERED MICROORGANISMS
Many chemicals and biological molecules that have been used as drugs are found in microorganisms, plants and animals. Because these drugs are synthesized in only minute amounts, it is difficult to obtain them in suitable quantities; this is where metabolic engineering comes into play. For the sequencing of genomes from cultivable microorganisms, chromosomal DNA is used to generate genomic libraries: large genomic DNA fragments are isolated directly from the sample and cloned into suitable host-vector systems (Fig. 2). The establishment of comprehensive gene libraries attempts to cover all genome sequences in a sample, to gather as much information as possible on the biosynthetic machinery of a microflora. Recent advances in our understanding of the metabolic pathways for the synthesis of these drugs, together with the development of various genetic and analytical tools, have enabled more systematic and rigorous engineering of microorganisms for enhanced drug production. The much more rapid growth of microbial cells compared with higher organisms is another obvious advantage. Furthermore, metabolic engineering can be performed more easily in microorganisms than in mammalian and plant cells, which allows modification of metabolic pathways for the production of structurally more diverse analogs with potent biological activities, as in the cases of polyketides and non-ribosomal peptides (Minami et al., 2008).
Fig. 2: Common schematic representation of rDNA (recombinant DNA) preparation from marine environmental (microorganism) samples (Thakur et al., 2008)
Although production of drugs in their final forms may be most desirable, biosynthesis of drug precursors is also favored experimentally and economically in several cases. The high impact of microbial metabolic engineering on the biosynthesis of drug precursors is well illustrated by recent developments in microbial precursor production. Various drug molecules can be produced by employing metabolically engineered S. cerevisiae carrying appropriate heterologous genes and using the same precursor synthesized by engineered E. coli. This is a good example of what metabolic engineering can do for the design and production of drug precursors that are difficult to obtain otherwise. The biosynthetic capacity of marine Verrucosispora and Salinospora strains demonstrates that marine actinomycetes represent a new and potent source of bioactive secondary metabolites (De Vries and Beart, 1995). Shizuya et al. (1992) developed the Bacterial Artificial Chromosome (BAC) cloning system for mapping and analysis of complex genomes.
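Before turning to the advantages of the BAC system, it helps to quantify the goal of "covering all genome sequences" mentioned above. The number of clones required for a library is commonly estimated with the Clarke-Carbon formula, N = ln(1 - P)/ln(1 - f), where f is the fraction of the genome carried by a single insert and P is the desired probability that any given locus is represented. The sketch below is only a minimal illustration of that arithmetic; the 5 Mb genome and 100 kb BAC-sized insert are assumed values, not figures taken from this article.

    import math

    def clones_needed(genome_size_bp, insert_size_bp, prob=0.99):
        # Clarke-Carbon estimate: number of random clones required so that any
        # given genomic locus appears in the library with probability `prob`.
        f = insert_size_bp / genome_size_bp
        return math.ceil(math.log(1.0 - prob) / math.log(1.0 - f))

    # Assumed, illustrative numbers: a 5 Mb bacterial genome and ~100 kb BAC inserts.
    print(clones_needed(5_000_000, 100_000))  # about 228 clones for 99% coverage

With larger inserts, far fewer clones are needed for the same coverage, which is one practical reason large-insert vectors such as BACs are attractive for environmental samples.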
Because of its high cloning efficiency and the stable maintenance of inserted DNA, the BAC system not only facilitates the construction of DNA libraries from complex genomic samples but also provides a comprehensive representation of an organism's genome sequence. The ability to clone long stretches of DNA has become an important tool for genome analyses of uncultivated marine microorganisms (Fig. 3). We may be able to incorporate the genes that produce the molecules of interest into plasmids of bacteria that we can easily grow. Drug production by metabolically engineered microorganisms has several advantages over total chemical synthesis or extraction from natural resources.
IDENTIFICATION OF NEW ANTIMICROBIAL COMPOUNDS
Most of the antimicrobial compounds currently on the market were found through whole-cell antimicrobial screening programs. The application of new genome-driven techniques makes more directed, target-based approaches possible. These new screening strategies are directly coupled to potential drug targets that have been identified by genome sequencing projects; such antimicrobial targets are, for example, proteins essential for microbial growth or cell survival. Sequencing the genome of a microorganism identified as a potent producer of bioactive compounds allows identification of the gene clusters involved in the pathways that produce these natural compounds (Fig. 4).
SCREENING FOR NEW METABOLITES
Screening results depend on the quality of the screening material, the collection and storage of organisms, cultivation, extraction, storage of extracts and preparation of test samples. A directed (preselected) screening offers better chances of finding interesting metabolites than an undirected (blind) screening; such a directed screening could be based on ecological observations, on traditional experience, or on a search in novel organisms. The mode and solvent of extraction determine which substances are extracted; solid-phase extraction is a suitable method for automated sample preparation (Schmid et al., 1999). Chemical and physicochemical screening is the search for new chemical structures regardless of their biological activities. The chemical reactivity or physicochemical properties of the separated compounds are analyzed by spectroscopic methods (UV/VIS, MS, NMR) or by detection with special reagents in TLC. The development of HPLC-DAD-MS systems allows the specific detection of single components in a complex mixture (e.g., an extract), regardless of the background of other metabolites. During biological screening, test samples (extracts, fractions, pure compounds and compound libraries) are screened for their bioactivities in vitro and/or in vivo. In the case of extracts, active metabolites can be isolated by bioactivity-guided isolation, and structurally known compounds in active extracts can be recognized by dereplication. In vitro tests can be done at the molecular or the cellular level. An assay that requires careful interpretation but provides a lot of information per assay is ideal for marine natural products research. Tests at the molecular level are based, for example, on receptor systems (identifying compounds that bind to a given receptor) or on enzyme systems (enzyme-catalyzed turnover of a substrate). Tests at the genome, transcriptome or proteome level will become more and more important.
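As an illustration of how such an in vitro, molecular-level readout is commonly quantified, the sketch below fits a four-parameter logistic (Hill) curve to a dilution series to estimate an IC50. The concentrations and percent-inhibition values are invented for illustration, and this particular analysis is an assumption rather than a method prescribed by the article.

    import numpy as np
    from scipy.optimize import curve_fit

    def hill_curve(conc, bottom, top, ic50, slope):
        # Four-parameter logistic dose-response: inhibition rises from `bottom`
        # at low concentration toward `top` at high concentration.
        return bottom + (top - bottom) * conc**slope / (ic50**slope + conc**slope)

    # Hypothetical extract dilution series (ug/mL) and percent inhibition readings.
    conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
    inhibition = np.array([2.0, 5.0, 14.0, 38.0, 71.0, 90.0, 97.0])

    params, _ = curve_fit(hill_curve, conc, inhibition, p0=[0.0, 100.0, 5.0, 1.0])
    print("estimated IC50 (ug/mL):", round(params[2], 2))

The same fitting approach applies whether the readout is spectrophotometric, fluorescence-based or a reporter-gene signal; only the measured response changes.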
Targets of high pharmacological relevance are G-protein-coupled receptors, tyrosine kinase receptors, nuclear hormone receptors, ion channels, proteases, kinases, phosphatases and transporter molecules. Detection of a reaction at the molecular level can be done by biochemical assays (e.g., spectrophotometric measurement of the product of an enzymatic reaction), ligand-binding assays (readout by labeling with a tracer) or functional assays (reporter-gene assays quantifying the expression level of a specific reporter gene product, second-messenger assays, and two-hybrid assays for measuring protein-protein interactions). Fluorescence-based assay technologies, isotopic labeling, colorimetry and chemiluminescence are very often used as detection methods. Cell-based assays are more complex and more physiologically relevant than tests at the molecular level; on the other hand, they are labor-intensive and more difficult to validate than molecular assays.
With the potential of so many new compounds to combat bacteria, viruses and debilitating diseases such as Alzheimer's disease, osteoporosis and cancer, why have marine sources not been thoroughly investigated before? Disclosure of a compound, the organism from which it was isolated and its structure devalues the lead, causing pharmaceutical companies to lose their competitive advantage. Many marine organisms are found in remote locations, and simply traveling to and from these locations can require large sums of money. Additional expenses, including the specialized services of divers and submersibles and the costs of keeping personnel safe, can become quite steep. As an example of the prohibitive costs associated with collection of marine organisms, a ship and submersible cost $14,500 per day (Hale et al., 2002).
FUTURE OF MARINE SOURCES
The future looks bright for the pharmaceutical industry to develop new drugs from chemical structures isolated from marine sources. As of 2001, over 13,000 compounds, 3,000 of them denoted as active (i.e., exhibiting potential pharmaceutical effects), have had their chemical structures determined and documented (Fig. 5).
Fig. 5: Metabolites from marine microorganisms (Schweder et al., 2005): (a) antitumor compound, (b) antibiotic compound, (c) antiviral compound, (d) anti-inflammatory compound and (e) antifungal compound
The vast majority of these compounds are being developed in the hope of treating cancer, tumour growth and leukaemia; over 67% of compounds isolated from marine origins have cytotoxic activity (Cragg et al., 2006). Fifty years ago the search for drugs from marine sources was in its infancy, and even though progress has been slow, pharmaceutical companies are beginning to embrace the use of natural marine sources. In the research being conducted today we also see a future trend toward marine natural resources, as the number of papers reporting total syntheses or synthetic analogues is quite extensive. Partial and formal syntheses of compounds of marine origin are, by comparison, not well documented in reviews, so there are many more lead compounds originating from marine natural sources than previously thought (Bourguet-Kondracki and Kornprobst, 2005). Investigators therefore have a large pool of compounds with which to begin their investigations, and these will provide the basis for future generations of drug products (Table 2).
Anticancer drugs derived from marine sources have not yet been approved for the market, yet a significant number are undergoing clinical trials, and the future appears to hold a cancer treatment based on a marine natural source. Natural products have played a significant role in drug discovery: over the past 75 years, natural-product-derived compounds have led to the discovery of many drugs to treat human disease. Drugs developed from marine sources give us this hope and also give us novel mechanisms to fight some of the most debilitating diseases encountered today, including HIV, osteoporosis, Alzheimer's disease and cancer. Although the costs associated with developing drugs from marine sources have been prohibitive in the past, the development of new technology and a greater understanding of marine organisms and their ecosystems are allowing us to further develop research in this area of drug development. This review attempts to link these developments with some global issues and to present a convergent vision of many disparate views of the development of medicinal and biological agents from marine natural sources. It is, in part, a commentary on finding a middle way, an as yet untrodden path in drug discovery, for the global health benefit of humankind from the marine environment.
REFERENCES
Ahn, K.S., G. Sethi, T.H. Chao, S.T. Neuteboom, M.M. Chaturvedi et al. Salinosporamide A (NPI-0052) potentiates apoptosis, suppresses osteoclastogenesis and inhibits invasion through down-modulation of NF-kB-regulated gene products. Blood, 10: 2286-2295.
Balaban, N. and G. Dell'Acqua, 2005. Barriers on the road to new antibiotics. Scientist, 19: 42-43.
Berdy, J., 2005. Bioactive microbial metabolites: A personal view. J. Antibiot., 58: 1-26.
Bergmann, W. and R. Feeney, 1951. Contributions to the study of marine products. XXXII. The nucleosides of sponges. I. J. Org. Chem., 16: 981-987.
Bhadury, P. and P.C. Wright, 2004. Exploitation of marine algae: Biogenic compounds for potential antifouling applications. Planta, 219: 561-578.
Bourguet-Kondracki, M.L. and J.M. Kornprobst, 2005. Marine pharmacology: Potentialities in the treatment of infectious diseases, osteoporosis and Alzheimer's disease. Adv. Biochem. Eng. Biotechnol., 97: 105-131.
Chandran, S.S., J. Yi, K.M. Draths, R. von Daeniken, W. Weber and J.W. Frost, 2003. Phosphoenolpyruvate availability and the biosynthesis of shikimic acid. Biotechnol. Prog., 19: 808-814.
Christie, S.N., C. McCaughey, M. McBride and P.V. Coyle, 1997. Herpes simplex type 1 and genital herpes in Northern Ireland. Int. J. STD AIDS, 8: 68-69.
Colwell, R.R., 2002. Fulfilling the promise of biotechnology. Biotechnol. Adv., 20: 215-228.
Cragg, G.M. and D.J. Newman, 2001. Medicinals for the millennia: The historical record. Ann. N.Y. Acad. Sci., 953: 3-25.
Cragg, G.M., D.J. Newman and S.S. Yang, 2006. Natural product extracts of plant and marine origin having antileukemia potential: The NCI experience. J. Nat. Prod., 69: 488-498.
De Vries, D.J. and P.M. Beart, 1995. Fishing for drugs from the sea: Status and strategies. Trends Pharmacol. Sci., 16: 275-279.
Faulkner, D.J., 2002. Marine natural products. Nat. Prod. Rep., 19: 1-49.
Feling, R.H., G.O. Buchanan, T.J. Mincer, C.A. Kauffman, P.R. Jensen and W. Fenical, 2003. Salinosporamide A: A highly cytotoxic proteasome inhibitor from a novel microbial source, a marine bacterium of the new genus Salinospora. Angew. Chem. Int. Ed. Engl., 42: 355-357.
Hancock, R.E.W., 2007. The end of an era. Nat. Rev. Drug Discov., 6: 28.
Hale, K.J., M.G. Hummersone, S. Manaviazar and M. Frigerio, 2002. The chemistry and biology of the bryostatin antitumour macrolides. Nat. Prod. Rep., 19: 413-453.
Hoskeri, H.J., V. Krishna and C. Amruthavalli, 2010. Effects of extracts from lichen Ramalina pacifica against clinically infectious bacteria. Researcher, 2: 81-85.
Isaka, M., C. Suyarnsestakorn, M. Tanticharoen, P. Kongsaeree and Y. Thebtaranonth, 2002. Aigialomycins A-E, new resorcylic macrolides from the marine mangrove fungus Aigialus parvus. J. Org. Chem., 67: 1561-1566.
Blunt, J.W., B.R. Copp, W.P. Hu, M.H.G. Munro, P.T. Northcote and M.R. Prinsep, 2007. Marine natural products. Nat. Prod. Rep., 24: 31-86.
Jung, W.S., S.K. Lee, J.S.J. Hong, S.R. Park, S.J. Jeong et al. Heterologous expression of tylosin polyketide synthase and production of a hybrid bioactive macrolide in Streptomyces venezuelae. Applied Microbiol. Biotechnol., 72: 763-769.
Martin, V.J.J., D.J. Pitera, S.T. Withers, J.D. Newman and J.D. Keasling, 2003. Engineering a mevalonate pathway in Escherichia coli for production of terpenoids. Nat. Biotech., 21: 796-802.
Minami, H., J.S. Kim, N. Ikezawa, T. Takemura, T. Katayama, H. Kumagai and F. Sato, 2008. Microbial production of plant benzylisoquinoline alkaloids. Proc. Natl. Acad. Sci. USA, 105: 7393-7398.
Morse, S.S., 1997. The public health threat of emerging viral disease. J. Nutr., 127: 951S-957S.
Moreira, D., F. Rodriguez-Valera and P. Lopez-Garcia, 2004. Analysis of a genome fragment of a deep-sea uncultivated Group II euryarchaeote containing 16S rDNA, a spectinomycin-like operon and several energy metabolism genes. Environ. Microbiol., 6: 959-969.
Thakur, N.L., R. Jain, F. Natalio, B. Hamer, A.N. Thakur and W.E.G. Muller, 2008. Marine molecular biology: An emerging field of biological sciences. Biotechnol. Adv., 26: 233-245.
Lederberg, J., 2000. Infectious history. Science, 288: 287-293.
Liu, Z., P.R. Jensen and W. Fenical, 2003. A cyclic carbonate and related polyketides from a marine-derived fungus of the genus Phoma. Phytochemistry, 64: 571-574.
Luesch, H., W.Y. Yoshida, R.E. Moore, V.J. Paul and T.H. Corbett, 2001. Total structure determination of apratoxin A, a potent novel cytotoxin from the marine cyanobacterium Lyngbya majuscula. J. Am. Chem. Soc., 123: 5418-5423.
Okazaki, T., T. Kitahara and Y. Okami, 1975. Studies on marine microorganisms. IV. A new antibiotic SS-228 Y produced by Chainia isolated from shallow sea mud. J. Antibiot., 28: 176-184.
Patrzykat, A. and S.E. Douglas, 2003. Gone gene fishing: How to catch novel marine antimicrobials. Trends Biotechnol., 21: 362-369.
Raja, A., P. Prabakaran and P. Gajalakshmi, 2010. Isolation and screening of antibiotic producing psychrophilic actinomycetes and its nature from Rothang hill soil against viridans Streptococcus sp. Res. J. Microbiol., 5: 44-49.
Rawat, D.S., M.C. Joshi, P. Joshi and H. Atheaya, 2006. Marine peptides and related compounds in clinical trial. Anti-Cancer Agents Med. Chem., 6: 33-40.
Rowley, D.C., S. Kelly, C.A. Kauffman, P.R. Jensen and W. Fenical, 2003. Halovirs A-E, new antiviral agents from a marine-derived fungus of the genus Scytalidium. Bioorg. Med. Chem., 11: 4263-4274.
Lee, S.Y., H.U. Kim, J.H. Park, J.M. Park and T.Y. Kim, 2009. Metabolic engineering of microorganisms: General strategies and drug production. Drug Discovery Today, 14: 78-88.
Schmid, I., I. Sattler, S. Grabley and R. Thiericke, 1999. Natural products in high throughput screening: Automated high-quality sample preparation. J. Biomol. Screen., 4: 15-25.
Shizuya, H., B. Birren, U.J. Kim, V. Mancino, T. Slepak, Y. Tachiiri and M. Simon, 1992. Cloning and stable maintenance of 300-kilobase-pair fragments of human DNA in Escherichia coli using an F-factor-based vector. Proc. Natl. Acad. Sci. USA, 89: 8794-8797.
Sudek, S., N.B. Lopanik, L.E. Waggoner, M. Hildebrand, C. Anderson et al. Identification of the putative bryostatin polyketide synthase gene clusters from Candidatus Endobugula sertula, the uncultivated microbial symbiont of the marine bryozoan Bugula neritina. J. Nat. Prod., 70: 67-74.
Schweder, T., U. Lindequist and M. Lalk, 2005. Screening for new metabolites from marine microorganisms. Marine Biotechnol., 96: 1-48.
Tsuda, M., T. Mugishima, K. Komatsu, T. Sone, M. Tanaka, Y. Mikami and J. Kobayashi, 2003. Modiolides A and B, two new 10-membered macrolides from a marine-derived fungus. J. Nat. Prod., 66: 412-415.
Tziveleka, L.A., C. Vagias and V. Roussis, 2003. Natural products with anti-HIV activity from marine organisms. Curr. Top. Med. Chem., 3: 1512-1535.
How did humans end up here? What was the origin of the cosmos? What is the cosmos? Prominent intellectuals and scientists have sought answers to these questions for thousands of years, yet only very recently has humanity come close to forming a full portrait of our awe-inspiringly intricate and formidable cosmos. This summary provides a quick lesson on each of these principal existential questions: how the cosmos came into being, how life emerged, and how the world's prominent thinkers arrived at their breakthrough ideas. Even so, for all the distance science has carried us, there is much about the world we still cannot answer. Numerous life forms in the depths of our oceans, most of what constitutes the cosmos, and even elements of the world beneath our feet remain shrouded in mystery.
Chapter 1 – The Big Bang theory proposes that the cosmos came into existence from a singularity in a very brief moment.
In 1965, two radio astronomers puzzled over a persistent noise picked up while conducting experiments with a communications antenna. The noise turned out to be much more than noise: the signal had originated 90 billion trillion miles away, at the very moment of the cosmos's formation – what we now call the Big Bang. Even though the finding was lucky, the astronomers were awarded the Nobel Prize in Physics and played an important role in spreading the Big Bang theory. The theory holds that our cosmos traces back to a single point of nothingness known as a singularity, a point so compressed that it has no dimensions at all. In this single, compact point, the main ingredients of the universe were confined. Abruptly, for reasons still unknown, this singularity 'banged' in "a single blinding pulse," throwing the future ingredients of our cosmos out into the void. While the causes of this explosion remain a mystery, scientists know more about what happened afterward. In the Big Bang, matter – the ingredients inside that singularity – expanded so quickly that the whole cosmos came into existence in no more than the time it takes to prepare a sandwich. Almost instantly after the 'bang', the cosmos swelled enormously, doubling in size every 10^-34 seconds – an extraordinarily fast process. Today we know that nearly all the matter in the cosmos, together with the fundamental forces that govern it, was formed in just three minutes. The cosmos now has a diameter of at least one hundred billion light-years and is still expanding at this very moment.
Chapter 2 – The vastness of the cosmos makes it probable that other thinking organisms exist.
The cosmos is so enormous that the human imagination can scarcely grasp it. Astronomers estimate that approximately 140 billion galaxies exist in the visible cosmos. If each galaxy were a frozen pea, there would be enough peas to fill a large auditorium. For a very long time, the size of the cosmos was a point of debate among scientists – a debate that ended when Edwin Hubble entered the scene.
In 1924, Hubble showed that a smudge of light long considered a gas cloud was in fact a whole galaxy, situated at least 900,000 light-years from Earth. This opened our minds to the idea that the cosmos is not made up solely of the Milky Way – the galaxy in which Earth is located – and that there are numerous other galaxies in the universe. Put differently, the cosmos had to be far more gigantic than anyone had ever guessed. With this discovery, humans began to wonder whether they were the only thinking beings in the cosmos. Professor Frank Drake's well-known equation from 1961 proposes that humanity is likely just one of millions of advanced civilizations. Drake arrived at this conclusion by taking the number of stars in a chosen portion of the cosmos, narrowing it to the number that probably have planetary systems, then to the number of systems that could theoretically sustain life, and finally to the number where life might evolve and acquire intelligence. Even though the figure shrinks considerably at every step – and even when the outcome is kept as low as possible – Drake's estimates of the number of advanced civilizations in the Milky Way consistently come out in the millions. But because the cosmos is so gigantic, scientists estimate that the average distance between any two hypothetical civilizations is at least 200 light-years – and one light-year is approximately 5.8 trillion miles. So even if other advanced civilizations exist, the distances between us rule out anything like a casual weekend visit. Thinking about the scope of the cosmos might make you a bit dizzy! The next chapters turn to how we learned to measure the Earth itself.
Chapter 3 – Newton's ideas about how the Earth moves, how it is shaped, and how much it weighs were remarkably sound.
Newton was an unusual scientist. Aside from experiments that involved poking a needle around his own eyeball and eye socket, and staring fixedly at the sun for as long as he could bear, he was an ingenious and prominent mathematician. His pioneering work, Principia Mathematica, entirely transformed how we think about motion. Apart from describing Newton's three laws of motion, Principia Mathematica presents his universal law of gravitation, which declares that every object in the cosmos – no matter how enormous or tiny – exerts a pull on every other object. These laws made it possible to carry out measurements that had been deemed unthinkable. For instance, Newton's laws gave us a means to gauge the weight of the Earth. They also helped us understand that the world isn't perfectly round: the force of the Earth's spin should flatten the globe slightly at its poles and make it bulge at the equator. This finding was a significant blow to scientists whose estimates rested on the presumption that the Earth was a perfect sphere. Consider the French astronomer Jean Picard, who had calculated the Earth's circumference by means of an elaborate technique of triangulation – a scientific success that was a great source of pride for him. Sadly, Newton's laws rendered Picard's calculation invalid.
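The summary does not show the arithmetic, but "weighing" the Earth with Newton's law is a standard back-of-the-envelope exercise (the values of g, R and G below are ordinary textbook constants, not figures from the book):

\[
M_\oplus = \frac{g R^2}{G} \approx \frac{9.81\ \mathrm{m\,s^{-2}} \times \left(6.37 \times 10^{6}\ \mathrm{m}\right)^{2}}{6.674 \times 10^{-11}\ \mathrm{m^{3}\,kg^{-1}\,s^{-2}}} \approx 6 \times 10^{24}\ \mathrm{kg},
\]

in line with the modern value of about 5.97 x 10^24 kg.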
Newton’s laws sparked an entirely novel comprehension of the way to gauge celestial objects. Aside from getting more knowledgeable about the Earth’s motion, shape, and weight, scientists expanded their knowledge with respect to motions of other planets, tidal motion, and importantly – what is the reason for our spinning planet to not fling us into the depths of the universe! Chapter 4 – Rocks and fossils proved that the Earth was very aged, however, it was through radioactivity that we learned its age. Even though this might sound surprising, what we know in regard to the Earth’s age is a newer discovery than the creation of instant coffee or the first appearance of television, even the discovery of atomic fission. Actually, for a very long duration, the sole thing geologists knew was that the Earth’s history went far back. Even if they could arrange diverse rocks by age – classifying them depending on the periods when the sediment had deposited – geologists did not know anything about the duration of these periods. When humanity reached the twentieth century, paleontologists had become a part of the studies to estimate the Earth’s age by further separating these ages into epochs via fossil records. However, none of them were able to give a number as to the age of any of the bones found, and their guesses varied between 3 million and 2.4 billion years. Only after humanity had had a knowledge of radioactive materials were the scientists able to estimate the Earth’s age. In 1896, Marie and Pierre Curie found that particular rocks emitted energy without displaying any difference in size or shape. They called this phenomenon radioactivity. Physicist Ernest Rutherford, who later found that radioactive elements decayed into separate elements in a fairly foreseeable manner became interested in their studies. To be more precise, he realized that it continuously lasted for the same amount of time for half the sample to decay – which is called half-life – and that it was possible to utilize this knowledge in calculating a material’s age. Rutherford went on to apply his theory on a piece of uranium to see how old it is, which revealed that the piece of uranium was 700 million years old. Only after 1956 when Clair Cameron Patterson came up with a more exact age-determining method did we began to achieve a real understanding of the Earth’s age. Through analyzing the age of ancient meteorites, he calculated the Earth’s age to be approximately 4.55 billion years old (plus or minus 70 million years) – which is quite similar to the present scientific consensus of 4.54 billion years! As previously mentioned, comprehending the cosmos is an intricate process. The following chapters will examine scientific endeavors to build order from complexity: the theory of relativity and quantum theory. Chapter 5 – Einstein’s theory of relativity played a rather important role in our comprehension of the universe as a whole. In his school years, Albert Einstein wasn’t a successful pupil and student. Following the failure in his earliest college entrance exams, he began to be employed in a patent office. However, while working in the patent office, physics caught his interest and he wanted to study it, and in 1905 wrote a paper that would transform the world entirely. His pioneering Special Theory of Relativity demonstrates that the concept of time is relative, and does not go further all the time in the same fashion as an arrow. 
This idea is hard to comprehend for many, since we do not feel the effects of relative time in everyday life. For light, gravity and the universe itself, though, Einstein's theory means a great deal. Essentially, the theory holds that the speed of light is fixed: it does not change for observers no matter how fast they travel. The opposite goes for time: if one object moves faster than another, time will appear to pass more slowly for it. Even harder to grasp than the special theory is Einstein's General Theory of Relativity, which completely transformed the way we think about gravity. After watching a workman fall from a roof, Einstein began pondering gravity more deeply – the one ingredient his special theory had left out. Published in 1917, his general theory proposed that time is entwined with the three dimensions of space as spacetime. Spacetime can be pictured as a sheet of stretched rubber: place a large round object in the middle and the sheet stretches and sags a little. Heavy objects, like the sun, do the same thing to spacetime. Roll a smaller object across the sheet and it tries to move in a straight line, but as it approaches the bigger object and the slope of the fabric, it begins rolling downward. Gravity, in other words, is a product of the bending of spacetime. In one ingenious theory, Einstein showed the world how time and gravity work!
Chapter 6 – Quantum theory elucidated the subatomic world, but it left physics with two separate bodies of laws.
As more scientists studied atoms, it became clear that atoms could not be explained by the traditional laws of physics. By those laws, atoms should not exist at all: the positively charged protons in the nucleus should repel one another, blowing the atom apart, while the electrons spinning around them should constantly collide. Scientists overcame this problem by introducing a new theory that disclosed how the subatomic world works. In 1900, the German physicist Max Planck proposed quantum theory, according to which energy is not continuous but comes in separate packets known as quanta, which are far smaller than atoms. His idea remained purely theoretical until 1926, when another German physicist, Werner Heisenberg, introduced "quantum mechanics," whose objective was to make sense of atoms' strange behavior. At the core of this discipline lay his uncertainty principle, which showed that electrons possess the properties of both particles and waves. Consequently, it is impossible to predict with full accuracy where an electron will be at any given time – we can only determine the probability that it is located at a particular point in space. The development of quantum theory created as much uncertainty as it did clarity, ultimately splitting physics into two sets of laws: one for the subatomic world and one for the larger cosmos. The theory of relativity does not apply to the subatomic world, and quantum theory has nothing to say about phenomena such as gravity or time.
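The summary never quantifies "time will appear to pass more slowly," so, as a standard textbook supplement rather than anything taken from the book, the size of the effect is given by the Lorentz factor:

\[
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad v = 0.9c \;\Rightarrow\; \gamma \approx 2.3,
\]

meaning a clock moving past you at 90% of the speed of light would appear to tick about 2.3 times more slowly than your own.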
This disorder – physics split into two incompatible rulebooks – annoyed Einstein so much that he spent the rest of his life trying to devise what he called a Grand Unified Theory, but he never succeeded. For some people, the most impressive things about atoms are the visible structures they build, such as mountains and oceans. The following chapters continue with the Earth and what makes it habitable.
Chapter 7 – Even if life on Earth is full of hurdles, it is thanks to the universe that life can exist at all.
Notwithstanding the remarkable variety of life on Earth, the world is far from a hospitable place to dwell. By one estimate, close to 100 percent of the Earth's habitable space is entirely inaccessible to humans, since our species requires land and oxygen to survive. Nor do we fare much better on land: only a little more than 10 percent of the world's total landmass has conditions suited to human life. Some scientists have gone to great lengths to show just how fragile humans actually are. The father-and-son team of John and Jack Haldane carried out experiments on their own bodies to demonstrate how hostile conditions become once a human leaves the surface world. Jack built a decompression chamber to simulate conditions in the deepest parts of the oceans and, in the course of these experiments, repeatedly poisoned himself with the elevated oxygen levels found in the deep sea. In one experiment, oxygen saturation triggered a fit so severe that several of his vertebrae were crushed. Considering how difficult it is to exist on much of the Earth, it is remarkable that humanity exists at all. Judging by the planets discovered so far, a planet that supports life is plainly a rare find. A planet must in fact meet four precise criteria to support life. First, it must sit at the right distance from a star – too close and everything on it burns; too far and it essentially freezes. Second, it must be capable of forming an atmosphere that can shield life from cosmic radiation. Third, it needs a moon to steady the numerous gravitational influences acting on it, keeping it spinning at precisely the right speed and angle. Lastly, timing matters enormously: the complicated chain of events that brought about human existence had to play out in a specific way at specific times to allow life and dodge catastrophe.
Chapter 8 – We know remarkably little about the laws that govern life in the oceans.
It is curious that we call this planet "Earth" rather than "Water," since water is found everywhere on it. Consider this: almost 70 percent of our bodies is water, and the oceans covering the Earth hold more than 1.3 billion cubic kilometers of it. Given how essential water is to life, it is striking that humans began studying the seas scientifically only very recently. Although almost all of the planet's water lies in the ocean, the first genuine investigation of the oceans was not organized until relatively recently.
In 1872, a retired English warship set out on a three-and-a-half-year mission to sail around the planet, gather specimens from the waters and collect new kinds of marine life, thereby founding a new scientific field: oceanography. Exploration of the dark seas continued with two American adventurers, Otis Barton and William Beebe. In 1930, they broke a world record by descending 183 meters into the ocean in a small iron chamber known as a bathysphere; by 1934, they had descended more than 900 meters. Sadly, the two were not professional oceanographers and lacked adequate lighting and instruments, so all they could report was that the ocean depths were full of odd things, and academics and scientists largely disregarded their discoveries. Today scientists know the ocean reaches depths of more than 10,918 meters, yet beyond such measurements our knowledge remains thin: humanity has produced better maps of Mars than of the seabeds. By one estimate, we may have studied only a millionth or even a billionth of the ocean abyss. There may be as many as 30 million kinds of sea-dwelling life forms in the ocean depths, many still waiting to be discovered. Even the lives of ocean creatures we can readily see, like the blue whale, remain largely a mystery.
Chapter 9 – Bacteria are the most populous life form on Earth, and they are the reason for our existence – without them, we would not be here.
Germaphobes abound, but however clean you are, vast numbers of bacteria will always surround you. Consider this: when you are healthy, nearly one trillion bacteria dwell on your skin. Bacteria make up a huge portion of the Earth's living matter and can adapt to its most diverse environments. In fact, if we totaled the mass of every living thing on Earth, these tiny organisms would account for 80 percent of it. This is because bacteria propagate very fast: they are extremely prolific and can produce a new generation in less than 10 minutes. This means that, absent outside constraints, a single bacterium could in theory generate so many offspring in two days that they would outnumber the protons in the cosmos! Moreover, bacteria can survive and multiply on nearly anything. Given just a little moisture, they can live in even the harshest environments, such as the waste tanks of nuclear reactors. Some bacteria are so resilient that they seem impossible to eradicate: even if a bacterium's DNA is exposed to extreme levels of radiation, it simply regenerates as though nothing had happened. We should be grateful that bacteria are everywhere, for they are vital to our survival. They recycle our wastes, clean our water, maintain the fertility of our soil, turn our food into useful vitamins and sugars, and make the nitrogen in the air available to us, among other essential services. Indeed, most bacteria are neutral or even helpful to humans.
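The protons-in-the-cosmos claim above is easy to sanity-check (the figures below are standard order-of-magnitude values, not taken from the summary): two days of 10-minute generations is 2880/10 = 288 doublings, so

\[
2^{288} = 10^{\,288 \log_{10} 2} \approx 10^{86.7},
\]

which indeed dwarfs the roughly 10^80 protons usually estimated for the observable universe. In practice, nutrients run out long before that point – and, as noted above, most of those cells would be harmless to us anyway.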
But roughly one in 1,000 bacteria causes disease, and even this small fraction makes bacteria the third-deadliest killer of humans across the planet. Several of the most lethal diseases, from plague to tuberculosis, stem from bacteria.
Chapter 10 – Life began spontaneously, as a package of genetic material that managed to replicate itself.
Picture this: precisely the right contents of your kitchen cupboard mysteriously blend themselves into a delicious cake, and that cake then starts replicating itself to produce more cakes. Sounds odd? What is odder is that groups of molecules, such as amino acids, carry out essentially this process all the time. On closer examination, though, this spontaneous process is not so magical. Self-assembly happens constantly: from the symmetry of snowflakes to the rings of Saturn, patterned complexity appears everywhere in the cosmos. So it seems natural that amino acids would arrange themselves into the proteins from which bodies are built. A living organism, after all, is nothing but an assembly of molecules. The main distinction of organic matter – whether a carrot or a goldfish – lies in its principal components: carbon, hydrogen, oxygen and nitrogen. Spontaneous life, therefore, is plausible. But how did it arise? The life we are familiar with is the outcome of a single genetic trick that has been handed down through the generations for roughly 4 billion years. This moment of formation, known among biologists as the Big Birth, took place when a small package of chemicals found a way to split itself, passing a copy of its genetic code into the primordial soup. This process ultimately brought bacteria into existence, and they remained the only life forms on Earth for 2 billion years. These bacteria eventually learned to exploit water molecules, creating the process of photosynthesis and supplying the Earth with oxygen. Later, some 3.5 billion years ago, the Earth's earliest ecosystems began to emerge in shallow waters, and once oxygen reached today's levels, complex life forms arose, split between those that release oxygen (such as plants) and those that consume it (such as humans). However different living organisms may look, all of them use the same genetic dictionary and "read" the same code; a chimpanzee and a banana have far more in common than their differences suggest.
Chapter 11 – Although the world hosts an enormous number of species, all life can be considered one.
To say that there are many diverse species on Earth is an understatement: the number of species is thought to lie somewhere between 3 million and 200 million. A report in The Economist suggests that we have probably not yet discovered 97 percent of all species, plants and animals included. Furthermore, there is not even a central registry of the species that are known, which leaves us all the more bewildered by the variety of life on the planet. Yet in spite of the differences between and among species, every living thing is still connected.
In 1859, in his book The Origin of Species, Charles Darwin demonstrated that all living things are related and that species differentiate and become "fitter" through a process of natural selection, implying a shared common ancestor in the remote past. Modern studies of genes and DNA have underscored that humans have far more in common with one another than once thought: compare your DNA with another person's and 99.9 percent of the code is exactly the same. The similarities extend beyond our own species, too. It may be hard to believe, but roughly half of your DNA is identical to that of a banana; about 60 percent of our genes are identical to those found in the fruit fly, and at least 90 percent match, at some level, genes found in mice. Stranger still, scientists have found that stretches of DNA can be shared between species: human DNA can be inserted into certain cells of flies, which "accept" it as their own – further evidence that life arose from a single template. We can therefore regard human beings as archives of a long history of change, reaching back to the time when life first originated. Surveying the rich variety of life can feel very much like witnessing a miracle. Our last chapter asks whether this miracle could suddenly come to a halt.
Chapter 12 – The Earth is constantly at risk of destruction by asteroid impacts, volcanic eruptions, or earthquakes.
Even though the Earth has enjoyed a relatively peaceful stretch for a long time, that does not mean there are no perils within the solar system or on the Earth itself. In fact, our solar system is a highly hazardous place to live. Asteroids – rock-like objects following their own orbits within the solar system – regularly approach the Earth at distances that can only be called perilous. At least a billion asteroids cross near space, and many of them routinely pass close to the Earth; more than one hundred million asteroids larger than ten meters across regularly cross near the Earth's orbit. Scientists estimate that as many as 2,000 of these are big enough to endanger civilization as we know it. More distressing still, near misses with destructive asteroids probably occur two or three times a week and go completely unnoticed. Beyond these external perils there are internal ones, such as earthquakes, which can strike at any time. An earthquake happens when two tectonic plates press against each other, building pressure until one of them finally gives way. This is a critical issue for places like Tokyo, which sits at the meeting point of three tectonic plates. Moreover, a distinctive kind of earthquake, the intraplate quake, can occur far from plate edges; because such quakes originate deep in the Earth's crust, we cannot predict when they will happen. Volcanoes pose a threat too. In 1980, Mount St. Helens erupted in the US state of Washington, killing 57 people.
Although many of the government’s volcanologists were actively watching and gauging the volcano’s behavior, volcanologists didn’t anticipate something like this. Despite all the scrutinization, Mount St. Helens erupted. There is a reason to worry since a gigantic volcanic hot spot is positioned straight beneath the western United States. According to scientists’ estimation, it erupts every six millennia, causing a three-meter coat of ash on anything that is situated within 1,600 kilometers. Very disturbing news for humanity, its last eruption goes back to six millennia ago! A Short History of Nearly Everything by Bill Bryson Book Review In the recent several centuries years, humanity has gradually gathered pieces to the puzzle of our existence. Today, humanity has more knowledge with respect to our cosmos, the Earth, and ourselves than anyone could have ever dreamed. However, many things continue to remain as a mystery but the process of scientific exploration never ends!
Mass-fatality incidents, whether natural or man-made, occur often and can quickly overwhelm local, state, and federal agencies, resources, and personnel.1 Of critical importance is a rapid and effective response from skilled, multidisciplinary teams that are trained to manage each incident's aftermath, including the identification of the deceased.2 Dental hygienists are widely distributed and, when trained in this area, can add to response capabilities during mass-fatality incidents in all aspects of postmortem dental examinations.2 Hence, preparation and training in anticipation of mass-fatality incidents is vital.2 The literature is devoid of models for mass-fatality preparedness and victim identification in dental hygiene curricula; however, mass-fatality training has been recommended for predoctoral dental school curricula.3-6 Mass-fatality training that incorporates computer-based multimedia to present topics through integrated text, sound, graphics, animation, video, imaging, and spatial modeling has been used in developing forensic training in dental curricula.3,7 Some dental educators believe exposure to and participation in forensic specialty coursework might also stimulate students' interest in serving their community as disaster responders.5,8 Dental hygiene education provides competencies in administrative skills, dental radiology, dental examinations, and documentation of the oral cavity applicable to a clinical setting. However, there are currently no accreditation standards for mass-fatality training in dental or dental hygiene curricula.3-6 Disaster-victim identification during a mass-fatality incident is the most important dental forensic specialty area for dental hygienists to participate in, and they are recommended as viable responders for disaster-victim identification efforts.9-11 The defined role of dental hygienists as mass-fatality team members includes serving as dental registrars for managing antemortem and postmortem dental records, providing surgical assistance for jaw resections, imaging postmortem dental radiographs, and performing clinical examinations of the oral cavity as part of the postmortem or records-comparison teams.9-16 Identifying the deceased must be safe for emergency responders, as well as reliable and accurate.2,17,18 However, dental hygiene participation and education in mass disasters have been inadequately addressed in the literature. Expansive training is needed and recommended because practitioners with special forensics training and experience are better able to accomplish the duties needed for identifications.2,4-9 There are a limited number of studies addressing how disaster preparedness should be developed in dental curricula.
In dental education, More et al specifically recommend a multimedia approach to catastrophe preparedness with "hands-on" simulations that provide an active learning experience, including mock disaster scenarios.3 More et al's publication on the development of a curriculum to prepare dental students to respond to catastrophic events cites technology as "ideal" in combination with case studies, drills, and dramatizations using multimedia and simulated events.3 Investigators have suggested that mass-fatality training be interactive and provide assessments of skill acquisition, because regular practice and learning keep skills and best practices for emergency preparedness and response current.5 Stoeckel et al5 and Hermsen et al6 recommend that forensic dental education in predoctoral dental school curricula include identifying victims of a mass disaster using portable radiology equipment and victim-identification software systems. Repeated practice is required to strengthen skills in radiographic imaging technique for exposure of postmortem dental remains.19 Meckfessel et al demonstrated that multimedia was effective in a dental radiology course.19 The Department of Oral and Maxillofacial Surgery of the Hannover Medical School introduced an online, multimedia dental radiology course called "Medical Schoolbook" for predoctoral third-year dental students.19 It was designed to support multimedia learning modules.10 In the low-media module group, 15 out of 42 students failed the radiology final examination; two years after initiating the multimedia, only 1 out of 67 students failed.19 The authors concluded that the radiology program benefited from additional media for teaching difficult concepts and transferring knowledge. Multimedia presentations of simulated events can provide an environment with authentic learning situations that facilitate knowledge transfer and retention, which is beneficial for safe practice.20 Mayer found that media support the way the human brain learns.22 His cognitive theory of multimedia learning supports dental educators' recommendations for the use of multimedia.21 Mayer's theory centers on the idea that learners attempt to build meaningful connections between words and pictures, learning more deeply than with words or pictures alone.20,21 In the absence of an actual mass-fatality incident, learners need training resources that connect their established competencies with the additional competencies or skills needed for mass-fatality training and forensics. Multimedia could provide easily deployable training modules that can be reviewed repeatedly with actual demonstrations for just-in-time training, including abbreviated training sessions for untrained volunteers at the time of an actual incident. The key elements of Mayer's theory rest on three assumptions.20 First, the dual-channel assumption holds that working memory has auditory and visual channels.
Mayer’s “Modality Principle” states that people learn better from words and pictures when words are spoken rather than printed.20,21 Next, the limited capacity assumption is that working memory is limited in the amount of knowledge it can process at one time, so that only a few images can be held in the visual channel and only a few sounds can be held in the audio channel.20,21 Lastly, the active processing assumption explains that it is necessary to engage our cognitive processes actively to construct a coherent mental representation and to retain what we have seen and heard. Learners need to be actively engaged to attend to, remember, organize, and integrate the new information with other or prior knowledge.20,21 Use of multimedia has several advantages, including observation of simulated experiences and opportunities for visualizing a process or procedure before being involved physically.20-23 This provides the potential for increased cognitive knowledge, analysis, and application of new knowledge in a “safe” environment.20-23 Stegeman and Zydney also found that learners who have repeated access to information and videos had an advantage over students who did not have access to the materials for further study.23 Mayer identifies this improvement in learning as the “multimedia effect.”20 Audio and video presented together are held in working memory simultaneously, creating referential links between the two. In another study, Mayer and Moreno found that onscreen text presented with images can overload the learner’s visual processing system, because the student must both read and simultaneously view the image, and both activities use a single channel, the visual channel.22 Narration, by contrast, is processed in the verbal information processing system, part of the auditory channel.20,21 An image with accompanying narration therefore uses dual channels; dual-channeling usually involves pictures and sounds, such as a narrated PowerPoint. Narrated video is likewise considered multimedia, because our brains pull the underlying visual and audio streams together, and because the load is distributed across both channels rather than concentrated in one, researchers believe the information is easier to remember and retain.20,21 Emergency experts have underlined disaster preparedness as a way to reduce the many challenges that occur during incident response and management.1-18 This study investigates the effectiveness of strategies for mass-fatality training among dental professionals. More specifically, it assesses whether the use of multimedia is likely to enhance educational outcomes related to mass-fatality training. Multiple-choice examination scores and clinical competency-based radiology lab scores of two groups of second-year dental hygiene students were compared. Interest in this specialty area under each training approach was also assessed.
Methods and Materials
Mayer’s “Modality Principle” and Stoeckel et al’s recommendations for mass-fatality training for dental students were the basis for the use of multimedia and a “hands-on” clinical competency-based radiology lab for the mass-fatality training in this study.3-6,20 A two-group, randomized, double-blind, pre- and post-test research design was used (Table 1). The sample for this educational evaluation included dental hygiene students in the first semester of the second year of an entry-level baccalaureate degree program.
All participants were required to have completed prerequisite coursework, to have completed 1 year of oral radiology, and to be certified in Virginia radiation safety. Pregnancy or suspected pregnancy was part of the exclusion criteria, due to the use of portable radiation devices in atypical positions. After Institutional Review Board approval, the researchers invited students to participate in the study via an online announcement. Participation was voluntary, and students could withdraw from the study at any time without impacting their status in the dental hygiene program; 42 participants completed informed consent documents and were enrolled and randomly assigned to either the control group (n = 21) or the experimental group (n = 21). The control group viewed an educational module with low media, while the experimental group viewed the information with multimedia. For the purpose of this study, multimedia was defined as media that integrated text, graphics, audio, and video demonstrations to allow for self-pacing, repetition of reading text, listening to and viewing materials, and/or guided demonstrations. Low media was defined as using teaching presentation software with text and graphics (PowerPoint) that also allowed for self-pacing and repetition, but only through reading and in a one-dimensional visual context. The content for both of the educational modules was comparable and was developed by an instructional designer and dental hygiene faculty members who have emergency preparedness and response training. All student participants viewed their assigned educational module with unrestricted access before participating in the clinical competency-based radiology lab. Both educational modules were deployed online via the university-supported Blackboard Learn® system (Blackboard, Inc). The educational modules for both multimedia and low media were of parallel content and included the definition of forensic odontology, the role of the dental hygienist during a mass-fatality incident, and victim identification. The educational modules specifically addressed biosafety considerations, personal protective equipment, and sterilization procedures in the mortuary setting. Dental radiography topics included techniques for using portable handheld radiographic equipment when imaging simulated victim remains and safe exposure of postmortem radiographs. An online pre-test was given before viewing the educational module. The post-test was administered after student participants viewed the educational module and completed the clinical competency-based radiology lab. The multiple-choice pre- and post-tests had the same 15 forced-choice questions on interest in mass-fatality training and in taking radiographs of victim remains (two questions), knowledge of forensics (two questions), personal protective equipment and infection control in a mortuary setting (four questions), radiation safety (three questions), and radiographic technique when imaging simulated victim remains (four questions). Students had 1 week prior to their clinical competency-based radiology lab to view the educational module in full. The clinical competency-based radiology lab included exposure of 11 intraoral radiographs of six fragments of lubricated and real human skulls, with bitewing, anterior, and posterior periapical images.
To evaluate the performance of students on their technique when imaging dental remains, all radiographic images were scored by two calibrated examiners, and a radiographic evaluation form was used to identify errors in the following categories: angulation, placement, exposure, and density. Errors were entered as: 0 = no error, 1 = slight error not indicating a retake of the image, and 2 = nondiagnostic error requiring retake of the image. Students received instructions on technique through the educational module. No instruction on radiographic technique was given during the radiology lab portion of the study, and there were no retake exposures. Lab equipment included a portable handheld x-ray device (Nomad Pro®; Aribex, Inc), a direct digital image sensor (Schick Elite®; Sirona Dental), and a modified image receptor holder, which is used at onsite, temporary morgues during mass-fatality incidents. Quantitative analyses of interactions, pre- and post-test results, and radiology laboratory results were performed using SAS® 9.3 software. Significance was set at α = 0.05 for analysis of variance (ANOVA), performed after the assumptions of normality and equality of variance had been met. Tests of the equality-of-variances assumption were also conducted to validate the statistical analyses; more specifically, Levene’s test, the Brown-Forsythe test, and Bartlett’s test for homogeneity of variance were found to have high P values, indicating that additional corrections were not necessary prior to making comparisons between groups. A total of 39 participants out of 42 (92.8%) completed the pre- and post-tests for the multiple-choice exam (experimental group, n = 20; control group, n = 19); 38 participants completed the radiology lab portion of the study (experimental group, n = 20; control group, n = 18). One participant was excluded from the experimental group and two from the control group because they did not complete the research protocol in its entirety. The means and standard deviations for the experimental and control groups were calculated. The mean sum pre-test score for both groups combined was 8.1 (SD = 1.32). The mean sum pre-test score was 8.4 (SD = 1.35) within the experimental group and 8.2 (SD = 1.32) within the control group. The mean post-test score for the groups combined was 9.9 (SD = 1.40); it was 9.95 (SD = 1.23) within the experimental group and 10 (SD = 1.6) within the control group. ANOVA indicated no significant gain between the groups; however, there was significant improvement in scores within each group (Table 2). In the control group, the mean pre-test score was 8.2 (SD = 1.31), with a mean post-test score of 10 (SD = 1.59); a similar analysis revealed a significant improvement in scores, with a P value < .0001. Students reported similar interest in learning more about the role of the dental hygienist in disaster-victim identification for mass-fatality incidents from baseline (99.9%) to post-test (94.8%). Students reported slightly more interest in exposing radiographic images on postmortem remains at the post-test (94.7%) compared to baseline (88.6%). Interest in disaster-victim identification did not change significantly from pre-test to post-test, with a mean difference score of -0.07 (SD = 0.634) (P = .45). Results also suggest students from both groups showed an increased interest in postmortem radiographic imaging after the educational modules and clinical competency-based radiology lab, with a mean difference score of 0.12 (SD = 0.57).
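The analyses described above were run in SAS 9.3; neither the study’s code nor its raw data are reproduced here. As an illustration only, the same kinds of checks and comparisons can be sketched in Python with SciPy on hypothetical score arrays (every variable name and number below is made up for the example):

```python
# Illustrative sketch only: the study ran its analyses in SAS 9.3; this
# reproduces the same kinds of checks (homogeneity of variance, one-way
# ANOVA between groups, paired pre/post comparison) on made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 15-point exam sums for two groups of roughly 20 students each
pre_exp = rng.normal(8.4, 1.35, 20).round()
post_exp = rng.normal(9.95, 1.23, 20).round()
pre_ctl = rng.normal(8.2, 1.31, 19).round()
post_ctl = rng.normal(10.0, 1.59, 19).round()

gain_exp = post_exp - pre_exp
gain_ctl = post_ctl - pre_ctl

# Homogeneity-of-variance checks: Levene, Brown-Forsythe (median-centered
# Levene), and Bartlett
print(stats.levene(gain_exp, gain_ctl, center="mean"))
print(stats.levene(gain_exp, gain_ctl, center="median"))
print(stats.bartlett(gain_exp, gain_ctl))

# Between-group comparison of gains (one-way ANOVA, alpha = 0.05)
print(stats.f_oneway(gain_exp, gain_ctl))

# Within-group pre/post comparisons
print(stats.ttest_rel(post_exp, pre_exp))
print(stats.ttest_rel(post_ctl, pre_ctl))
```

A repeated-measures ANOVA would be an equally reasonable choice for the within-group comparison; the sketch simply mirrors the tests named in the text.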
Overall, the participants performed well in both the educational modules and the clinical competency-based radiology lab, with some improvement from pre- to post-test scores within the groups and little difference in scores between the two groups. In the experimental group, the mean score of 0.3 (SD = 1.09) revealed no significant gain in radiation technique knowledge (P = .16). Within the control group, there was also no significant difference in radiation technique knowledge, with a mean score of 0.26 (SD = 0.81). For radiation safety, there was a statistically significant gain in knowledge from pre- to post-test sums across the groups, with a mean score of 0.69 (SD = 0.76) (P < .0001); the experimental group mean score was 0.55 (SD = 0.68) and the control group mean score was 0.84 (SD = 0.76). There was no significant gain in scores between the two groups for forensic knowledge (P = .210); the mean score was -0.10 (SD = 0.45) for the experimental group and -0.1 (SD = 0.57) for the control group. Lastly, a statistically significant difference was found between the two groups in terms of infection-control scores (P < .0001); the experimental group had a mean score of 0.75 (SD = 0.79), and the control group 0.79 (SD = 0.63). The correlation between radiation safety and technique was 0.33 (P = .0406); therefore, a significant positive relation existed between the two variables: the greater the radiation safety score, the greater the radiation technique score in both groups. For the clinical competency-based radiology lab portion of the study, the higher the score on the radiographic evaluation form, the worse the performance (that is, the more errors per radiographic image). The experimental group had an overall mean score of 21.95, and the control group had an overall mean score of 21.94. No significant difference was found between the experimental and control groups in overall laboratory scores (P = .997). Comparisons were also made between the experimental and control groups in the specific error categories, which included errors in placement of the digital image receptor, vertical and horizontal angulation errors of the position-indicating device, exposure errors, mounting errors, and an “other” category for errors that did not fall within one of the above-mentioned categories. Between the two groups, there were no significant differences within the four categories of radiographic technique errors. Table 3 presents the means, standard deviations, and related P values for each category. Because there were no mounting errors recorded for either group, this category was omitted. This study compared low-media and multimedia approaches to mass-fatality training via a multiple-choice examination, a competency-based radiology lab, and an assessment of changes in interest in mass fatality as a specialty area. This type of research is not currently found in the dental hygiene literature. The review of mass-fatality training literature suggested that approaches to preparing dental hygienists for disaster response and victim identification need to be further explored. This study addressed that gap by looking specifically at dental hygiene mass-fatality training within the framework of what has been published on the dental curriculum.
The majority of participants in each group at the post-test reported a high level of interest in mass-fatality training and in disaster-victim identification through exposing radiographic images on simulated victim remains, which supports Stoeckel et al’s suggestion that exposure to specialty coursework can encourage interest.5 Exposure to training in the forensic specialty area also gives dental and dental hygiene students the opportunity to decide whether they are interested in pursuing further training. No statistically significant differences existed between the two groups; however, scores increased within each group, and both approaches resulted in improved scores. This increase supports the recommendation by More et al for the use of multimedia for mass-fatality training.3 The lack of an advantage for the multimedia group may be explained by what Jonassen et al describe as a need to focus on the student rather than on the media.24 Jonassen et al state that “any reasonable interpretation of an instructional medium should be more than a mere vehicle.”24 They explain that educators should not assume that, by simply adding media, the student’s cognitive processes will integrate the new information with the old.24 Students may not have been fully engaged with the media during the lesson. Also, while multimedia modules are designed to allow students to repeat, interrupt, and resume the lesson at will, it is a large assumption that they will take advantage of those benefits. Students may choose to “cram” with technology and multimedia-based modules. Another explanation could be the small sample size (n = 21 in each group), which may have limited statistical power. In general, the results of our evaluation revealed that mass-fatality training can be offered through a multimedia approach. For the clinical competency-based radiology lab assessment, both groups had similar mean scores, differing by only 0.01. In radiology education, a multimedia module with visual and audio demonstrations, supplemented by face-to-face, instructor-guided lab demonstrations for skill acquisition, may produce improved lab scores in the future. The educational modules allowed students to view demonstrations as needed prior to the lab, for review of difficult radiology concepts. This study supports the idea that, for difficult hands-on skills such as radiographic technique, media can be used to enhance the learning process. These results support Stoeckel et al’s and Hermsen and Johnson’s recommendations for simulated exercises that allow students to practice clinical competencies such as the use of portable radiology equipment and postmortem radiographic imaging.5,6 This study has some general limitations that preclude generalizing results to practice. Threats to the validity of the pre- and post-tests include the small sample size and the use of a convenience sample of dental hygiene students from an entry-level baccalaureate degree program. Because students were in the same program, it is possible that participants in the experimental group could have shown participants in the control group the multimedia educational module; participants could also have shared their clinical competency-based radiology lab experience with participants who had not yet taken that portion of the research study. The amount of study time is unknown because both educational modules were delivered online.
Future studies should include larger sample sizes with a diverse sample of dental and dental hygiene students, practicing dentists and dental hygienists, and other dental team members from various universities and colleges. Additionally, this study did not utilize a full-curriculum approach, because participants were evaluated based on one educational module and one attempt at the clinical competency-based radiology lab; researchers did not test long-term knowledge retention. Glotzer et al4 and More et al3 recommend a catastrophe-preparedness curriculum offered across multiple semesters by “supplementing the established curriculum with units of instruction.” Future research should identify educational methodologies that improve learning. Limitations of the pre- and post-tests include the use of only 15 multiple-choice questions; a more reliable instrument would include questions covering a wider span of information. Modifications in research design and implementation may be required to apply the instruction in different environments, including dental curricula or just-in-time training during an actual mass-fatality incident. Additionally, researchers were not able to test whether multimedia might have an impact on participants’ level of function during a mass-fatality incident; it is unknown whether a multimedia training approach would lead to better outcomes and recall in higher-stress situations. This study contributes to the dental hygiene literature by assessing the effectiveness of multimedia for mass-fatality training and radiographic imaging of dental remains specific to dental hygiene. Multimedia approaches have been described in dental publications and curricula; however, there are no peer-reviewed publications on what type of educational methodology should be used for mass-fatality training for dental hygienists.5,19 These findings, although based on a small sample size, demonstrated minimal differences between a multimedia and a low-media approach to mass-fatality training. A combined approach could be used to develop training modules specific to dental hygiene mass-fatality preparedness and response, with simulated lab exercises that allow students to practice clinical competencies beneficial for taking radiographs of simulated victim remains. Future research should include more diverse, multidisciplinary samples and longitudinal data. Dental hygienists have participated in mass-fatality incidents and show promise in acts of community service and volunteerism. Training in anticipation of a mass-fatality incident is important for increasing the number of skilled and deployable dental professionals for recovery efforts.10 As training applicable to dental hygiene is developed and tested, dental hygienists can continue to add to response capabilities during a mass-fatality incident. Additional research in this area could contribute to the identification of teaching methods to better prepare dental hygienists for a mass-fatality incident.
About the Authors
Tara L. Newcomb, BSDH, MS, is an Assistant Professor at the Gene W. Hirschfeld School of Dental Hygiene, Old Dominion University. Ann M. Bruhn, BSDH, MS, is an Assistant Professor at the Gene W. Hirschfeld School of Dental Hygiene, Old Dominion University. Loreta H. Ulmer, EdD, is a Senior Instructional Designer and faculty for the Center for Learning and Teaching, Old Dominion University.
Norou Diawara, PhD, is an Associate Professor in the College of Mathematics and Statistics, Old Dominion University. This research was supported by an Old Dominion University Faculty Innovator Grant.
References
1. Teahen P. Mass Fatalities: Managing the Community Response. 1st ed. Boca Raton, FL: CRC Press; 2012:421.
2. Brannon RB, Kessler HP. Problems in mass-disaster dental identification: a retrospective review. J Forensic Sci. 1999;44(1):123-127.
3. More FG, Phelan J, Boylan R, et al. Predoctoral dental school curriculum for catastrophe preparedness. J Dent Educ. 2004;68(8):851-858.
4. Glotzer DL, More FG, Phelan J, et al. Introducing a senior course on catastrophe preparedness into the dental school curriculum. J Dent Educ. 2006;70(3):225-230.
5. Stoeckel DC, Merkley PJ, McGivney J. Forensic dental training in the dental school curriculum. J Forensic Sci. 2007;52(3):684-686.
6. Hermsen KP, Johnson JD. A model for forensic dental education in the predoctoral dental school curriculum. J Dent Educ. 2012;76(5):553-561.
7. Von Wodtke M. Mind Over Media: Creative Thinking Skills for Electronic Media. New York, NY: McGraw-Hill; 1993.
8. Markenson D, DiMaggio C, Redlener I. Preparing health professions students for terrorism, disaster, and public health emergencies: core competencies. Acad Med. 2005;80(6):517-526.
9. Ferguson DA, Sweet DJ, Craig BJ. Forensic dentistry and dental hygiene: how can the dental hygienist contribute? Can J Dent Hyg. 2008;42(4):203-211.
10. Brannon RB, Connick CM. The role of the dental hygienist in mass disasters. J Forensic Sci. 2000;45(2):381-383.
11. Guay AH. The role dentists can play in mass casualty and disaster events. Dent Clin North Am. 2007;51(4):767-778.
12. Hinchliffe J. Forensic odontology, part 2: major disasters. Br Dent J. 2011;210(6):271-273.
13. Rawson RD, Nelson BA, Koot AC. Mass disaster and the dental hygienist: the MGM fire. Dent Hyg (Chic). 1983;57(4):12-18.
14. Berketa J, James H, Lake A. Forensic odontology involvement in fatality victim identification. Forensic Sci Med Pathol. 2012;8(2):148-156.
15. Avon SL. Forensic odontology: the roles and responsibilities of the dentist. J Can Dent Assoc. 2004;70(7):453-458.
16. Gambhir R, Kapoor D, Singh G, et al. Disaster management: role of dental professionals. Int J Med Sci Public Health. 2013;2(2):169-173.
17. Zohn HK, Dashkow S, Aschheim K, et al. The odontology victim identification skill assessment system. J Forensic Sci. 2010;55(3):788-791.
18. Petju M, Suteerayongprasert A, Thongpud R, Hassiri K. Importance of dental records for victim identification following the Indian Ocean tsunami disaster in Thailand. Public Health. 2007;121:251-257.
19. Meckfessel S, Stühmer C, Bormann K, et al. Introduction of e-learning in dental radiology reveals significantly improved results in final examination. J Craniomaxillofac Surg. 2011;39(1):40-48.
20. Mayer R. Multimedia Learning. 2nd ed. New York, NY: Cambridge University Press; 2004.
21. Mayer R, Fennell S, Farmer L, Campbell J. A personalization effect in multimedia learning: students learn better when words are in conversational style rather than formal style. J Educ Psychol. 2004;96(2):389-395.
22. Mayer R, Moreno R. A split-attention effect in multimedia learning: evidence for dual processing systems in working memory. J Educ Psychol. 1998;90:312-320.
23. Stegeman C, Zydney J. Effectiveness of multimedia instruction in health professions education compared to traditional instruction. J Dent Hyg. 2010;84(3):130-136.
24. Jonassen DH, Campbell JP, Davidson ME.
Learning with media: restructuring the debate. Educ Technol Res Dev. 1994;42:31-39.
<urn:uuid:7ec53130-847b-4829-a01e-f9cf0f07a9e3>
CC-MAIN-2022-33
https://adha.cdeworld.com/courses/20368-performance-of-dental-hygiene-students-in-mass-fatality-training-and-radiographic-imaging-of-dental-remains
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573623.4/warc/CC-MAIN-20220819035957-20220819065957-00698.warc.gz
en
0.927651
6,219
3.015625
3
A life-size youth, naked except for a shepherd’s hat and sandals, stands triumphant, one foot resting upon his foe’s severed head. The hand near his supporting leg holds a massive sword, while his other hand, placed at his hip, clasps a stone, presumably the one used to defeat the hulking warrior at his feet. This androgynous figure—cast in bronze by Donatello in the 1440s—is the adolescent King David, ancestor of Christ and hero of the Hebrew Bible. David famously defeated the giant Goliath, liberating his people armed only with a slingshot and the grace of God. His story was a popular subject for renaissance artists. David’s triumph of faith over brawn communicated anti-tyrannical political themes and epitomized the Christian message that God’s chosen people would inevitably prevail. While the subject is Judeo-Christian, the form of this figure communicates a different set of visual interests. Nowhere in the bible is David described as stripping down to his bare body to wage battle. David’s sensual nudity and his contrapposto pose reflect the forms of ancient Greek and Roman pagan sculpture rather than those of the medieval era. The work is fully free-standing and, like an ancient sculpture, was originally displayed upon an elevated column. This placement, as well as the scale and bronze medium, are also characteristic of ancient Greco-Roman art. Donatello’s David is an example of humanist interests crossing over into the realm of visual art: beginning in the thirteenth century, Italian buildings, sculptures, and paintings began to look increasingly like they did in the ancient Greco-Roman world, even if the subject matter, contexts, and functions were vastly different. Eager to serve the interests of their classically inclined patrons and to demonstrate their own ingenuity, visual artists explored new approaches to form inspired by surviving art and architecture from antiquity as well as ancient authors’ discussions of them. Humanism can be understood as an educational program rooted in the writings of ancient Greek and Latin authors that extolled the active life of the citizen and praised humanity’s capacity to achieve greatness through knowledge and free will. Humanism looked to antiquity for inspiration in reforming society and had a tremendous impact on all aspects of life in renaissance Italy—and Europe more broadly—from government to the arts.
What Did They See?
The Roman Emperor Nero built his Domus Aurea (Latin for “Golden House”), an extravagant palace in the heart of Rome, after the great fire of 64 C.E., which had destroyed a large part of the city. The palace was considered embarrassingly decadent, and was ultimately dismantled and built over by his successors. Few ancient paintings survived for renaissance people to see. Excavations of Nero’s Golden House in the 1480s did reveal fanciful paintings of animals, plants, people, architecture, and hybrids of these, which came to be called “grotesques”. However, the main repositories of Roman painting known to us, such as the extensive decorations at Pompeii, were not excavated until long after the renaissance, and no painting from ancient Greece was known to have survived. Although the term “grotesque” has negative connotations in modern English, the origin of the word is the Italian grotta, or cave.
When Nero’s Golden House was discovered underground in late-fifteenth-century Rome, it was thought to be a cave. The inventive and highly decorative painted forms there were thus called “grottesche.” Archeological pursuits went hand-in-hand with humanist scholarship. Perhaps the most famous example is the Laocoön, a 1st-century C.E. marble sculpture described by the ancient author Pliny as the greatest of all works of art. Excavated in 1506, the sculpture was discovered in the ruins of the palace of the Roman Emperor Titus—just where Pliny said it would be. Much of the ancient art known in the renaissance, however, was not excavated but already part of the visual environment: buildings and sculptures, mostly in a ruinous or fragmented state. Roman sarcophagi were used as tombs, fountains, and other forms of decoration. Massive structures like the Pantheon and Colosseum, numerous friezes and bronzes, triumphal arches, and scores of ancient coins had never ceased to be part of the Italian visual world. Still, until the rise of humanism in the late medieval period, interest in these works was sporadic. The Belvedere Torso, for example, a fragmentary work from the 1st century B.C.E., is the epitome of an ancient sculpture that inspired renaissance artists such as Michelangelo, who adapted it for his famed ignudi on the Sistine Chapel ceiling. The work had likely been unearthed for at least a century before anyone paid much attention to it. And, while Greco-Roman antiquity was revered, its material remains had not always been treated with reverence. Renaissance artists not only drew inspiration from ancient works but also played an important role in preserving them. Raphael famously pleaded with Pope Leo X to preserve ancient sites in Rome and protect them from pillaging. Artists also drew inspiration from written descriptions of art from antiquity. The detailed descriptions (ekphrases) of art by ancient authors like Pliny, Lucian, and Philostratus were influential on renaissance visual culture. Botticelli’s Calumny of Apelles is perhaps the most famous example; the artist re-creates a lost work by the Greek artist Apelles, described by Lucian (whose text was widely translated in the fifteenth century). In his 1435/6 treatise On Painting, Leon Battista Alberti specifically singles out this ancient image—known only through verbal description—as a worthy source for artists and a model for narrative art. While it is helpful for a broad overview such as this to use a blanket term like “Antique” to designate the cultural production of the ancient Greek and Roman worlds, doing so is misleading, as it suggests there was a single artistic tradition, passed from Greece to Rome, that constituted art before the Christian era. In fact, what we designate as “ancient art” includes a vast range of subjects and styles. Renaissance artists responded to different facets of ancient art at different times, often due to their own or their patron’s interests or to broader stylistic trends. Donatello, for example, drew upon Etruscan, Early Christian, and Neo-Attic forms, as well as ancient Greek and Roman art from various periods, throughout his career. Scholars note that artists gravitated to those ancient art forms that best aligned with their personal or regional world view. Venice, for instance, with its deep ties to the Byzantine visual tradition and nostalgia for terra firma, embraced the world of romantic, erotic landscapes evoked by ancient pastoral poetry and realized in works like Giorgione’s Sleeping Venus.
Venice has very different topography from her landlocked peers like Florence and Rome. Floating in the waves of the Adriatic sea, Venice is a cluster of islands connected by bridges and canals. While the Republic of Venice held vast mainland territories, the capital city was completely detached from the terra firma, the mainland of the Italian peninsula, and accessible only by boat until the nineteenth century. Many scholars attribute the early interest in landscape, sensuous beauty, and inventive color technique of Venetian painting to the city’s placement among the waves.
Old Forms, New Meaning
Greco-Roman art comprised a rich language of gestures and symbols to visually communicate to broad audiences, and it was adapted to new contexts. Architects designed buildings that re-used the vocabulary of ancient structures. Michelozzo’s inventive designs for the Medici Palace in Florence (begun 1444) combine ancient and modern (medieval) forms in entirely novel ways. Topped with a massive overhanging cornice (classical), the structure’s second and third stories display round-arched (classical) and biforate windows (medieval), while the heavy rustication of the ground floor recalls the Palazzo dei Priori, Florence’s (medieval) town hall. Christian spaces like the church of Sant’Andrea in Mantua (begun 1470), designed by Alberti, creatively combine the form of an ancient triumphal arch—like the Arch of Titus—with that of a temple front—like the Pantheon—while the flanking colossal Corinthian pilasters recall the ancient Roman arches of Septimius Severus and Constantine in the Roman Forum. These architectural forms are given new meaning by being re-used in a renaissance context; the hybrid is ultimately new and not found in antiquity—old forms translated into new forms and given new meanings. At Sant’Andrea, the massive arch, a structure used to celebrate military triumph in ancient Rome, becomes a symbol of Christian triumph.
Bodies, Bodies Everywhere
The idealized nude bodies of ancient figures, with their careful rendering of anatomical detail, appealed to renaissance Christians who understood humanity to be created in the image of God. Brunelleschi’s polychromed wooden crucifix in Santa Maria Novella shows Christ’s life-sized body rendered with anatomical precision comparable to that of ancient sculptures, giving powerful physical presence to the Christian savior. This kind of palpable realism aligned with the new, urban religiosity of the early renaissance that promoted physically and emotionally charged piety. The reality of Christ’s suffering body would have encouraged sympathy and compassion in worshipers. Anatomical specificity, like that demonstrated in Brunelleschi’s crucifix, reminded viewers of the reality of Christ’s humanity just as it conjured the idealized anatomy of ancient art. At Orsanmichele, a public granary and shrine centrally located in Florence, the leading guilds of the city were tasked with providing large-scale sculptures of their patron saints to adorn the building’s exterior. Conscious of the high visibility of this site, these powerful institutions sought to one-up each other by commissioning the most avant-garde work from the city’s leading sculptors, including Nanni di Banco, Lorenzo Ghiberti, and Donatello. Cutting edge here meant sculpture all’antica, or “after the antique.” Works were carved from marble or cast in bronze on a massive scale not common since the Roman era.
Ghiberti’s Saint John the Baptist, standing a looming 8’4”, was the first monumental bronze figure of the renaissance. Created for the guild of cloth finishers and merchants in foreign cloth, this costly bronze statue advertised the wealth of the sponsoring guild. Ghiberti’s saint also marks the development towards free-standing figural sculpture, a tradition popular in the ancient world but largely unused in the medieval era. While Saint John does occupy an architectural niche, the figure is rendered wholly in the round and stands within the space rather than being attached to the structure’s walls (see the figures on the portal of Chartres Cathedral for an example). This movement towards fully free-standing sculpture, or sculpture in the round, became a hallmark of the renaissance tradition, familiar to us in the David sculptures of Donatello and Michelangelo. The new figurative style embraced by sculptors like Ghiberti and Donatello was also adopted by artists working in two-dimensional media. In Masaccio’s cycle of frescoes dedicated to the life of Saint Peter in Florence’s Brancacci Chapel, the figures are modeled in light and shade to produce sculptural effects that suggest their three-dimensionality. In the scene of the Tribute Money, the figures occupy a firm ground-line and their volumetric bodies cast regular shadows as though lit by a light source from the right. The figure types Masaccio develops for his apostles are relatable, rugged men of the street, like Donatello’s Saint Mark at Orsanmichele, and evoke the strong psychological presence found in images of Roman senators. Each figure is presented with a distinct human personality, responding to the unfolding scene through varied facial expressions.
Creating “Real” Space
The development of one-point linear perspective, a key invention of the early Italian renaissance, was also informed by humanism. Brunelleschi, the famed architect of the Florentine Duomo, is credited with devising this system, inspired by his careful study of ancient Roman monuments and medieval Arabic and Latin theories of optics. Brunelleschi’s theories were codified by Alberti in his treatise on painting, which gave detailed instructions for constructing mathematically defined space so that painters might create the appearance of three dimensions in their art. To give credibility to this type of contrived spatial construction, Alberti draws upon the writing of the ancient author Vitruvius, whose widely read 1st-century B.C.E. treatise, On Architecture, celebrated the illusionistic mathematical space of ancient Roman painting. One of the critical developments of the fifteenth century was the transformation of the subjects of ancient mythology into large-scale imagery. The monumental scale previously reserved primarily for religious subjects was now employed for the gods and heroes of antiquity. In the Hall of the Months at the Palazzo Schifanoia in Ferrara, the artists Francesco del Cossa, Cosmè Tura, and Ercole de’ Roberti frescoed the walls of Duke Borso d’Este’s pleasure palace with large-scale scenes that combined images of contemporary court life with pagan subjects. Divided into twelve sections separated by Corinthian columns, each set of images includes, at the lowest register, a scene from Borso’s court—such as the duke dispensing justice or interacting with courtiers—above which is the zodiac sign for the month depicted and, at the highest register, a triumphal depiction of the pagan god associated with that month.
Throughout the imagery at Schifanoia are recognizable portrait likenesses of Duke Borso. While incorporated into a larger narrative, these likenesses do point to another key humanist-inspired development of the early renaissance: the genre of independent portraiture.
Living Sitters Depicted in Art
While popular in ancient Greece and Rome, portraiture had all but disappeared in European art until the fifteenth century. The many surviving portrait busts of Roman senators and the heads of emperors on imperial coins fascinated renaissance audiences and provided ready models for the rising genre. Initially used to record the features of the ruling class, portraiture spread in popularity and became a key form of commemoration for the rising merchant and artisan classes. An early example is Pisanello’s portraits of Leonello d’Este, which show the Ferrarese ruler in profile and bust-length, an approach drawn from the tradition of imperial profile portraits on ancient coinage. By the late fifteenth century the profile view was largely displaced by the three-quarter view (initially popular in northern Europe), seen in Botticelli’s Portrait of a Man with a Medal of Cosimo de’ Medici. Turning the sitter towards the viewer encouraged a new kind of viewing experience, one that suggested a reciprocity with the audience—the viewer looks upon the sitter who appears to look back.
Humanism and the Rising Status of the Artist
Humanist interests informed another key development of the renaissance: the rising social status of the visual artist. The authors of antiquity celebrated the creations of their own leading artists and described the honors paid to them, giving renaissance patrons reason to encourage artistic ingenuity and artists worthy models to emulate. Traditionally seen as humble craftsmen and valued as manual workers (however skillful), visual artists became increasingly appreciated for their intellectual abilities during the early renaissance. Artists commanded ever greater social prestige, so that by the sixteenth century it meant something to own a work by Raphael, Michelangelo, or Mantegna. Works of art were seen as expressions of individual ingenuity, valued for the virtues attributed to their creators.
Renaissance Italy and Beyond
Humanism and its reflection in contemporaneous art were certainly not confined to the Italian world. Artists north of the Alps and beyond Europe also responded to the forms of ancient art, but as in the Italian peninsula, they did so in ways that reflected their personal and regional interests. There is no single renaissance style inspired by the “Antique” any more than there was a single antique world. Much the same can be said for our own, modern-day artistic engagements with the Greek and Roman past. The whitewashed world of the 1959 movie Ben-Hur or the heavily muscled hunks of the 2006 epic 300 say much more about us than they do about the ancient past. Such was always the case.
Originally published by Smarthistory, 08.01.2021, under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.
<urn:uuid:7cecf158-91e6-4483-a054-2f996967ff3e>
CC-MAIN-2022-33
https://brewminate.com/humanism-in-italian-renaissance-art/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.95/warc/CC-MAIN-20220817032054-20220817062054-00495.warc.gz
en
0.938028
4,115
3.53125
4
black hole, in astronomy, celestial object of such extremely intense gravity that it attracts everything near it and in some instances prevents everything, including light, from escaping. The term was first used in reference to a star in the last phases of gravitational collapse (the final stage in the life history of certain stars; see stellar evolution) by the American physicist John A. Wheeler. Gravitational collapse begins when a star has depleted its steady sources of nuclear energy and can no longer produce the expansive force, a result of normal gas pressure, that supports the star against the compressive force of its own gravitation. As the star shrinks in size (and increases in density), it may assume one of several forms depending upon its mass. A less massive star may become a white dwarf, while a more massive one would become a supernova. If the mass is less than three times that of the sun, it will then form a neutron star. However, if the final mass of the remaining stellar core is more than three solar masses, as shown by the American physicists J. Robert Oppenheimer and Hartland S. Snyder in 1939, nothing remains to prevent the star from collapsing without limit to an indefinitely small size and infinitely large density, a point called the “singularity.” At the point of singularity the effects of Einstein's general theory of relativity become paramount. According to this theory, space becomes curved in the vicinity of matter; the greater the concentration of matter, the greater the curvature. When the star (or supernova remnant) shrinks below a certain size determined by its mass, the extreme curvature of space seals off contact with the outside world. The place beyond which no radiation can escape is called the event horizon, and its radius is called the Schwarzschild radius after the German astronomer Karl Schwarzschild, who in 1916 postulated the existence of collapsed celestial objects that emit no radiation. For a star with a mass equal to that of the sun, this limit is a radius of only 1.86 mi (3.0 km). Even light cannot escape a black hole, but is turned back by the enormous pull of gravitation. It is now believed that the origin of some black holes is nonstellar. Some astrophysicists suggest that immense volumes of interstellar matter can collect and collapse into supermassive black holes, such as are found at the center of large galaxies. The British physicist Stephen Hawking has postulated still another kind of nonstellar black hole. Called a primordial, or mini, black hole, it would have been created during the “big bang,” in which the universe was created (see cosmology). Unlike stellar black holes, primordial black holes create and emit elementary particles, called Hawking radiation, until they exhaust their energy and expire. It has also been suggested that the formation of black holes may be associated with intense gamma ray bursts. Beginning with a giant star collapsing on itself or the collision of two neutron stars, waves of radiation and subatomic particles are propelled outward from the nascent black hole and collide with one another, releasing the gamma radiation. Also released is longer-lasting electromagnetic radiation in the form of X rays, radio waves, and visible wavelengths that can be used to pinpoint the location of the disturbance.
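The figure of 1.86 mi (3.0 km) quoted above for a solar-mass object follows from the Schwarzschild radius formula r_s = 2GM/c². A quick numerical check, offered here only as an illustrative aside with rounded constants:

```python
# Quick check of the Schwarzschild radius r_s = 2GM/c^2 for one solar mass
# (rounded constants; illustrative only).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

r_s = 2 * G * M_sun / c**2
print(f"r_s = {r_s / 1000:.2f} km = {r_s / 1609.34:.2f} mi")
# Prints roughly 2.95 km (about 1.8 mi), matching the ~3.0 km quoted above.
```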
Because light and other forms of energy and matter are permanently trapped inside a black hole, it can never be observed directly. However, a black hole can be detected by the effect of its gravitational field on nearby objects (e.g., if it is orbited by a visible star), during the collapse while it was forming, or by the X rays and radio frequency signals emitted by rapidly swirling matter being pulled into the black hole. The first discovery (1971) of a possible black hole was Cygnus X-1, an X-ray source in the constellation Cygnus. In 1994 astronomers employing the Hubble Space Telescope announced that they had found conclusive evidence of a supermassive black hole in the M87 galaxy in the constellation Virgo. Since then others have been found, and in 2011 astronomers announced the discovery of one, in NGC 4889 in the constellation Coma, whose mass may be as great as 21 billion times that of the sun. It is now believed that in most cases supermassive black holes are found at the center of spiral and elliptical galaxies. Sagittarius A*, a compact radio source at the center of the Milky Way is believed to be a supermassive black hole with a mass about 4.6 million times that of the sun. The Chandra observatory has also discovered that massive black holes were associated with galaxies that existed 13 billion years ago. The first evidence (2002) of a binary black hole, two supermassive black holes circling one another, was detected in images from the orbiting Chandra X-ray Observatory. Located in the galaxy NGC6240, the pair are 3,000 light years apart, travel around each other at a speed of about 22,000 mph (35,415 km/hr), and have the mass of 100 million suns each. As the distance between them shrinks over 100 million years, the circling speed will increase until it approaches the speed of light, about 671 million mph (1,080 million km/hr). The black holes will then collide spectacularly, spewing radiation and gravitational waves across the universe. Subsequently, the Laser Interferometer Gravitational Wave Observatory (since 2015) and the European Gravitational Observatory (since 2017) several times have detected gravitational waves that resulted from the merging of other black hole pairs. In 2019, researchers with the Event Horizon Telescope, a very long baseline array of radio telescopes, imaged the halo of gas and dust outlining the black hole at the heart of the M87 galaxy. See S. W. Hawking, Black Holes and Baby Universes and Other Essays (1994); P. Strathern, The Big Idea: Hawking and Black Holes (1998); J. A. Wheeler, Geons, Black Holes, and Quantum Foam: A Life in Physics (1998); H. Falcke and F. W. Hehl, The Galactic Black Hole: Studies in High Energy Physics, Cosmology and Gravitation (2002); M. Bartusiak, Black Hole (2015).
black hole
An object so collapsed that its escape velocity exceeds the speed of light. It becomes a black hole when its radius has shrunk to its Schwarzschild radius and light can no longer escape. Although gravity will make the object shrink beyond this limit, the ‘surface’ having this critical value of radius – the event horizon – marks the boundary inside which all information is trapped. Calculations, however, indicate that space and time (see spacetime) become highly distorted inside the event horizon, and that the collapsing object's ultimate fate is to be compressed to an infinitely dense singularity at the center of the black hole.
It has also been shown that the distortion of spacetime just outside the Schwarzschild radius causes the production of particles and radiation that gradually rob the black hole of energy and thus slowly diminish its mass. This Hawking radiation (Stephen Hawking, 1974) depends inversely on the black hole's mass, so the most massive black holes ‘evaporate’ most slowly. A black hole of the Sun's mass would last 10⁶⁶ years. Once matter (or antimatter) has disappeared into a black hole, only three of its original properties can be ascertained: the total mass, the net electric charge, and the total angular momentum. Because all black holes must have mass, there are four possible types of black hole: a Schwarzschild black hole (1916) has no charge and no angular momentum; a Reissner–Nordström black hole (1918) has charge but no angular momentum; a Kerr black hole (1963) has angular momentum but no charge; a Kerr-Newman black hole (1965) has charge and angular momentum. (The dates in brackets indicate when the named mathematician(s) solved the equations of general relativity for these particular cases.) In astrophysics the simple Schwarzschild solution is often used, but real black holes are almost certainly rotating and have very little electric charge, so that the Kerr solution should be the most applicable. The most promising candidates for black holes are massive stars that explode as supernovae, leaving a core in excess of 3 solar masses. This core must undergo complete gravitational collapse because it is above the stable limit for both white dwarfs and neutron stars. Once formed, a black hole can be detected only by its gravity. Finding black holes only a few kilometers across (the size of the event horizon for a single-star black hole) is exceedingly difficult, but chances are increased if the black hole is a member of a close binary system. If the components of a binary system are close enough, mass transfer can occur between the primary star and its more compact companion (see equipotential surfaces). Matter will not fall directly on to the companion, however, for it has too much angular momentum; instead it forms a rapidly spinning disk – an accretion disk – around the compact object. If the latter is a black hole considerable energy can be produced, predominantly at X-ray wavelengths, as matter in the accretion disk loses angular momentum and spirals in. When the accreting matter is unable to cool efficiently, most of the energy generated in the disk due to viscosity is not radiated away, but instead stored in the gas as internal energy and advected inward in an advection-dominated accretion flow onto the central object. Candidates for black holes in a binary system fall into two classes, massive and low-mass black-hole binaries, depending on the mass of the companion star. The first massive black-hole candidate to be identified was the X-ray binary Cygnus X-1, comprising a 20 solar mass B0 supergiant accompanied by an invisible companion with a mass about 10 times that of the Sun. This massive nonluminous object is probably a black hole emitting X-rays from its accretion disk. Another promising massive black-hole candidate is the X-ray binary LMC X-3 in the Large Magellanic Cloud. While the probable mass of the compact objects in this system is of the order of 10 solar masses, rigorous lower limits are as low as 3 solar masses (possibly even lower), marginally consistent with the maximum theoretically predicted mass of a neutron star.
Even better evidence for a stellar-mass black hole comes from some of the low-mass black-hole candidates, in particular A0620-00 in Monoceros and V404 Cygni. Rigorous limits on the masses of the compact objects in A0620-00 and V404 Cygni are 3 and 6 solar masses, respectively. See also gamma-ray transients. Supermassive black holes of 10⁶ to 10⁹ solar masses probably lie in the centers of some galaxies and give rise to the quasar phenomenon and the phenomena of other active galaxies. If a huge black hole is able to form and to capture sufficient gas and/or stars from its surroundings, the rest-mass energy of infalling material can be converted into radiation or energetic particles. There is now observational evidence to support this hypothesis: the dynamical motions of stars and ionized gas in the cores of nearby galaxies show that they are responding to a strong gravitational field, in excess of that expected from the number of stars accounting for the light at the galaxy center. These motions are thought to be due to the presence of the supermassive black hole (see Seyfert galaxy; Virgo A). Other strong evidence for the presence of supermassive black holes is found from recent observations of an extremely broad iron fluorescence line in the X-rays from several Seyfert galaxies; this line has a very particular shape, revealing its origin in an accretion disk very close to the central black hole. At the other mass extreme are the more speculative mini black holes, weighing only 10¹¹ kg and with radii of 10⁻¹⁰ meters. These could have formed in the highly turbulent conditions existing after the Big Bang. They create such intense localized gravitational fields that their Hawking radiation makes them explode within the lifetime of the Universe; their final burst of gamma rays and microwaves should be detectable but has not yet been found.
black hole
a celestial object that is formed as a result of the relativistic gravitational collapse of a massive body. In particular, the evolution of a star whose mass at the moment of collapse exceeds some critical value may terminate in catastrophic gravitational collapse. The value of the critical mass is not precisely determined and, depending on the equation of state of matter used, ranges from 1.5 to 3 solar masses (M☉). For any equation of state of matter, the general theory of relativity predicts that no stable equilibrium exists for cold stars of several solar masses. If, after a star becomes unstable, not enough energy is released to halt the collapse or to cause a partial explosion after which the remaining mass would be less than the critical mass, the central portions of the star collapse and, in a short time, reach the gravitational radius r_g. No forces whatsoever can prevent the further collapse of a star if the radius of the star shrinks down to r_g, which is also known as the Schwarzschild radius and is the radius of a sphere whose surface is called the event horizon. A fundamental property of the event horizon is that no signals emitted from the surface of the star and reaching the event horizon can escape from the region inside that horizon. Thus, as a result of the gravitational collapse of a massive star, a region in space-time is formed from which no information whatsoever about physical processes occurring within the region can emerge. A black hole has a gravitational field whose properties are determined by the hole’s mass, angular momentum, and—if the collapsing star was electrically charged—electric charge.
At large distances, the gravitational field of a black hole is virtually indistinguishable from the gravitational field of a normal star. In addition, the motion of other objects that interact with a black hole at large distances is governed by the laws of Newtonian mechanics. Calculations show that a region known as the ergosphere, which is bounded by a surface called the static, or stationary, limit, should exist outside the event horizon of a rotating black hole. The attractive force that a black hole exerts on a stationary object situated in the ergosphere tends to infinity. However, the attractive force is finite if the object has an angular momentum whose direction coincides with that of the black hole's angular momentum. Therefore, any particles that happen to be in the ergosphere will revolve around the black hole. The presence of an ergosphere may lead to energy losses by a rotating black hole. In particular, energy losses are possible in the case where some object that has entered the ergosphere breaks up (for example, as a result of an explosion) into two fragments near the event horizon of the black hole. In this case, one of the fragments continues to fall into the black hole, but the other fragment escapes from the ergosphere. The parameters of the explosion may be such that the energy of the fragment that escapes from the ergosphere is higher than the energy of the original object. The additional energy in this case is drawn from the rotational energy of the black hole. As the angular momentum of a rotating black hole decreases, the static limit comes closer to the event horizon; when the angular momentum is zero, the static limit and the event horizon coincide and the ergosphere disappears. Owing to the effects of the centrifugal force of rotation, the rapid rotation of a collapsing object prevents the formation of a black hole. Therefore, a black hole cannot have an angular momentum that is greater than some extreme value.

Quantum-mechanical calculations show that particles—such as photons, neutrinos, gravitons, and electron-positron pairs—may be produced in the strong gravitational field of a black hole. As a result, a black hole radiates like a blackbody with an effective temperature of T = 10⁻⁶ (Mʘ/M) °K, where M is the mass of the black hole, even in cases where no matter whatsoever falls into the hole. The energy of the radiation is drawn from the energy of the black hole's gravitational field; as a result, the mass of the black hole decreases with time. However, owing to their low efficiency, the quantum radiation processes are unimportant for massive black holes, which are formed as a result of stellar collapse.

In the early stages of the evolution of the universe, which were hot and ultradense, black holes with masses ranging from 10⁻⁵ g to a solar mass or higher may have been formed as a result of an inhomogeneous distribution of matter. In contrast to the black holes that are formed by collapsed stars, these objects are called primordial black holes. Since quantum radiation processes reduce the mass of a black hole, all primordial black holes with a mass of less than 10¹⁵ g should have evaporated by the present time. The intensity and effective temperature of black-hole radiation increase as the mass of a black hole decreases. Therefore, in the final stage, the evaporation of a black hole with a mass of the order of 3 × 10⁹ g would be an explosion accompanied by an energy release of 10³⁰ erg in 0.1 sec.
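The inverse dependence of the radiation temperature on mass, and the 10¹⁵ g evaporation threshold for primordial black holes, both follow from the standard Hawking formulas (a sketch of textbook results, not taken from this entry; numerical prefactors depend on which particle species are emitted):

\[
  T_{H} = \frac{\hbar c^{3}}{8\pi G M k_{B}} \;\propto\; \frac{1}{M},
  \qquad
  t_{\mathrm{evap}} \;\propto\; M^{3},
\]

so a solar-mass hole survives for roughly \(10^{66}\)–\(10^{67}\) years, while a hole of initial mass of order \(10^{15}\) g evaporates on a timescale comparable to the present age of the Universe.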
Primordial black holes with a mass of greater than 10¹⁵ g have remained virtually unchanged. The detection of primordial black holes on the basis of their radiation would make it possible to draw important conclusions about the physical processes that occurred in the early stages of the evolution of the universe.

The search for black holes in the universe is a task of current interest in modern astronomy. Searches are carried out on the assumption that black holes may be the invisible components of certain binary star systems. However, this inference is not definite, since the normal star in a binary system may be invisible against the higher luminosity of the second component. Another method of identifying black holes in binary systems is based on the radiation emitted by matter flowing from the companion, which is a normal star, to the black hole. In this case, a disk consisting of matter flowing to the black hole is formed near the hole; the layers of the disk move around the hole with various velocities. Owing to friction between adjacent layers, the matter in the disk is heated to tens of millions of degrees. The inner regions of the disk emit energy in the X-ray region of the electromagnetic spectrum. The same type of radiation is produced in the case where a binary system contains a neutron star rather than a black hole. However, a neutron star cannot have a mass higher than some limiting value. As a result of space studies, a large number of X-ray sources in binary systems have been discovered. The X-ray source Cygnus X-1 is the most likely candidate for a black hole. In this binary system, the mass of the X-ray source, which may be estimated from the observed orbital velocity of the optical star and from Kepler's laws, exceeds 5 Mʘ, that is, is higher than the limiting mass for a neutron star. The hypothesis has also been advanced that supermassive black holes—that is, black holes with a mass M ≃ 10⁶–10⁸ Mʘ—may be located in the nuclei of active galaxies and in quasars. In this case, the activity of the active galactic nuclei and quasars is attributed to the infall of ambient gas onto the black hole.

N. I. SHAKURA

black hole [¦blak ′hōl] (computing) In a lazy, graph-reduction implementation of a functional language, the root of an expression that is under evaluation may be overwritten with a "black hole". If the expression depends on its own value, e.g. x = x + 1, then it will try to evaluate the black hole, which will usually print an error message and abort the program. A secondary effect is that, once the root of the expression has been black-holed, parts of the expression which are no longer required may be freed for garbage collection. Without black holes the usual result of attempting to evaluate an expression which depends on itself would be a stack overflow. If the expression is evaluated successfully then the black hole will be updated with the value.
Expressions such as ones = 1 : ones are not black holes, because the list constructor (:) is lazy, so the reference to ones is not evaluated when evaluating ones to WHNF (weak head normal form).

blackholing – Discarding packets in a network based on some criterion. For example, an ISP might blackhole packets coming from a known spammer or from a file-sharing application such as BitTorrent. An ISP that has its own packet-switched phone service may blackhole VoIP packets from another vendor.

Blacklist of Internet Advertisers – A popular and frequently cited report that describes the offending activities of spammers who routinely distribute large mailings via email or post unwelcome advertising on newsgroups. Updated regularly and posted on several sites, the Blacklist can be found by searching for it by name with your favorite search engine. Also visit http://spam.abuse.net. See ROKSO, blackholing and spam.

spam filter – A software routine that deletes incoming spam or diverts it to a "junk" mailbox (see spam folder). Also called "spam blockers," spam filters are built into a user's email program. They are also built into or added onto a mail server, in which case the spam never reaches the user's mailbox. See spam, email program and mail server. Spam filtering can be configured to trap messages based on a variety of criteria, including the sender's email address, specific words in the subject or message body, or the type of attachment that accompanies the message.

Blacklists and Whitelists
Address lists of habitual spammers, known as "blacklists," are continuously updated by various organizations and ISPs. Mail from blacklist addresses is rejected at the mail server. See Blacklist of Internet Advertisers. The opposite approach is taken to ensure that bona fide mail is not automatically rejected. Users or network administrators can create a list of allowed email addresses, known as a "whitelist," and the mail client or mail server will always accept mail from whitelist addresses. Mail client software typically treats the user's address book as a whitelist, presuming that mail coming from an address maintained by the user is always valid.

Analyze the Content
In order to analyze the content more effectively and not trash a real message, sophisticated spam filters use artificial intelligence (AI) techniques that look for key words and attempt to decipher their meaning in sentences (see Bayesian filtering). See spam trap, spam relay and spamdexing. See also ad blocker and popup blocker.
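As a minimal illustration of the blacklist/whitelist logic described above (a hypothetical sketch only: the addresses and keywords are invented, and real filters add Bayesian content analysis and much more):

```haskell
import Data.Char (toLower)
import Data.List (isInfixOf)

data Verdict = Accept | Junk deriving (Show)

-- Hypothetical address lists; in practice the whitelist is often the
-- user's address book and the blacklist comes from an ISP or DNSBL feed.
whitelist, blacklist :: [String]
whitelist = ["friend@example.com"]
blacklist = ["spammer@example.com"]

spamWords :: [String]
spamWords = ["free money", "act now"]   -- crude keyword criteria

classify :: String -> String -> Verdict
classify sender subject
  | sender `elem` whitelist = Accept    -- whitelisted mail is always accepted
  | sender `elem` blacklist = Junk      -- blacklisted senders are rejected
  | any (`isInfixOf` map toLower subject) spamWords = Junk
  | otherwise               = Accept

main :: IO ()
main = mapM_ (print . uncurry classify)
  [ ("friend@example.com",  "Lunch tomorrow?")
  , ("spammer@example.com", "ACT NOW")
  , ("unknown@example.org", "Free money inside") ]
```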
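Returning to the graph-reduction sense of "black hole" in the earlier entry, a small Haskell sketch shows the contrast between a self-dependent expression and a lazily guarded one (the `<<loop>>` message is how GHC's runtime typically reports a detected black hole; other implementations may simply loop or overflow the stack):

```haskell
-- A self-dependent definition with nothing lazy guarding the recursion:
-- forcing it to WHNF needs its own value, so the runtime re-enters the
-- black-holed root and (in GHC) typically raises "<<loop>>".
loopy :: Int
loopy = loopy + 1

-- A self-referential definition guarded by the lazy (:) constructor:
-- reducing it to WHNF only builds the outermost cons cell, so it is a
-- usable infinite list, exactly as noted for ones above.
ones :: [Int]
ones = 1 : ones

main :: IO ()
main = do
  print (take 5 ones)   -- [1,1,1,1,1]
  print loopy           -- typically aborts with "<<loop>>"
```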
<urn:uuid:9029e86a-2da3-4c1a-994d-e7e9ed77e79f>
CC-MAIN-2022-33
https://encyclopedia2.thefreedictionary.com/Hypermass
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00299.warc.gz
en
0.921872
5,035
3.90625
4
Automotive means "relating to cars and other vehicles." In many places, it is very difficult to get from one place to another without a car. In these places, almost everyone owns a car. Cars are a great convenience for travel, and larger vehicles make it possible for stores to move a large amount of goods from place to place. But cars are also complicated – they require a lot of care. So there are many jobs related to cars. Some people make them, some people sell them, some people clean them, and some people fix them. To do any of these jobs, there are a lot of words related to cars that are important to know. This lesson is part of the English Vocabulary Illustrated Word Lists section. Let's get started! Cars are complicated and expensive machines. So there are a lot of places related to their creation, care, and repair. People and businesses who buy cars get them from a place that sells cars. People need to care for their cars, so there are several different kinds of businesses that provide fuel for cars and provide services to keep cars looking nice and running well. And cars also need repair when parts of the car stop working or when the car is in an accident. All of these services happen at different businesses. If you are going to be working with cars, you should be familiar with the variety of businesses that take care of cars in one way or another. A car dealership is a place where cars are sold. A car factory is a place where cars are made. A car wash is a business that washes cars. The washing might be done by people or by machines. A repair shop fixes cars. A body shop is a type of car repair shop that focuses on the outside of the car. Most cars need gas in order to run. A gas station is a place where gas can be bought. A service station is a gas station that also repairs cars. The tire is a thick rubber ring that connects the car to the ground. Cars normally have four tires. A spare tire is an extra tire that is kept in the car in case one of the regular tires cannot be used. It is usually smaller and weaker than the regular tires and is only meant to be used for a short amount of time until the regular tire can be fixed or replaced. The "tread" is the pattern on a tire that makes the tire safer by helping it grip the road better. A hubcap is a round piece of metal that goes over the wheel of a car. The mudflap is a piece of thick plastic that hangs behind the rear tires of a car in order to stop mud and water from splashing up. A jack is a metal device that is placed under the car in order to raise the car up. This is usually done if one of the tires on the car needs to be changed. If a car's battery is not working, jumper cables are attached to the battery and also to a battery in another car that works. This makes it possible to transmit electricity from the working battery to the other battery. The trunk is the space in the back of a car where boxes and other items can be placed. A shock absorber is found on each wheel of a car. It makes it so that the people in the car cannot feel all of the bumps on the road. The hood covers the front part of the car. The engine is usually located under the hood. The license plate is a piece of metal with numbers and letters on it. There is usually one on the front and the back of the car. The government usually requires each car to have a license plate so the car can be identified. The bumper is the metal or plastic bar at the front and back of the car. It is supposed to limit the damage if the car is hit or if it hits something else. 
Headlights are lights on the front of the car that a driver can turn on at night to make it easier to drive in the dark. Cars also usually have a "bright" setting for headlights. Taillights are lights on the back of a car that the driver can turn on at night to make it easier for other drivers to see the car. The brakelights are red lights on the back of the car that are lit when the driver presses on the brakes so that other drivers will know that the car is slowing down or stopping. The "body" of a car refers to the painted metal sheets that make up the outside of the car. The gas tank holds the gas until it is needed. The driver can open the "gas cap" in order to put more gas into the car. The door is the part of the car that can be opened to allow people to enter or leave the car. Most cars have four doors. The window is the part of the car that is made of glass and that the driver and passengers can see through. The types of windows on a car are: A visor is a piece of, usually, plastic that the driver can move into position to stop the sun from getting into the driver's eyes. Windshield wipers are things that move back and forth across the windshield to remove rain so that the driver can see better. The driver controls the windshield wipers by a device that is usually attached to the steering wheel. Some cars also have windshield wipers on the back window of the car. The front windshield wipers usually have a setting that will spray cleaner onto the window and then wipe it off in order to clean the windshield. Cars usually have several mirrors that the driver uses in order to see the traffic around the car. These include: The floor mats are pieces of carpet and/or plastic that are placed on the floor inside of a car to protect the car from dirt and water. Accelerator (or gas pedal) The accelerator is a pedal that the driver presses on with his or her foot in order to control the car's speed. Turn Signal (also called a blinker) Drivers must let other drivers know when they are going to turn. They do this by pressing a lever near or on the steering wheel that causes a light on the back left or back right of the car to blink. Both the lever the driver presses and the blinking light are called the turn signal or the blinker. The horn makes a sound to warn other drivers of a problem. The driver makes the horn sound by pressing a button, which is usually on the steering wheel. The glove compartment is a small box that locks and is located in front of the front seat passenger. It does not usually contain gloves, but rather other things that the driver or passengers might need, such as maps. The brakes are a pedal that the driver pushes with his or her foot to slow down or to stop the car. The brakes are also the mechanical system that causes the car to slow down or stop. Cars also have an emergency brake (also called a parking brake) that is used when the car is parked in order to stop it from moving. The brake pad is the thin pad that presses against the wheel in order to stop the car. In some cars, the gears change automatically. In other cars, the driver has to press a pedal, called the clutch, and then change the gears. The steering wheel is a wheel that the driver turns in order to turn a car. The dashboard is the part of the car that is visible to the driver and contains the car's controls and instrument panel. In some cars, the driver can press a button so that the car will automatically move at a certain speed without the driver needing to press the accelerator. 
The fuel gauge shows the driver how much gas is left in the car. When it is at the "F," the car is "full" of gas. When it is at the "E," the car is empty of gas. Cars will also have a warning light that shows when the gas is getting low. A warning light is a light that goes on to let the driver know that there is a problem with the car. The speedometer shows the driver the speed at which the car is moving. It is located on the dashboard. The odometer shows how many miles and kilometers a car has been driven. It is located on the dashboard. A surface inside the car where the driver or a passenger can sit is called a seat. Cars usually have two rows of seats: The seatbelt is a thin strip of heavy fabric that holds a driver or passenger in place in case the car stops suddenly. The purpose of a seatbelt is to keep the person safe. A surface in the car that a driver or passenger can rest an arm on is called the "armrest." The air conditioning is the system that blows cold air into a car. The heater is the system that blows hot air into a car. An airbag is a safety device. If the car is in an accident, the airbag quickly fills with air so that the driver or passenger hits it instead of another part of the car. It is supposed to protect the people in the car from getting hurt. It then empties very quickly so that the driver can see what is happening. A car seat is a small seat designed to keep a child safe in a car. It can usually be removed from the car. This pipe takes the waste gases from the engine to the back of the car, where the pipe ends and the gases are released. A muffler makes the sound of the engine much more quiet than it would otherwise be. "Electrics" means the system of wires that provide electricity to a car. The crumple zone refers to the parts of the car, usually the front and back, that are designed to be crushed easily in order to protect the people in the car from getting hurt. The metal frame that a car is built on is called the "chassis." The axle is a straight piece of metal that connects two wheels on a car. The roof is the top part of a car. Some cars have a window in the roof that lets light in. This is called a sunroof. Some cars have roof racks, which allow people to place large items, such as suitcases, on top of the car and keep them secure while the car is moving. The engine is the machine that makes it possible for the car to move. The parts of the engine include: The cylinder block is the main part of the engine. It is where the fuel is combusted. All of the other parts of the engine are connected to the cylinder block. Cars usually have four, six, or eight cylinders. The cylinders can be arranged in different ways: The cylinder head goes on top of the cylinder block. It forms a seal. Gaskets create a seal between the cylinder block and the cylinder head. The connecting rod connects the crank shaft to the piston so that the motion created by the engine can be moved from the piston to the crank shaft. The piston goes into the cylinder. It moves up and down so that the motion created by the engine is transferred to the connecting rod. It compresses (which means to make smaller) the mixture of air and fuel and changes the fuel's energy so the car can use it. The piston ring creates a seal between the cylinder and the piston. The crankshaft is at the bottom of the cylinder block. It transfers the motion from the piston so that it becomes a rotary motion, which means a motion that moves in a circle. Then, that circular motion rotates the wheels of the car. 
The oil sump is found at the bottom of the cylinder block. It holds the oil, which is needed to cover the parts of the engine so that they can move smoothly. The camshaft makes it possible for the valves to open and close at the right time. The valves attach to the cylinder head. They manage the flow of air and fuel and exhaust gases. The purpose of the ignition system is to create an electrical charge and send it to the spark plugs. It is sent on the ignition wires. The electrical charge flows on the ignition wires to the distributor, which sends the charge to each spark plug. The spark plugs attach to the cylinder head. They create a spark in order to set the air and fuel mixture on fire. The push rod manages the timing of the valves so that they open and close at the right time. The manifold attaches to the cylinder head. It distributes the air and fuel mix when it comes in. A second manifold collects the exhaust gases and takes them out. Bearings provide support to the moving parts of the engine. The timing belt joins the camshaft to the crankshaft. The radiator cools off water that has passed around the cylinders. The job of the water pump is to pump water around the cylinders and to the radiator. The radiator and water pump combine to form the cooling system. The carburetor is the part of the engine that mixes air and gas. The purpose of the catalytic converter is to limit damage to the environment by changing what gases the car releases. The battery stores electricity for the parts of the car that require it. The alternator generates energy so that the battery can be recharged. Oil is a thick, black liquid that helps stop the car parts from damaging each other when they make contact. The decompressor is a device for cutting down the amount of pressure in the engine. The fan belt transfers movement in order to cool the car. A "four door" car has two doors on each side. A "two door" car has one door on each side. Truck (or: Pick-up Truck) A truck usually has one door on each side and an open space in the back for carrying large loads. If a car is a convertible, it is possible to take the roof off and drive around without a roof. A jeep is a small truck designed to drive over uneven ground. An eighteen-wheeler is a very large truck designed to move large amounts of goods in its back compartment, which is covered. A van is a large, covered vehicle. It usually does not have windows in the back. It is used to moving large amounts of materials or many people. A minivan is a small van. It usually has windows in the back and will fit six to eight people. A hatchback is a car, usually with two doors, that has another door in the back that slopes from the roof to the trunk and opens from the bottom to the top (not side to side). A hybrid car has two sources of energy: gas and a battery. A bus is a very large vehicle that can transport 20-40 people. A fuel-efficient vehicle is one that meets certain standards (usually set by the government) for using only a small amount of fuel. Driverless Car (or: Autonomous Vehicle) A driverless car is one where the driving is done by a computer and not by a human driver. A car is new if it has not been owned by anyone else before. A car is used if someone else has owned and driven it before. If a person who is buying a car sells their old car to the car dealer at the same time that they buy a new one, that old car is called a "trade-in." If a car has an automatic transmission, the driver does not need to change gears because the car will do it automatically. 
A car with a manual transmission does not automatically change gears. The driver has to change gears. A mechanic is someone who fixes cars. At a car wash, cars are sometimes coated in wax to make them shine. "Wax" is both a noun (meaning the substance placed on the car) and a verb (meaning the action of applying the wax to the car). Some people get their car "detailed" at a car wash. This is a special cleaning that is very, very thorough and much more expensive than a regular car wash. A driver is someone who operates a car. A passenger is someone who rides in a car but does not drive it. A funnel is a device, usually made of plastic, that has a wide top and a narrow bottom and is used to pour liquid into a small opening. A gas can is a small can, usually made of plastic, that holds gas. "Unleaded" describes gas that is put in a car. (In past years, some gas had lead in it. It is now illegal to put lead in gas, so there is no more "leaded" gas, but the term "unleaded" is still used to describe "regular" gas.) Premium gas is gas that claims to be better for the car than unleaded gas and is more expensive. The oil used in a car must be changed a few times per year. An "oil change" is the service of draining the oil and replacing it with new oil. The oil filter is usually replaced as well. Sometimes, other services are done at the same time, such as changing the windshield wiper blades. All cars need service from time to time in order to stay running well. "Maintenance" is the word used for these services. For most cars, the car manufacturer recommends maintenance at certain mileages, such as a "70,000 mile maintenance," and recommends what exactly should be done to the car at that time. "Overcharge" means to charge a customer more money than you should for a service. All new cars have a "sticker price," which is the price that the manufacturer suggests that the car be sold for. This price is usually placed on a sticker on the car. However, customers usually pay less than the sticker price for the car. Haggle (or: Bargain) Since customers usually pay less than the sticker price for a new car, this means that they have to "haggle" or "bargain" with the car dealer for what price they will pay. This means that they go back and forth discussing the price for the car until they can both agree on a price. To "inspect" means to check something. In most areas, cars must have a yearly inspection to be sure that they are safe and do not pollute too much. For cars, the "make" means the company that made the car. The "model" of a car is the kind of car, named by the manufacturer. Most car manufacturers produce several kinds of cars per year and each is a different model. The "year" of a car refers to the year it was made. However, this is often not exact, since a car manufacturer will often begin to sell the next year's models during the fall of the previous year. To "diagnose" means to figure out what is wrong with something. Most new cars (and some used cars) come with a "warranty." This means that if certain things go wrong with the car, the car dealer will pay to repair them. Usually, a warranty is good for a certain amount of time (such as five years) or a certain number of miles driven (such as 50,000 miles). Repair shops will usually divide their bill into two parts: the "labor charge" is the cost for the time spent working on the car. Repair shops will usually divide their bill into two parts: the "parts" charge is the charge for the car parts that they used during the repair. 
A dipstick is a thin, long piece of metal that is dipped into a container to determine how much liquid is in the container. In a car, a dipstick is used to measure the amount of oil. Every single car has a unique number called a VIN (vehicle identification number). It can usually be seen through the front windshield on the top of the dashboard. A junkyard is a place where old cars are placed when no one wants to fix them. Sometimes, people will go to a junkyard in order to find a certain car part that they need. A company that makes cars is called an "automaker." The showroom is the room in a car dealership where new cars are displayed. If someone is thinking about buying a car, they will often take it for a "test drive," which means that they drive it briefly (without buying it first) to see if they like it.
<urn:uuid:bab5a4ac-7f3d-4684-981f-fdf057b32c06>
CC-MAIN-2022-33
https://www.really-learn-english.com/english-vocabulary-for-the-workplace-automotive.html
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572043.2/warc/CC-MAIN-20220814143522-20220814173522-00499.warc.gz
en
0.966245
4,695
3.078125
3
Arboviruses Slide Set
Arthropod-Borne Virus Infections

1. St Louis Encephalitis
St. Louis encephalitis occurs in endemic and epidemic form throughout the Americas and is the most important arboviral disease of North America. It is closely related to the Japanese encephalitis and Murray Valley encephalitis viruses. From 1955 to 1988, over 5000 cases of SLE were reported to the Centers for Disease Control. The reported cases are only a fraction of those that actually occur. The largest epidemic occurred in 1975, when 1815 cases were reported. The virus is maintained in nature by a bird–mosquito–bird cycle. The incubation period is 21 days. The ratio of inapparent to apparent infection ranges from 16:1 to 425:1. Children are much more likely to have inapparent infection than adults. The morbidity and mortality rate increases with age. Patients who are symptomatic will usually present with, or progress to, one of three syndromes: (1) febrile headache, (2) aseptic meningitis, or (3) encephalitis. The laboratory diagnosis is usually made by serology. Treatment is supportive and no vaccine is available.

2. Japanese Encephalitis
Japanese encephalitis is a major public health problem in Asia, SE Asia, and the Indian subcontinent. Prior to 1967, thousands of cases with several hundred deaths were reported each year. In endemic areas where vector control and vaccination have been undertaken, the incidence has dropped dramatically. Epidemics have been reported from Japan, China, Korea, Taiwan, the USSR, Vietnam, the Philippines, the ASEAN countries, India and Bangladesh. The transmission cycle in nature involves Culex and Aedes mosquitoes and domestic animals, birds, bats, and reptiles. Man is not a preferred host for Culex mosquitoes, and transmission of JE virus does not usually occur until mosquito populations are large. Japanese encephalitis produces a high inapparent to apparent infection ratio, ranging from 25:1 to 500:1 per case of encephalitis. However, when encephalitis occurs, the mortality rate is in the range of 20 to 50%. Some patients will only show an undifferentiated febrile illness or have mild respiratory tract complaints. The diagnosis is usually made serologically, as virus isolation is not usually successful. No specific treatment is available. An inactivated suckling mouse brain vaccine has been available since the early 1960s and has been used extensively throughout Asia. The efficacy rate ranges from 60 to 90%. Despite the vaccine being a mouse brain preparation, no postvaccination demyelinating allergic encephalitis has been reported. Mild symptoms occur in 1% of vaccinees, and thus the vaccine is generally considered to be safe.

3. Murray Valley Encephalitis
This virus is closely related to Japanese encephalitis and resembles JE clinically. It is confined to Australia and New Guinea, where it is periodically an important cause of epidemic encephalitis. In the 8 epidemics that took place between 1917 and 1988, 330 cases were reported in Australia. The diagnosis is made by serology, and no specific treatment or vaccine is available.

4. West Nile Fever
West Nile fever is a dengue-like illness that occurs in both epidemic and endemic forms in Africa, Asia, and the Mediterranean countries. Areas of high endemicity include Egypt and Iran, where most of the adult population will have antibodies. West Nile virus is a member of the St Louis encephalitis complex and is transmitted by Culex mosquitoes.
The virus is maintained in nature through a transmission cycle involving mosquitoes and birds. Children will usually experience an inapparent or a mild febrile illness. Adults may experience a dengue-like illness whilst the elderly may develop an encephalitis which is sometimes fatal. The diagnosis is usually made by serology although the virus can be isolated from the blood in tissue culture. No vaccine fro the virus is available and there is no specific therapy. Among the arboviruses, it is difficult to distinguish clinically between West Nile, dengue and Chikungunya. In the absence of a rash, a number of toga and bunyaviruses should also be considered in the differential diagnosis. 5. Ilheus Virus This virus is found in Latin America where it causes a febrile illness with arthralgia. Occasionally a mild encephalitis is seen. The virus can often be confused with dengue, St Louis encephalitis, yellow fever and influenza viruses. II. Tick-Borne Encephalitis Viruses Tick-borne encephalitis viruses occur in temperate climates of Western and Eastern Europe and the USSR. These viruses are so closely related antigenically that it is uncertain whether to group them as separate viruses or as variants of the same virus. TBE viruses can be transmitted to a wide range of animals by ticks and is probably maintained in nature by small rodents. Humans can be infected via tick bites or by drinking milk of infected animals such as goat, cows and sheep. The clinical presentation vary from asymptomatic infection to fulminant encephalitis and death. The diagnosis is made serologically. By the time overt clinical manifestations are seen, the viraemia had already subsided so that the virus cannot be isolated from the blood or CSF. The treatment of TBE is supportive. A formalin-inactivated vaccine is available for use in the USSR which is recommended for persons living in endemic areas and for laboratory workers who may be exposed to the virus. Louping ill is primarily a disease of sheep in England, Ireland and Scotland. Cattle, pigs, deer and some small mammals and ground-dwelling birds are also infected. It is relatively rare disease of humans caused by contact with infected tissue of sheep (butchers and vet), laboratory accidents and through tick bites. The disease caused resembles that of a mild form of tick-borne encephalitis. The disease starts of with a mild influenza-like illness which may proceed to a mild meningoencephalitis. A vaccine is available for sheep which should reduce human disease. 1. Yellow Fever Yellow fever, once a scourge of the port cities of North America and Europe, remains an important endemic and epidemic disease of Africa and South America. Yellow fever occurs in 2 major forms: urban and jungle (sylvatic) yellow fever. Jungle YF is the natural reservoir of the disease in a cycle involving nonhuman primates and forest mosquitoes. Man may become incidentally infected on venturing into jungle areas. The S American monkeys are more prone to mortality once infected with YF than the old world monkeys, suggesting that American YF probably originated from the old world as a result of sailing ships. The urban form is transmitted between humans by the Aedes aegypti mosquito and thus the potential distribution of urban YF is in any areas where infestation with Aedes aegypti occurs, including Africa, S and N America and Asia. Although the urban vector is present in Asia, yellow fever has never been established there. 
The majority of reported human YF cases come from Africa (Angola, Cameroon, Gambia, Ghana, Nigeria, Sudan, and Zaire) and S America (Brazil, Bolivia, Colombia, Peru, Ecuador and Venezuela). Both of these continents have jungle yellow fever transmitted in a monkey–mosquito–monkey cycle. In these areas, YF is reintroduced into urban populations from time to time as a result of contact with jungle areas. YF cases occur more frequently at times of the year when there are high temperatures and high rainfall, conditions which are most conducive to mosquito reproduction. Once infected, the mosquito vector remains infectious for life. Transovarial transmission in Aedes aegypti has been demonstrated and may provide a mechanism for the continuation of the jungle or urban cycle. Once the virus is inoculated into human skin, local replication occurs, with eventual spread to the local lymph nodes, and viraemia occurs. The target organs are the lymph nodes, liver, spleen, heart, kidney and foregut.

b. Clinical Features
The incubation period varies from 3 to 6 days, following which there is an abrupt onset of chills, fever, and headache. Generalized myalgias and GI complaints (N+V) follow, and signs may include facial flushing, a red tongue and conjunctival injection. Some patients may experience an asymptomatic infection or a mild undifferentiated febrile illness. After a period of 3 to 4 days, improvement should be seen in most patients. The moderately ill should begin to recover; however, the more severely ill patients with a classical YF course will see a return of fever, bradycardia (Faget's sign), jaundice, and haemorrhagic manifestations. The haemorrhagic manifestations may vary from petechial lesions to epistaxis, bleeding gums and GI haemorrhage (the black vomit of YF). 50% of patients with frank YF will develop fatal disease characterized by severe haemorrhagic manifestations, oliguria and hypotension. Frank renal failure is rare. Rarely, other clinical findings, such as meningoencephalitis in the absence of other findings, have been described.

c. Laboratory Diagnosis
The differential diagnosis of YF includes typhoid, leptospirosis, tick-borne relapsing fever, typhus, Q fever, malaria, severe viral hepatitis, Rift Valley fever, Crimean-Congo haemorrhagic fever, Lassa, Marburg and Ebola fever. Yellow fever can be diagnosed serologically or by virus isolation. The serological diagnosis can be made by HI, CF and PRN tests. Virus isolation can be attempted from the blood, which should be obtained within the first 4 days of illness. A variety of techniques are available for virus isolation, such as intracerebral inoculation of newborn Swiss mice or inoculation into Vero, LLC-MK2, BHK, or arthropod cell lines.

d. Treatment and Prevention
No specific antiviral therapy is available and treatment is supportive. Intensive medical treatment may be required, but this is difficult to provide as many epidemics occur in remote areas. Yellow fever is regarded as a quarantinable disease of international public health significance, and public health officials should be notified as soon as possible so that vector eradication and mass immunization can be carried out to prevent an epidemic. A live attenuated vaccine known as 17-D has been available since 1937. The vaccine is regarded as highly effective and generally safe, with mild reactions such as headache, myalgia and low-grade fever occurring in 5 to 10% of vaccinees.
Vaccination is recommended for residents of endemic areas and should be included in routine vaccination programs. Travelers to endemic areas should also be vaccinated. It is officially recommended that a booster dose be given every 10 years, although this may change in view of recent data on the long persistence of YF antibodies. The contraindications to the use of the 17-D vaccine are pregnancy, altered immune states, and hypersensitivity to eggs.

2. Kyasanur Forest Disease
This is a tick-borne disease closely related to the tick-borne encephalitis complex and is geographically restricted to Karnataka State in India. Haemorrhagic fever and meningoencephalitis may be seen. The case-fatality rate is 5%.

Dengue
Hundreds of thousands of cases of dengue occur every year in endemic and epidemic forms in tropical and subtropical areas of the world. The attack rates during epidemics can reach as high as 50%. Dengue is a prevalent public health problem in SE Asia, the Caribbean, Central America, Northern South America and Africa. In hyperendemic areas, most cases occur in young children, as the majority of the population has already been infected with multiple serotypes. In other areas, older children and adults are more likely to be affected. The maximum number of cases occurs during the months of the year with the highest rainfall and temperatures, when Aedes aegypti populations are at their highest. A. aegypti mosquitoes deposit their eggs in water-filled containers, and thus reproduction is highest during periods of high rainfall. Four serologically distinguishable types of dengue are recognized (DEN 1–4). The vector mosquito becomes infected by feeding on a viraemic host. The virus becomes established in the salivary glands of the mosquito, from where it can be transmitted to susceptible individuals. Following an incubation period of 2 to 7 days, the virus is disseminated (route unknown) to the organs of the RE system (liver, spleen, bone marrow and lymph nodes). Other organs, such as the heart, lungs and GI tract, may be involved.

a. Clinical Manifestations
The clinical presentation of dengue in children is varied. The disease may be manifested as an undifferentiated febrile illness, an acute respiratory illness, or a GI illness: atypical presentations which may not be recognized by clinicians as dengue. Older children and adults infected with dengue for the first time will display more classical symptoms: sudden onset of fever, severe muscle aches, bone and joint pains, chills, frontal headache and retroorbital pain, altered taste sensation, lymphadenopathy, and a skin rash which appears 3 days after the onset of fever. The rash may be maculopapular, petechial or purpuric and is often preceded by flushing of the skin. Other haemorrhagic manifestations may be seen, such as epistaxis, gingival bleeding, ecchymoses, GI bleeding, vaginal bleeding and haematuria. Severe cases of bleeding should not be diagnosed as dengue haemorrhagic fever (DHF) or dengue shock syndrome (DSS) unless they meet the criteria below. DHF or DSS is usually seen in children and usually occurs in 2 stages. The first, milder stage resembles that of classical dengue and consists of a fever of acute onset, general malaise, headache, anorexia and vomiting. A cough is frequently present. After 2 to 5 days, the patient's condition rapidly worsens as shock begins to appear. Haemorrhagic manifestations, ranging from petechiae and bleeding from the gums to GI bleeding, may be seen.
An enlarged, nontender liver is seen in 90% of cases. The WHO recommended the following criteria for the diagnosis of DHF and DSS:
2. Haemorrhagic manifestations, including at least a positive tourniquet test
3. Enlarged liver
5. Thrombocytopenia (≤100,000/µl)
6. Haemoconcentration (haematocrit increased by ≥20%)

A diagnosis of DSS is made when frank circulatory failure is seen, and this occurs in one third of cases of DHF. DHF has been graded by the WHO on the basis of its severity:
Grade I – Fever accompanied by non-specific constitutional symptoms; the only haemorrhagic manifestation is a positive tourniquet test.
Grade II – Spontaneous bleeding in addition to the manifestations of Grade I patients, usually in the form of skin and/or other haemorrhages.
Grade III – Circulatory failure manifested by a rapid and weak pulse, narrowing of the pulse pressure (20 mmHg or less) or hypotension, with the presence of cold, clammy skin and restlessness.
Grade IV – Profound shock with undetectable blood pressure and pulse.
Grade III and Grade IV DHF are also considered as dengue shock syndrome.

The immunological response to dengue infection depends on the individual's past exposure to flaviviruses. The flavivirus group shares cross-reacting antigen(s). Primary infection results in the production of antibodies predominantly against the infecting serotype. Reinfection with another dengue serotype (or other flaviviruses) usually produces a secondary (heterotypic) response characterized by very high titres to all 4 dengue virus serotypes and other flaviviruses, so that serological identification of the infecting agent is quite difficult if not impossible. After a first infection with one dengue serotype, cross-immunity to other serotypes may persist for a few months, but after 6 months reinfection with another serotype may occur.

There are 2 theories proposed for the pathogenesis of DHF and DSS: virus virulence and immunopathological mechanisms. The weight of the available evidence supports the immunopathological theory. DHF and DSS occurred most often in patients with a secondary (reinfection) serological response. However, the observation that DHF and DSS occurred in infants with a primary response cast some doubt on this theory, until it was demonstrated that preexisting maternal antibody had a similar effect to acquired antibody. The antibody-dependent theory proposes that, in the presence of non-neutralizing heterotypic antibody (whether maternally derived or not) to dengue, virus–antibody complexes are formed which are more capable of infecting permissive mononuclear phagocytes than uncomplexed dengue virus.

d. Laboratory Diagnosis
1. Serology – HI, CF and PRN tests are commonly used. The high degree of cross-reactivity between flaviviruses can make the interpretation of serological results very difficult.
2. Virus isolation – this can be accomplished by the intracerebral inoculation of sera from patients into suckling mice. Sera can also be inoculated intrathoracically into Aedes mosquitoes. Head squash preparations are examined for the presence of antigen by the FA technique. Cell cultures such as LLC-MK2 and several mosquito-derived cell lines can be used.

e. Treatment and Prevention
There is no specific antiviral treatment available. Management is supportive, and intensive medical management is required for cases of severe DHF and DSS. No vaccine for dengue is available, but a tetravalent live-attenuated vaccine has been evaluated in Thailand with favourable results.
Scale-up preparation for commercial production of the vaccine is underway, and it is anticipated that the vaccine will become available soon and be evaluated in large-scale clinical trials. To avoid dengue, travelers to endemic areas should avoid mosquito bites. Prevention of dengue in endemic areas depends on mosquito eradication. The population should remove all containers from their premises that may serve as vessels for egg deposition. Vector surveillance is an integral part of control measures to prevent the spread of dengue outbreaks. Regular inspections as part of law enforcement may be used in the control of mosquito vectors. The object of source reduction is to eliminate breeding grounds in and around the home environment, construction sites, public parks, schools and cemeteries. Illegal dumping of household refuse provides favorable breeding sites for mosquitoes. Long-term control should be based on health education and community participation, supported by legislation and law enforcement. Domestic water supplies should be improved in order to reduce the use of containers for the storage of water.

Orbiviruses are insect-borne viruses primarily of veterinary importance. Orbiviruses contain dsRNA arranged in 10 segments, except for the members of the Colorado tick fever serogroup, which have 12 segments of dsRNA. They vary in size from 50 to 90 nm, with 92 capsomers. The capsid is a double-layered protein. The outer coat is diffuse and unstructured, while the inner layer is organized in pentameric–hexameric units.

Colorado tick fever is caused by a virus belonging to the family Reoviridae. It is a zoonotic disease of rodents and is transmitted to man via tick bites. It is prevalent in the Rocky Mountains and more western regions of the USA. It is a dengue-like illness, albeit with a relatively low incidence. Colorado annually reports 100 to 300 cases, but the disease is underreported there and in other western states as well. There is a strong seasonal trend, with the majority of cases occurring between February and July. Chipmunks and squirrels serve as amplifying hosts. Following an incubation period of 3 to 6 days, a high fever of acute onset is seen, along with chills, joint and muscle pains, headache, and N+V. A maculopapular rash may be seen in a minority of patients. A more severe clinical picture may be seen in children, who may develop haemorrhagic manifestations including severe GI bleeding and DIC. Aseptic meningitis or encephalitis may be seen. CTF may be diagnosed by virus isolation, whereby the patient's blood is inoculated into suckling mice or cell culture lines such as Vero, followed by identification by IF, N or CF tests. More rapid diagnosis can be made by performing IF directly on the blood clots. IgM tests as well as other serological techniques are available for serological diagnosis. No licensed vaccine for CTF is available, nor is it practicable because of the rarity and the benign nature of the disease. Public health education remains the main preventive measure.
<urn:uuid:135b4ce8-ac50-4d85-bd3b-7020a0406ac6>
CC-MAIN-2022-33
http://virology-online.com/viruses/Arboviruses5.htm
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571911.5/warc/CC-MAIN-20220813081639-20220813111639-00497.warc.gz
en
0.928152
4,598
3.328125
3
Psychology Unit 4 Addiction Revision Notes

Addiction
What is addiction? It is a repetitive habit pattern that increases the risk of disease and/or associated personal and social problems.

Elements of Addiction
Salience – the individual's desire to perform the addictive act/behaviour
Mood modification – people with addictive behaviour often report a 'high', 'buzz' or 'rush'; addicts are known to use their addictions for this
Tolerance – the addict's tolerance increases, therefore they increase the amount to get the same effect
Withdrawal symptoms – unpleasant feelings and physical effects that occur when the addiction is suddenly reduced
Relapse – the process of stopping the addiction and falling back into it
Conflict/maladaptive behaviour – people with addictive behaviours develop conflict with the people around them, creating social isolation
+/−
• How many criteria are needed before a person can be said to have an addiction?
• Many people can tick all of the above for things like coffee drinking – are they addicts? It seems the key is being addicted to something that is harmful.

Outline and define what is meant by addiction. (5 marks)
Addiction is a repetitive habit pattern that increases the risk of disease and/or associated personal and social problems. Most theories say addiction goes through 3 stages: the first is initiation (how the addiction starts), then maintenance (why the addict continues), and finally relapse (why an addict may stop and start again). Theorists define addiction by 6 sub-components. One is salience, the desire to perform the addictive act. Another is mood modification, such as a 'high'; most addicts perform the addictive act to achieve this. Tolerance is a big one, whereby the more the addictive behaviour is done, the more tolerance levels increase, therefore more has to be done to get the same effect. If withdrawing from the addictive behaviour, withdrawal symptoms may occur, such as unpleasant feelings or physical effects; this may cause relapse to occur. Addicts also tend to develop conflicts with others.

The biological approach
This model can be broken into sub-theories – genetic and neurochemical.

Genetic/Hereditary Approach
KEY IDEA – some people are predisposed to be addicts.
Evidence – Twin studies (monozygotic twins): if one twin is an addict, the other twin is likely to be genetically predisposed to be an addict too; such studies are often used to establish that there is a biological basis to addiction. Hans' study of over 300 monozygotic twins and over 300 dizygotic twins found that there was a connection between genetics and social behaviour such as attention seeking, rejecting social norms, and alcoholism.
AO2 (EVALUATION)
• The problem with this is that monozygotic twins and families have common environments too. This means the addiction could have been learnt and may not be biological in origin.
• Does not identify the gene
• Monozygotic twins are not truly identical

Thorgeirsson's Study (genetic susceptibility to smoking)
He sent a questionnaire to 50,000 people in Iceland asking questions about smoking, etc. They studied the DNA of over 10,000 smokers or former smokers and found that a variation at 2 points on chromosome 15 was common among those who had lung cancer; this gene affected the number of cigarettes smoked.
+/−
• Cultural differences, although cross-cultural
• Found a correlation, not causation
• Connects lung cancer with addiction, which is not always true
• Though it does find a specific gene

Jellinek (the disease model of addiction) – this model states that addiction involves an abnormality in the structure of the CNS. The addict will show a strong desire to stop but will be unable to. At first sight it appears the addict has a choice; however, according to this model there is no choice, just a compulsion. This view is consistent with the genetic view, as the abnormality of the CNS that causes the addiction can be inherited.
+/−
• The model does not explain how addicts can stop
• The model is also vague about which part of the CNS causes the disease and what mechanisms cause addiction

Neuro-Chemical Approach – this explains addictive behaviour by looking at the powerful neurochemical systems that the body has. The brain can generate its own powerful chemicals, which can, however, be added to by taking other drugs.
Dopamine reward system – dopamine is connected to learning, memory, arousal, emotional response and the ability to experience pleasure. Altman (1996) made a study of rats in which the rats were given nicotine; the result was an increase in the production of dopamine in the rats. However, rats are further down the phylogenetic scale.
The endogenous opioid system – the body produces its own opioid neurochemicals in the form of endorphins; externally taken opiates (heroin, morphine, codeine) can increase the power of this system. They stimulate receptor sites; in turn, as the receptor sites habituate to the increased levels of opioids, they increase their tolerance and react at a lower level than before to the presence of opioids.
+/−
• The explanation seems to work very well for substance addiction
• However, it works less well for addictions such as gambling or computer gaming

Physical Dependence & Tolerance
• Neurotransmitters – chemicals that move in the gaps between nerve cells to transmit messages
• The faster the transmission, the more intense the arousal
• Addictive behaviour increases transmission

Strengths and weaknesses of the biological model
Strengths
• The biological approach produces testable statements: if a part of the brain controls addiction, it should be possible to identify it.
• The biological view also has treatments and therapies to offer addicts; for example, nicotine patches have been successful in helping many smokers give up.
• It also increases our understanding of genes; their operation has led to scientists being able to screen people for predispositions towards certain hereditary conditions, and in time this may be possible for addiction.
Weaknesses
• The biological approach is deterministic: behaviour is dictated to us by our body chemistry, etc. There is no room for consciousness and free will. The rational choice theory of addiction suggests that addicts use their minds to make choices – for example, we have no control over genes!
• Experience is ignored. The notion that past experience does not shape our behaviour is false. It is a fact that people who drink a lot are more likely to become addicts.

Outline and evaluate the biological approach to explaining smoking behaviour.
(4 + 4 marks)
Geneticists within the biological approach believe initiation occurs due to a genetic predisposition and that addiction can be inherited; therefore addicts are genetically different. This implies that addiction is due to nature and that the addict has no control over it, although this approach ignores the possibility that addiction has environmental factors. It is supported by Thorgeirsson's study of smoking in Iceland, which found that chromosome 15 has a strong correlation with lung cancer, with an indication of addiction. However, this is criticised because lung cancer is not always due to smoking addiction. Biologists who focus on neurochemicals believe smoking could be due to the dopamine reward system: dopamine is connected to learning, memory, arousal, emotional response and the ability to experience pleasure. Altman (1996) made a study of rats in which the rats were given nicotine; the result was an increase in the production of dopamine in the rats. However, rats are further down the phylogenetic scale.

Evolutionary Approach
The evolutionary view states that all behaviour is the product of genetic programming and inheritance. Every individual is genetically programmed to maximise their fitness. Fitness is measured in 2 ways: firstly by reproducing and generating viable offspring, and secondly by avoiding predation or being a successful predator and surviving. At first sight addictive behaviour could be seen as damaging fitness, as alcoholics and smokers significantly reduce their life expectancy. Addiction also generally has negative effects on fertility. However, if addiction is seen as a risk-taking behaviour it could be linked to the evolutionary approach, as in early human groups humans capable of successfully taking risks may have been more successful.
+/−
• This is an explanation of risk-taking behaviour and not of addiction.
• It could equally be argued that risk taking is dangerous and that animals that engage in it are more likely to die, therefore reducing fitness.

Summary of the Biological View
Genetic/hereditary approach
• Genes of addicts are different
• Monozygotic twin studies – higher concordance rates, however not 100% identical
• Thorgeirsson's study on smoking – found gene variation on chromosome 15
• Jellinek – CNS cause of addiction
Neurochemical
• Dopamine reward system – higher dopamine increases arousal
• Nicotine stimulates dopamine
Physical Dependence & Tolerance
• Neurotransmitters – chemicals that move in the gaps between nerve cells to transmit messages. The faster the transmission, the more intense the arousal
• Addictive behaviour increases transmission
• Nicotine reward model

Biological approach to explain Gambling
Initiation – genetic predisposition (diathesis–stress model)
Maintenance – stimulation of the reward system would provoke dependence
Relapse – the cause of the addiction is still there, so you can always come back
+/−
• The maintenance process is not purely biological, as the behaviour is learnt

Biological approach to explain Smoking
Initiation – genetic predisposition: chromosome 15 has been known to correlate with smoking; those with SLC6A3-9 are less likely to smoke.
Maintenance – nicotine reward model – once you start you have to continue at the same level; the dopamine reward system – nicotine increases dopamine production.
Relapse – environmental factors may cause a relapse – diathesis–stress model.
The cognitive approach
KEY IDEA – addiction is the product of thinking, thought processes and perception
Initiation
Becker and Murphy
• Use the business concept of 'utility'; utility is the measure of relative satisfaction from consumption of a particular good or service
• To calculate this you weigh up the costs and rewards to make a rational choice
RISC Theory
• The pros of addiction outweigh the costs in the addict's perception
Maintenance
Becker and Murphy
• Addicts are rational consumers; they look ahead and behave in a way that is likely to maximise their preferences
Schemas
• Following their addiction schema causes the addiction to be maintained
RISC Theory
• Addicts are stable and keep to their view
Catastrophic thinking – all-or-nothing thinking (a cognitive bias)

Griffiths' common cognitive biases of gamblers
Looked at regular gamblers and non-regular gamblers; he found that the regular gamblers were more likely to have irrational verbalizations and typically personified the machine by either congratulating it or swearing at it etc.
/• Familiarity may produce these verbalizations

Evaluation of the cognitive approach
Strengths
• Uses scientific methods – theories and assumptions are testable and objective
• Various therapies which can help with addiction
• Individual differences are explained
Weaknesses
• Better at explaining non-physical addictions
• Experience is underplayed in the cognitive approach
• The cognitive approach's use of computer analogies means that it is reductionist and over-simplistic
• Doesn't consider genetic influences on addiction
• Free will is not considered sufficiently

Learning Theory (Behavioural)
KEY IDEA – all addiction is learnt
Social Learning Theory
Initiation – through seeing someone else receive positive reinforcement, e.g. weight loss, friends etc; this is then copied to achieve the same.
Maintenance – because they keep seeing the positive reinforcement of others.
Operant Conditioning
Maintenance – if positively reinforced you will continue; there is negative reinforcement through withdrawal symptoms, which the addict will act to avoid.
Relapse – if they have withdrawal symptoms they may want the reinforcement back.
Classical Conditioning
Maintenance – the addict associates a time of day/place with the addictive activity, therefore this will maintain the addiction.
Relapse – spontaneous recovery is when the response comes back, if the addict is placed in conditions which relate to the addiction.

Evaluation – Strengths and Weaknesses of Learning Theory
Strengths
• Behaviourism has practical applications which could help people overcome addiction, e.g. token economy, aversion therapy.
• The theory can help to explain, in part, individual differences: not everyone who drinks becomes an alcoholic; those who find drinking more positively reinforcing would do so, etc.
• Though why is someone more positively reinforced?
Weaknesses
• Free will is ignored; people may simply choose to be drinkers and gamblers without any kind of reinforcement – DETERMINISTIC
• Thought processes are ignored; this is where the cognitive approach is superior. Deterministic, as it assumes that once behaviour has been reinforced the individual is unable to break with the conditioning
• Other theories would say this is one-dimensional and ignores important aspects of behaviour, such as biology
• The theory doesn't seem to work well in explaining chronic addiction
Vulnerability to addiction
The addictive personality
Eysenck believes personality is biologically caused; there are 3 parts to anyone's personality: Extraversion, Neuroticism and Psychoticism. Francis showed that those who had higher-than-normal scores for neuroticism and psychoticism were more prone to addiction.
/• This is contested, as many theorists believe the psychotic, neurotic and extravert categories have no validity and do not make up personality – NOMOTHETIC
• A questionnaire was used to test this, therefore validity is questioned

Zuckerman believes personality is a product of both nature and nurture; he named the risk-taking/sensation-seeking personality. Being an addict tends to carry risk, and some addictions are illegal, adding a further risk.
Evidence – Parke et al. studied gamblers' sensation seeking, deferred gratification and competitiveness. Both deferred gratification and competitiveness showed a positive correlation, but not sensation seeking, suggesting it is not a characteristic of addicts.
+/• Questionnaires are low in validity due to social desirability
• Unbalanced sample, however this may be realistic
• Opportunity sample may not fit the target audience
• Risk-taking acts are not all addictive, such as sky diving

Nakken believes the addictive personality is created from the illness of addiction and represents a change resulting from the addictive process; the personality does not exist beforehand. Addiction is not caused by personality; rather, the personality is changed by addiction.
+/• This theory can explain why people who become addicts continue, but it cannot explain why people become addicts in the first place

Self Esteem
Low self-esteem correlates with addiction; those with low self-esteem have trouble sustaining relationships as their feelings are easily hurt, which in turn alienates others and leads to self-loathing.
Relating to Addiction
• Addiction can be used as a coping mechanism
• Addiction forms an escapism, as you're not yourself
• Addiction is risky; those with low self-esteem will not be bothered by this
• Substance addiction can be used to improve self-esteem
• Addiction can give affirmation
Taylor and Lloyd's study found that boys who have low self-esteem are at higher risk of substance abuse; they found that the combination of low self-esteem and peer approval of drugs can cause drug dependency.
/• Large sample, allowing it to be possibly representative
• A negative is that it's all boys
• Peer influence too, introducing a confounding variable, so the study is not fully about self-esteem
Improving self-esteem and beating addiction
• Get a life purpose statement – take personal development courses – take action – socialize and involve yourself – stand up for yourself – set personal goals and accomplish them
This suggests that self-esteem is intimately connected both to becoming addicted and to escaping addiction.

Age
Shram (the adolescent brain) – measured age differences in neural responses to nicotine administration; nicotine was found to have a greater effect on adolescents. Data from surveys imply that adolescent cigarette smoking is a 'gateway' to progression to other legal and illegal drug use.
+/• Alternatively it could be peer pressure and subcultural groups
Stress
Diathesis-stress model – a predisposition to addiction is triggered by environmental stress. Predisposed by – chromosome 15, Eysenck's personality types, neuroticism and psychoticism, all recognised in the DSM as mental disorders.
Peers
Sussman & Ames found that peer use of drugs is a strong predictor of drug use among teenagers.
They also found that family conflict, poor supervision and parental tolerance of drug use influence the initiation of drug taking.
Normative Social Influence and Peer Pressure
NSI could lead to addiction in order to fit in with the rest of the group, through compliance to avoid isolation.

External Factors
Availability – influences addiction; if the substance or activity is not available, it is hard to become an addict, e.g. Russia > vodka.
Cultural Influences – societal and cultural values can make addiction more likely, e.g. gambling in South East Asia, where gambling is an everyday activity. Russia has a long drinking culture. Subcultural > youth > binge drinking.
Social Dislocation – leads to high levels of deprivation and alienation; anomie leads to dysfunctional behaviour.

Media
SLT – behaviour seen in the media is copied, especially when it is positively reinforced.
Evidence – the Bobo Doll study of media influence:
• Children watched a film of an adult being violent with a doll
• Those who saw the adult being positively reinforced were more violent
• The children's behaviour related directly to whether they saw positive reinforcement or punishment
This implies that witnessing addiction being positively reinforced in the media can increase levels of addiction.
/• Study of aggression, not addiction
• Low in temporal validity
• Sample – just children, therefore more easily influenced
• Ethics – cannot replicate with addiction, as you cannot encourage addictive behaviour
Gunasekera et al. (content analysis) – found during their investigation of 200 top-grossing films that, whilst drug taking was not as prevalent as, say, unprotected sex, the portrayal of these behaviours was positive.
+/• Did not investigate whether the films had an impact on the audience

Madeline A. Dalton et al. – tried to locate the impact of the media on addiction. A large sample of 3,547 participants was assessed in terms of how much smoking they were exposed to across 50 films. Dalton found that high-exposure participants were 2.71 times more likely to initiate smoking, suggesting that level of exposure correlates with initiation of addiction.
+/• Large sample – balanced
• Longitudinal
• Survey – may be invalid due to social desirability
• Confounding variables – peers, stress, self-esteem
Supported by Sargent and Hanewinkel, demonstrating reliability.

Marxist view of the Media
This assumes that addictive behaviour can be influenced by the media. The HYPODERMIC SYRINGE model suggests the audience is a sponge, taking on board and accepting whatever the media presents. So some people will be influenced by advertising for gambling and alcohol, or by representations of smokers in films.
+/• The pluralist view states that individuals can either accept or reject the media – free will
• Underestimates the audience's ability to be active thinkers rather than passive

Uses and Gratifications Approach
This view says that the media can influence addiction, but not directly; individuals choose input according to their existing attitudes, so the media reinforces:
• Selective exposure – choosing what fits existing views
• Selective perception – interpreting messages through existing views
• Selective retention – remembering and repeating what fits existing views
The media doesn't cause addiction; it simply serves to support an existing viewpoint.
Cohen – Folk Devils and Moral Panics
• Noticed that mods and rockers were fighting each other
• The media reinforced this and created more violence
• The media created folk devils by demonising them
• Mods and rockers internalised this label and lived up to their reputation
Arguably binge drinking among young people has been sensationalised in the media, and the process of demonisation has occurred.
+/• As binge drinking is sensationalised, deviance amplification increases
• This implies that the media can increase and reinforce addiction

Stages of prevention
• Primary prevention: keeping people away from addiction
• Secondary prevention: targeting people who are at risk but not yet addicts
• Tertiary prevention: helping people who are addicts to stop their addiction

The theory of planned behaviour
You are more likely to become an addict if your behavioural intention is confirmed by 3 other factors. These are a positive attitude towards the addiction, subjective norms whereby your friends and family are encouraging of the behaviour, and perceived behavioural control, i.e. the behaviour is simple and easy to carry out. An implication of this is that it can be used to prevent addiction by targeting all 3 of these factors.
Evaluation
Norman and Tedeshi – 420 adolescents completed a survey regarding smoking, intentions to smoke and smoking status at 3 points in time. The theory of planned behaviour was able to identify those at risk and those who went on to smoke.
Wood and Griffiths – examined the attitudes and behaviour of adolescents participating in the lottery and scratch cards by applying the theory of planned behaviour; results revealed that young people's attitudes are an accurate predictor of their gambling behaviour.

Stages of change theory – Prochaska
A process involving progress through a series of six stages:
1. Pre-contemplation – not intending to take action
2. Contemplation – intending to change in the next 6 months
3. Preparation – intending to take action in the next month
4. Action – has made specific overt modifications to their lifestyle within the past 6 months
5. Maintenance – working to prevent relapse – estimated to last between 6 months and 5 years
6. Termination – the individual has zero temptation and 100% self-efficacy, and is sure not to return
Prochaska identifies the following as important: decisional balance, the weighing up of the pros and cons of changing; self-efficacy, the confidence people have that they can cope without relapsing in high-risk situations; and temptation, which reflects the intensity of urges when in the midst of a difficult situation.
Evaluation
Prochaska realised that most people take more than one attempt to end their addiction, so prevention is seen more as a series of runs, with the individual getting a bit further along the spiral each time.

Specific Treatments/Types
Biological
Aversive agent treatment – a drug such as Antabuse that causes alcohol intolerance, including nausea, vomiting, headaches etc. The success of the treatment will depend upon the patient complying and being willing to take the drug.
Apomorphine injections – cause the addict to be violently ill; the addict then performs the addictive behaviour so as to connect the two. This relies on the patient learning the association (behaviourism).
Agonist maintenance treatment – a controlled substitute drug, gradually decreased; the most common is using methadone for the treatment of heroin addicts. It is questioned, as you could end up with the patient addicted to 2 drugs.
Narcotic antagonist treatment – this blocks the effect of drugs; the most common is naltrexone treatment, which is very successful.
Smoking
Nicotine replacement therapy – the most popular form is the nicotine patch, which works by releasing a steady concentration of nicotine to combat withdrawal symptoms. Meta-analyses show that quit rates are 28% with the patch and 12% with a placebo. This treatment is unusual in that it doesn't actually take away the substance that the patient is addicted to, so you could argue it is not actually treating the addiction. Though smoking is unique, as the issue is the method of taking nicotine rather than the drug itself.

Drugs used to combat the effects of stopping smoking
Bupropion – works by stimulating the dopaminergic pathways, so it compensates for the absence of smoking. Side effects include dry mouth, itching and rashes.
Nortriptyline – stimulates adrenergic pathways and acts as a stimulant to the body, with a 24% abstinence rate.
Evaluation of biological interventions
• Fast acting
• Easy to administer
• Cheap, providing an affordable therapy
• Treat the symptoms, not the cause
• Rely on high levels of commitment by the patient
• Ethical issues, as many cause physical discomfort
• Agonist maintenance is introducing a new drug, which is questionable
The theory is based on a nature argument, yet it relies heavily on nurture (association) for these treatments to work.

Psychological
Behavioural Therapies – Aversion Therapy
The addict is made to think of or perform the addictive behaviour, such as an alcoholic being made to drink whilst electric shocks are administered; in theory the patient will come to associate their addiction with the shocks.
+/• Links with classical conditioning
• Ethical issues – a high level of consent is needed, along with protection of participants, though this is counterbalanced against the harm they already do to themselves
In vivo desensitisation – the addict is taken to a place associated with the addiction; after a long time it will get boring, and they will come to associate the addiction with boredom.
+/• Social learning theory would criticise this, as exposure to addictive behaviour could be a source of relapse
Imaginal desensitisation – the addict imagines cues for the addiction; while doing this they are taught relaxation techniques. The therapist goes through the list of cues and, if the addict is kept relaxed, they can beat the addiction.

Case Study
Psychologists used one of four techniques to treat 120 gamblers: imaginal desensitisation, aversion therapy, imaginal relaxation and in vivo exposure. Results were promising: in a 6-month follow-up of 63 clients, 18 reported abstinence and 25 reported controlled gambling. Imaginal desensitisation had the highest abstinence rate, at 78%.
Operant Conditioning – The Voucher (a modern-day token economy)
• 28 cocaine addicts on the programme had their urine tested several times a week
• 2.50 was given for every clear test, with a potential total of 17.50; however, if traces of cocaine were found, the total went back down to 2.50
• They were given counselling on how best to spend their money
Relaxation therapy – training in relaxation techniques that can be used when the urge to engage in the behaviour arises, e.g. breathing.
Satiation therapy – involves presenting the addict with no other stimuli and no other activities but those associated with the addiction.
+/• Possibly reinforcing the addiction (continuous reinforcement)
• Ethics?
• Protection of participants
• Can be used for all addictions

Evaluation of Behavioural Therapies
• Behavioural therapies are often used with another technique; a strength of this is that the treatment becomes non-reductionist, as more than one approach is used.
• Behavioural therapy may eliminate the behaviour but not the problem; it could be argued that the behaviour is caused by an underlying problem.
• The behaviourist approach suggests there is no FREE WILL; the therapies underestimate this, making them DETERMINISTIC.
• Treatment success for addicts might be overshadowed by public reaction – a token economy, for example, may be frowned upon for rewarding addicts.

Cognitive Therapies
Motivational Interviewing – a directive, client-centred counselling style to elicit behaviour change by helping clients come to their own decision to change.
+/• The therapy will only work if the client is motivated to stop
• Not very easy – a high level of responsibility is placed on the client to create self-awareness
Relapse Prevention – identifies situations that present a risk for relapse, both intrapersonal and interpersonal, then provides the addict with techniques for learning how to cope with temptation.
Cognitive Behaviour Therapy (CBT)
Both patient and therapist decide together on appropriate goals, the type and timing of skills training, whether someone else is brought in, the nature of outside practice tasks etc.
+/• Combines both the cognitive and behaviourist approaches, therefore non-reductionist
• However, this could suggest cognitive therapy is weak on its own
• It is good in that it gives the client choice, which encourages motivation

Evaluation of cognitive therapies
• Known as the talking therapy
• Articulate people are likely to do better at this therapy, as they can open up more
• Time consuming and expensive, as you need a therapist
• Only as good as other available treatments
• CBT treats the cause
• CBT doesn't assess genetics or vulnerabilities to addiction
An alternative is SELF-HELP THERAPIES, as they're cheap and have no side effects, for example Alcoholics Anonymous (AA).
Psychotherapy and addiction – this is based on Freud's approach and involves the discussion of relationships and feelings towards others; it can bring in past experiences and expectations in order to apply new knowledge to future decisions.
+/• The approach doesn't really confront the physical dependence involved in much addiction
• Eysenck suggests psychotherapy is no more effective than spontaneous recovery
• This is a very expensive therapy

Public Health Campaigns
Doctors' Advice – Russell et al. carried out a study in 5 doctors' surgeries; patients were encouraged to give up smoking and placed in one of four treatment groups. Doctors' advice was influential, even though the percentages were low.
/• Doctors' advice has limited success
• Doctors might give advice that the patient does not want to accept
• A doctor's number one role is not that of dealing with addiction
Workplace Interventions – for example, workplace bans on smoking, drinking etc. cause around 2% of people to quit, and most people compensate for the lack at home. Smoking more at home has a negative impact on children.
Community-based campaigns – Stanford's Five City Project targeted 2 cities for health education using many different methods.
These were compared to 3 cities which had not been targeted; results showed improvement in the treatment cities and a 13% reduction in smoking.

Norway and Gambling (state-led campaign) – changes included nationalising the gambling industry, banning the use of note acceptors on gambling machines, switching machines to online operation with central monitoring, relocating 50% of machines to age-restricted locations and up to 20% to locations where food and drink are served. The intervention has been successful and has reduced the amount of slot machine gambling, because the government was able to make big structural changes – successful due to its power.
Responsible Gambling Community Awareness Campaign – the campaign presents messages based on an early intervention approach targeting low- to moderate-risk gamblers, using a wide spread of media channels such as newspapers, bus shelters, stations, billboards, stadiums and cinemas. An online advertising component has also been included.
+/• Nurture approach – behaviour can change
• FREE WILL – recognises that behaviour can be altered
• Not cross-cultural – all in developed countries; however, it could be argued they are dealing with addictions found in developed countries
By Hal Brands and David Palkki On March 27, 1979, Saddam Hussein, the de facto ruler and soon-to-be president of Iraq, laid out his vision for a long, grinding war against Israel in a private meeting of high-level Baathist officials. Iraq, he explained, would seek to obtain a nuclear weapon from “our Soviet friends,” use the resulting deterrent power to counteract Israeli threats of nuclear retaliation, and thereby enable a “patient war”—a war of attrition—that would reclaim Arab lands lost in the Six Day War of 1967. As Saddam put it, nuclear weapons would allow Iraq to “guarantee the long war that is destructive to our enemy, and take at our leisure each meter of land and drown the enemy with rivers of blood.” Until recently, scholars seeking to divine the inner workings of the Baathist regime were forced to resort to a sort of Kremlinology, relying heavily on published sources as well as the occasional memoir or defector’s account. This is no longer the case. The transcript of the March 1979 meeting is one of millions of Baathist state records captured during and after the U.S.-led invasion of Iraq in 2003. These records, many of which are now being made available to scholars, include everything from routine correspondence to recordings and transcripts of top-level meetings between Saddam and his advisers. These records illustrate the logic (and illogic) of Saddam’s statecraft to an unprecedented degree. They also shed light on one of the most crucial questions pertaining to Iraqi foreign policy: Why did Saddam want the bomb? THE IRAQI NUCLEAR PROGRAM The Iraqi nuclear program commenced in the late 1950s, with the purchase of a Soviet-made research reactor. The program lagged amid chronic political instability for much of the next fifteen years, but accelerated when Saddam became head of the Iraqi Atomic Energy Committee in 1973. Saddam recruited Iraqi scientists to work on the program and concluded nuclear cooperation accords with France, Italy, the Soviet Union, and other countries. The deal with France provided Iraq with the 40-megawatt Osirak research reactor and highly enriched uranium. The agreement with Italy allowed Iraq to obtain fuel fabrication and plutonium reprocessing tools, as well as “hot cells” that could yield plutonium from the uranium processed by the Osirak reactor. At the outset of the 1980s, Iraq was reportedly within a few years of being able to manufacture a simple nuclear device. Saddam publicly claimed that the nuclear program was geared toward peaceful purposes, but the military applications of nuclear power were never far from his mind. Saddam believed that possessing a nuclear weapon would showcase Iraqi technological development, thereby furthering Baghdad’s claim to leadership of the Arab world. More concretely, he believed that obtaining the bomb would allow Iraq to deter attacks by its two main enemies—Iran in the east, and Israel in the west. “We have to have this protection for the Iraqi citizen so that he will not be disappointed and held hostage by the scientific advancement taking place in Iran or in the Zionist entity,” Saddam said in 1981. “Without such deterrence, the Arab nation will continue to be threatened by the Zionist entity and Iraq will remain threatened by the Zionist entity.” This view of nuclear weapons fits nicely with much of the scholarly literature on proliferation, which frames security-related motives for pursuing nuclear weapons overwhelmingly in defensive terms. 
For Saddam, however, nuclear weapons were about offense as well as defense, and his desire for the bomb had much to do with his revisionist objectives vis-à-vis Israel. SADDAM AND ISRAEL Throughout his time in power, Saddam viewed Israel through a prism of intense hostility. Saddam’s public statements, his discussions with foreign leaders, and his private comments to advisers were filled with references to the dangers posed by Israel and the deep antagonism between Iraq and the Jewish state. “Our worst enemy is Zionism,” he told subordinates in 1980. In private as in public, Saddam argued that the conflict between Arabs and Israelis was intractable, and that conflict was inevitable. “This issue between the Arabs and Israel will never be resolved,” he told advisers in October 1985. “It is either Israel or the Arabs….Either the Arabs are slaves to Israel and Israel controls their destinies, or the Arabs can be their own masters and Israel is like Formosa’s location to China, at best.” Saddam’s animus toward Israel flowed from several factors. There was, of course, opportunism—haranguing the Zionists always played well in Iraqi politics and the Arab world. There was also the long history of conflict between Israel and its Arab neighbors, a struggle that had flared during the early years of the Baathist regime. The Baathist government contributed one division to the Syrian front during the Yom Kippur War of 1973; Israel, for its part, sought to bleed and distract the radical Baathist government by supporting an insurgency among Iraq’s Kurdish population. These events, as well as the broader legacy of Arab-Israeli strife, weighed heavily on Saddam’s perceptions. “The Zionist entity is not weak and oppressed,” Saddam explained to his advisers. “It is not an oppressed entity seeking peace….It is a hostile, arrogant entity that is imposed on the Middle East region.” Saddam’s perceptions of Israel were also deeply wrapped up in his anti-Semitism. Saddam often claimed in public that his opposition to Israel was based on anti-Zionism rather than anti-Semitism, but there was no clean divide between these two influences in his thinking. Saddam often referred to Israelis as “the Jews,” and the sense that Jews and Israelis were devious individuals motivated by sinister designs was a virtual article of faith within the Iraqi regime. At Iraq’s Special Security Institute, students were told that “spying, sabotage, and treachery are an old Jewish craft because the Jewish character has all the attributes of a spy.” This assessment fit nicely with Saddam’s own beliefs. In one extended monologue, Saddam told his inner circle that The Protocols of the Elders of Zion was an accurate representation of Jewish/Israeli aims. “The Zionists are greedy—I mean the Jews are greedy,” he said. “Whenever any issue relates to the economy, their greed is very high.” Indeed, Saddam believed that the Protocols provided a blueprint of sorts for understanding Israeli designs: “We should reflect on all that we were able to learn from The Protocols of the Elders of Zion….I do not believe that there was any falsification with regard to those Zionist objectives, specifically with regard to the Zionist desire to usurp—usurping the economies of people.” Geopolitical conflict and opportunism thus merged with Saddam’s anti-Semitism to inform an intense hostility toward Israel and a belief that confrontation was inevitable. “The extortionist Zionist enemy cannot survive without erasing the whole Arab nation,” he said in 1979. 
The requirements of waging that confrontation were central to Saddam’s thoughts on nuclear weapons. PLANNING THE NEXT BATTLE During the late 1970s and early 1980s, Saddam frequently said that Israel had to be made to yield to military force and spoke of his desire for the “next battle.” He often implied that the conflict would be a Pan-Arab war under Iraqi leadership. On some occasions, he indicated that the outright destruction of Israel was envisioned; more often, Saddam seemed to foresee military action designed simply to force Israel back to its pre-1967 borders. If successful, such a war would significantly weaken Israel’s geopolitical position and make Saddam a hero throughout the Arab world. In a meeting with the Revolutionary Command Council following the signing of the Egypt-Israel peace treaty in 1979, Saddam described the possibility of war with Israel in vivid fashion. “This is what we envision,” he said. “We envision a war with the enemy, either with the Unity nation or with Iraqi-Syrian military effort, or with the Iraqi, Syrian, and Jordanian military effort that should be designed and based on long months and not just weeks….We have the capability to design it the way it should be designed. Do we really want a war in which we gain miles quickly, but then step back and withdraw, or do we want the slow, step-by-step war, where every step we take becomes part of the land and we keep moving forward? The step itself is not the most important thing here; even more important is the widespread cheering from the masses that will accompany each step we take forward, which will reach every corner of the Arab world.” Even as Saddam made these inflammatory comments, however, he acknowledged that climactic struggle could not happen immediately. This caution derived in part from a grudging respect for Israeli military prowess. “The Zionist enemy is a smart and capable enemy, and we must not underestimate him,” he warned in 1979. Fundamentally, though, Saddam feared that as long as Israel possessed a nuclear monopoly in the Middle East, it could respond to any Arab military strike in devastating fashion. “When the Arabs start the deployment,” Saddam told a group of military officials in 1978, “Israel is going to say, ‘We will hit you with the Atomic bomb.’” Here, in Saddam’s eyes, was the key strategic salience of the Iraqi nuclear program. If the Arabs attacked Israel without nuclear weapons, Saddam believed, their advances could be halted by Israeli nuclear threats. Yet if Iraq also possessed nuclear weapons, it could neutralize these threats by holding the Israeli citizenry hostage. This mutual deterrence would allow a conventional war of attrition that, Saddam believed, would favor the Arabs with their larger armies and greater tolerance for casualties, allowing them to liberate the Golan Heights and perhaps the West Bank. As he explained in 1978: When the Arabs start the deployment, Israel is going to say, “We will hit you with the atomic bomb.” So should the Arabs stop or not? If they do not have the atom, they will stop. For that reason they should have the atom. If we were to have the atom, we would make the conventional armies fight without using the atom. If the international conditions were not prepared and they said, “We will hit you with the atom,” we would say, “We will hit you with the atom too. The Arab atom will finish you off, but the Israeli atom will not end the Arabs.” Saddam returned to this theme a year later. 
Iraq should “go put pressure on our Soviet friends and make them understand our need for one weapon—we only want one weapon,” he said. We want, when the Israeli enemy attacks our civilian establishments, to have weapons to attack the Israeli civilian establishments. We are willing to sit and refrain from using it, except when the enemy attacks civilian establishments in Iraq or Syria, so that we can guarantee the long war that is destructive to our enemy, and take at our leisure each meter of land and drown the enemy with rivers of blood. For Saddam, nuclear weapons would be the great equalizer, the deterrent force that would allow him to wage a war of liberation to reclaim the Arab territories lost to Israel. Saddam was very much the amateur strategist, and he never delved deeply into the complex tactical, strategic, and logistical issues that any such war would raise. How would Iraq supply numerous divisions operating in a faraway theater? Would Syria and Jordan cooperate in the attack? Why would Israeli officials not see an assault on the Golan as prelude to an offensive into Israel proper, and thus conclude that there was no alternative to going nuclear? Would a conventional war truly favor the Arabs, who had been beaten by Israel several times before? Yet superficiality was never a barrier to action in Saddam’s Iraq: he ordered the Baathist military to attack Iran in 1980 and Kuwait a decade later with little or no advance preparation. And Saddam’s comments on the inevitability of war with Israel were sufficient to persuade certain advisers that he was sincere in his desire for an eventual war against the Jewish state. Saddam “had the confidence that he could accomplish this mission and eliminate Israel,” recalls Raad Hamdani, an officer who rose through the ranks in the 1970s and 1980s and would eventually become one of Saddam’s more trusted subordinates. “He expressed this confidence that he could accomplish this goal in many meetings I had with him.” Ultimately, of course, Saddam was never able to bring the Iraqi nuclear program to fruition or undertake large-scale military action against Israel. By 1980, the prospect of an Iraqi bomb had become quite frightening to the Israeli government of Menachem Begin, who termed Iraq “the bloodiest and most irresponsible of all Arab regimes, with the exception of Kaddafi in Libya.” In June 1981, an Israeli air raid destroyed the Osirak reactor, setting the Iraqi program back by several years. Saddam later sought to reinvigorate the program, but his plans were disrupted by the 1990-91 Persian Gulf conflict and the severe UN inspections regime subsequently imposed upon the regime. Saddam remained virulently anti-Semitic and bitterly hostile to Israel through the remainder of his time in power, but his dreams of a climactic confrontation ultimately went unrealized. SADDAM, WMD, AND THE GULF WAR The Persian Gulf conflict was satisfying for Saddam in one way, however. Iraqi forces launched some 40 conventionally armed SCUD missiles at Israel, damaging roughly 4000 buildings and injuring several hundred civilians. Twenty Israelis were killed, nearly all of them as a result of heart attacks or misuse of gas masks. Baghdad Radio trumpeted the attacks as a devastating blow against Zionism: “The hour of Arab victory has come and…the price of the Zionist crimes will be paid in full.” Why did Saddam attack Israel? 
The most common explanation is that he hoped to provoke Israeli retaliation against Iraq, thereby making it politically impossible for Egypt, Saudi Arabia, and other Arab countries to remain part of the US-led coalition. Indeed, Saddam was nothing if not opportunistic, and there were pronounced political and diplomatic overtones to the SCUD attacks. Iraqi newspapers predicted that the SCUDs would "liquidate every form of aggression and occupation, and every aggressive occupying entity, on Arab soil," and Iraqi missiles were given names meant to honor Palestinian stone-throwers. At one point, Iraqi engineers operationalized this imagery by filling the warhead of a missile launched at the Israeli nuclear complex at Dimona with cement rather than explosives. Yet the roots of the SCUD attacks went deeper than pure political opportunism. Saddam never obtained nuclear weapons, but he had assembled a massive chemical arsenal during the Iran-Iraq War. For Saddam, these weapons provided the essential deterrent power necessary to strike Israeli cities (with conventional weapons) without provoking an overwhelming nuclear response. Israel might retaliate with its own conventional missile strikes, Saddam believed, but it would probably have to refrain from using chemical or nuclear arms for fear of eliciting Iraqi chemical attacks. "It will be conventional, they will also reciprocate by attacking us with missiles," he predicted. If Israel did dare escalate, Iraq could do so as well: "We will use the other warheads, you know, in return for the warheads they use." Additionally, Saddam's actions were wound up in the history of Iraqi-Israeli relations and his own sense of historical destiny. When Iraqi missile forces struck Dimona, the attack was characterized as revenge for Israel's preventive raid on the Tammuz reactor a decade earlier. And Saddam, who had long styled himself as the statesman who would galvanize the Arabs, clearly took satisfaction in his ability to strike Israeli cities and terrorize the Israeli populace: "This scared them all, of course," he later said of the attacks. "Some of them choked in their masks and when this happened, they thought it was the chemical and they died even before they died." In this sense, the attacks represented the culmination of the hostility that Saddam had long evinced toward Israel. While Saddam hoped that acquiring nuclear weapons would provide regional prestige and security from foreign attack, his desire for the bomb was also thoroughly wound up with his revisionist aims regarding Israel. Saddam hoped to liberate lost Arab territories, and he believed that nuclear weapons would provide the deterrent power necessary to wage a conventional war against the Jewish state. Indeed, while the wisdom of the Israeli strike on Osirak is still debated, in the newly available Iraqi records Saddam makes the case for preventive Israeli action far more persuasively than Israel's own officials could have done at the time. When thinking about the potential consequences of nuclear proliferation, it is worth keeping Saddam's views in mind. The Iraqi case certainly does not invalidate the argument that some or even most states seek nuclear weapons primarily for defensive reasons. It does indicate, however, that as scholars, we need to consider more carefully the roles that offensive concerns play in pushing leaders to pursue the bomb.

Hal Brands is Assistant Professor of Public Policy at Duke University.
He is the author of From Berlin to Baghdad: America's Search for Purpose in the Post-Cold War World, and Latin America's Cold War. David Palkki is Deputy Director at the Conflict Records Research Center. This essay is adapted from a longer and more thoroughly documented essay, "Saddam, Israel, and the Bomb: Nuclear Alarmism Justified?" International Security 36, 1 (Summer 2011).

NOTES
1. SH-SHTP-A-000-553, "Revolutionary Command Council Meeting," March 27, 1979, Conflict Records Research Center, Washington, D.C. In this essay, captured Iraqi records are cited by CRRC number, title, and date. The best available translation of this passage is in Kevin M. Woods, David D. Palkki, and Mark E. Stout, eds., A Survey of Saddam's Audio Files, 1978-2001: Toward an Understanding of Authoritarian Regimes (Alexandria, Va.: Institute for Defense Analyses, 2010), pp. 262-263.
2. Central Intelligence Agency, Comprehensive Report of the Special Advisor to the DCI on Iraq's WMD (hereinafter Duelfer Report), September 30, 2004, Vol. 2, http://www.cia.gov/library/reports/general-reports-1/iraq_wmd_2004/index.html; Central Intelligence Agency, "Iraq's Nuclear Interests, Programs, and Options," October 1979, NLC-6-34-4-10-3, Carter Library (JCL); State Department, "Paper on the Iraqi Nuclear Program," November 21, 1980, NLC-25-47-1-13-9, JCL; and Anthony H. Cordesman, Iraq and the War of Sanctions: Conventional Threats and Weapons of Mass Destruction (Westport, Conn.: Greenwood, 1999), pp. 604-606.
3. Woods, Palkki, and Stout, A Survey of Saddam's Audio Files, p. 266.
4. See the discussion in Scott D. Sagan, "Why Do States Build Nuclear Weapons? Three Models in Search of a Bomb," International Security Vol. 21, No. 3 (Winter 1996-97), especially pp. 57-59.
5. SH-SHTP-A-000-751, "Meeting with Saddam Hussein," undated (1980).
6. SH-SHTP-D-000-567, "Meeting Between Saddam Hussein and Baath Party Officials," October 5, 1985.
7. Woods, Palkki, and Stout, eds., A Survey of Saddam Hussein's Audio Files, p. 93. See also SH-SHTP-A-000-858, "Meeting Presided over by Saddam Hussein," undated.
8. Woods, Palkki, and Stout, A Survey of Saddam's Audio Files, p. 75.
9. Woods, Palkki, and Stout, A Survey of Saddam's Audio Files, pp. 80-82.
10. SH-PDWN-D-000-770, "Remarks by Saddam Hussein," December 14, 1979.
11. SH-PDWN-D-000-341, "Speech at al-Bakr University," June 3, 1978.
12. For evidence of the more extreme aim of destroying Israel, see SH-SHTP-A-000-635, "President Saddam Hussein Meeting with Ministers," undated (1981-82); SH-PDWN-D-000-341, "Speech at al-Bakr University," June 3, 1978; and Kevin M. Woods, Williamson Murray, and Thomas Holaday, with Mounir Elkhamri, Saddam's War: An Iraqi Military Perspective of the Iran-Iraq War, McNair Paper No. 70 (Washington, D.C.: National Defense University Press, 2009), p. 94.
13. SH-SHTP-A-000-553, "Revolutionary Command Council Meeting," March 27, 1979.
14. SH-SHTP-A-000-553, "Revolutionary Command Council Meeting," March 27, 1979.
15. SH-PDWN-D-000-341, "Speech at al-Bakr University," June 3, 1978.
16. SH-PDWN-D-000-341, "Speech at al-Bakr University," June 3, 1978.
17. SH-SHTP-A-000-553, "Revolutionary Command Council Meeting," March 27, 1979.
18. On these cases, see Kevin Woods, The Mother of All Battles: Saddam Hussein's Strategic Plan for the Persian Gulf War (Annapolis: Naval Institute Press, 2008), chap. 5; Kenneth M. Pollack, Arabs at War: Military Effectiveness, 1948-1991 (Lincoln: University of Nebraska Press, 2002), pp. 182-203; Hal Brands, "Why Did Saddam Invade Iran? New Evidence on Motives, Complexity, and the Israel Factor," Journal of Military History 75, 3 (Summer 2011).
19. Quoted in Woods, Murray, and Holaday, Saddam's War, p. 94.
20. Quoted in Tel Aviv to the White House, July 19, 1980, Box 37/41, NSC Country File, Ronald Reagan Presidential Library.
21. Lawrence Freedman and Efraim Karsh, The Gulf Conflict, 1990-1991: Diplomacy and War in the New World Order (Princeton: Princeton University Press, 1993), p. 307.
22. FBIS-NES-91-014, "Talk Praises Attacks on Israel, Saudi Arabia," January 21, 1991.
23. Ofra Bengio, Saddam's Word (New York: Oxford University Press, 1998), pp. 200-201; Jerry M. Long, Saddam's War of Words: Politics, Religion, and the Iraqi Invasion of Kuwait (Austin: University of Texas Press, 2004), pp. 30, 115, 139.
24. Woods, Palkki, and Stout, eds., A Survey of Saddam's Audio Files, pp. 294-296.
25. SH-MISC-D-000-298, "Daily Statements on the Gulf War," various dates.
26. Woods, Palkki, and Stout, eds., A Survey of Saddam's Audio Files, p. 284.
As part of my studies, this post will be my notes on the routing protocol Open Shortest Path First (OSPF):
- What is OSPF?
- OSPF Structure
- Inter-Node Communication
- OSPF Packet Details
- OSPF Hello Messages Details
- Router-ID Selection Process
- OSPF Neighbour Adjacency Process
- Designated Router & Backup Designated Router
- Designated Router Election
- Non-Broadcast Multi-Access
- Router Types
- OSPF Route Types
- Link-State Advertisement Types
- Area Types

What is OSPF
Open Shortest Path First (OSPF) is an open-standard Interior Gateway Protocol (IGP). Unlike other routing protocols such as Routing Information Protocol (RIP), Enhanced Interior Gateway Routing Protocol (EIGRP) or Border Gateway Protocol (BGP), OSPF uses a link-state algorithm in conjunction with Edsger W. Dijkstra's Shortest Path First (SPF) algorithm. It sends out OSPF advertisements, known as Link-State Advertisements (LSAs), to share its local Link-State Database (LSDB) with other OSPF-enabled devices and so build an overall topology of every router, link state and link metric within the network. OSPF is defined in RFC2328:

OSPF is a link-state routing protocol. It is designed to be run internal to a single Autonomous System. Each OSPF router maintains an identical database describing the Autonomous System's topology. From this database, a routing table is calculated by constructing a shortest-path tree. OSPF recalculates routes quickly in the face of topological changes, utilizing a minimum of routing protocol traffic. OSPF provides support for equal-cost multipath. An area routing capability is provided, enabling an additional level of routing protection and a reduction in routing protocol traffic. In addition, all OSPF routing protocol exchanges are authenticated.

OSPF advertises and receives LSAs to/from neighbouring routers; these LSAs are stored in the router's local LSDB. Whenever there is a change in the network, new LSAs are flooded across the routing domain and all the routers have to update their LSDBs. This is due to the nature of the link-state and SPF algorithms: essentially, all OSPF routers must hold the same synchronized, identical copy of the Link-State Database in order to have a complete, loop-free map of the network topology.

OSPF Structure
OSPF can be described as a two-tier hierarchical structure, because there are two main area types: the Backbone Area and Non-Backbone Areas. The Backbone Area is known as Area 0, and Non-Backbone Areas are all other areas. All Non-Backbone Areas MUST connect to Area 0. It is important to note that OSPF routers in different areas DO NOT share the same synchronized, identical copy of the Link-State Database; however, routers within the same area will have an identical Link-State Database. This is because Area 0 provides transit for all Non-Backbone Areas: Non-Backbone Areas advertise their routes into Area 0, and Area 0 advertises all routes it has learnt to the other areas.
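To make the SPF calculation mentioned above a little more concrete, here is a minimal Python sketch of Dijkstra's shortest-path-first run over a toy link-state database. It is purely illustrative: the dictionary layout, router names and costs are invented for the example, and it ignores real OSPF behaviour such as equal-cost multipath, stub links and the different LSA types.

```python
import heapq

def spf(lsdb, root):
    """Dijkstra's shortest-path-first over a toy link-state database.

    lsdb: {router: [(neighbour, cost), ...]} -- each router's advertised links.
    Returns {router: (total_cost, first_hop)} from the root's point of view.
    """
    dist = {root: 0}
    first_hop = {}
    pq = [(0, root, None)]          # (cost so far, node, first hop used)
    visited = set()

    while pq:
        cost, node, hop = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        first_hop[node] = hop
        for neigh, link_cost in lsdb.get(node, []):
            if neigh in visited:
                continue
            new_cost = cost + link_cost
            if new_cost < dist.get(neigh, float("inf")):
                dist[neigh] = new_cost
                # leaving the root, the first hop is the neighbour itself
                heapq.heappush(pq, (new_cost, neigh, neigh if node == root else hop))

    return {r: (dist[r], first_hop[r]) for r in visited}

# Example: R1 reaches R3 via R2 (cost 20) rather than directly (cost 30).
lsdb = {
    "R1": [("R2", 10), ("R3", 30)],
    "R2": [("R1", 10), ("R3", 10)],
    "R3": [("R1", 30), ("R2", 10)],
}
print(spf(lsdb, "R1"))   # {'R1': (0, None), 'R2': (10, 'R2'), 'R3': (20, 'R2')}
```

Every router in an area runs this same calculation independently over its own copy of the LSDB, which is why the databases have to stay synchronized for the resulting forwarding decisions to be loop-free.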
Inter-Node Communication
Communication between OSPF routers is done, depending on network type, over IP using OSPF's own protocol number, 89, sending multicast OSPF packets between routers. Two multicast addresses have been defined for OSPF-enabled routers/interfaces to dynamically find neighbours. RFC2328 defines them as:

AllSPFRouters: This multicast address has been assigned the value 224.0.0.5. All routers running OSPF should be prepared to receive packets sent to this address. Hello packets are always sent to this destination. Also, certain OSPF protocol packets are sent to this address during the flooding procedure.

AllDRouters: This multicast address has been assigned the value 224.0.0.6. Both the Designated Router and Backup Designated Router must be prepared to receive packets destined to this address. Certain OSPF protocol packets are sent to this address during the flooding procedure.

OSPF Packet Details
As stated above, OSPF has its own dedicated IP protocol number as reserved by the Internet Assigned Numbers Authority (IANA). Within the protocol, OSPF exchanges 5 types of packets:
- Hello: Discovers and maintains neighbours. Hellos are sent to ensure that neighbours are still available and online.
- Database Description (DBD): Summarizes database contents. When an adjacency is being formed, this packet describes the contents of the Link-State Database being received.
- Link-State Request (LSR): Used to request more detail about a portion of the LSDB from another router, when some entries are regarded as stale.
- Link-State Update (LSU): Normally sent in response to an LSR packet; it provides an update to the LSDB as requested by a neighbour.
- Link-State Acknowledgement (LSAck): When a router receives an LSA flood, it responds to the flood to keep OSPF reliable.

OSPF Hello Messages Details
As stated earlier, OSPF packets are exchanged between routers to allow them to hold the same synchronized OSPF database. For adjacency discovery and maintenance, an OSPF Hello message is flooded out of all enabled interfaces; two routers whose hello parameters match will create an OSPF adjacency. The parameters carried in a Hello message are listed below; the first eight must be compatible between the two routers for an adjacency to form (the Router-ID must be unique rather than matching):
- Hello Interval: Amount of time between hello packets being sent and received.
- Dead Interval: How long to wait between hello packets before marking the neighbour as dead. By default the dead interval is 4x the hello interval; essentially, the router can miss four hello intervals before declaring the neighbour down.
- Area ID: Both neighbours must be in the same OSPF area.
- Subnet: For connectivity, both neighbours will need to be in the same subnet.
- Stub Area Flag: Used when the area has been defined as a Stub Area. Within OSPF, all areas that have been defined as Stub Areas mark their hello messages with the stub flag.
- Authentication: Secures communication between neighbours. This can be configured as None, Clear Text or MD5.
- OSPF Router-ID: A unique 32-bit ID number set in dotted-decimal format.
- Maximum Transmission Unit (MTU): As OSPF doesn't support packet fragmentation, the MTU must be the same on both sides. From my experience this is only changed if you are using jumbo packet sizing.
- Router Priority: Used to determine the Designated and Backup Designated Routers.
- Designated Router & Backup Designated Router: The IP addresses of the Designated and Backup Designated Routers.
- Neighbour List: A list of all the neighbours the router has received a Hello message from within the dead interval.

OSPF uses its AllSPFRouters address to send out hello messages across all OSPF-enabled interfaces. It is important to add that if an interface has been set as a passive OSPF interface, that interface will still be advertised into the OSPF routing domain, however hello messages ARE NOT sent out of it. From my experience this is commonly used on loopback addresses or external/customer-facing interfaces, as you would want to advertise the subnet into OSPF but you wouldn't want to start an OSPF neighbour relationship with your ISPs or customers.
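As a rough illustration of the parameter matching described above, the sketch below compares two routers' hello values and decides whether they could form an adjacency. The field names are invented for the example (they are not the on-the-wire field names), and in practice an MTU mismatch is actually detected during the Database Description exchange rather than in the Hello itself.

```python
from dataclasses import dataclass

@dataclass
class Hello:
    # Illustrative field names only, following the list of parameters above.
    hello_interval: int
    dead_interval: int
    area_id: str
    network: str          # subnet the interface sits in, e.g. "10.0.12.0/24"
    stub_flag: bool
    auth: str             # "none", "clear-text" or "md5"
    router_id: str
    mtu: int

def can_form_adjacency(local: Hello, remote: Hello) -> bool:
    """Rough check of the values two neighbours must agree on."""
    if local.router_id == remote.router_id:
        return False      # duplicate Router-IDs can never peer
    must_match = ("hello_interval", "dead_interval", "area_id",
                  "network", "stub_flag", "auth", "mtu")
    return all(getattr(local, f) == getattr(remote, f) for f in must_match)

r1 = Hello(10, 40, "0.0.0.0", "10.0.12.0/24", False, "none", "1.1.1.1", 1500)
r2 = Hello(10, 40, "0.0.0.0", "10.0.12.0/24", False, "none", "2.2.2.2", 1500)
print(can_form_adjacency(r1, r2))   # True
```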
The OSPF Router-ID is an important attribute when it comes to identifying a router within the OSPF domain. Each OSPF router has a Router-ID that is associated with the OSPF process, so it is possible to have two different processes active on a single router with two different Router-IDs. The OSPF Router-ID has to be configured in 32-bit dotted-decimal format, whether you are using OSPFv2 (IPv4) or OSPFv3 (IPv4 and IPv6), as discussed in RFC2328. As each router gets an ID number, it is important to note that these IDs have to be unique: no neighbours in the same OSPF domain can share a Router-ID. If two routers were to have the same Router-ID, they wouldn't be able to create a neighbour relationship. Additionally, other neighbours peered with both of them will have issues with OSPF updates that arrive from the same Router-ID but describe different link-state databases; this can cause an OSPF flood war.

OSPF Router-ID Selection Process
The process of selecting the Router-ID within OSPF follows this order:
- Hard-coding the Router-ID: If the Router-ID is manually configured under the OSPF process, this takes precedence over everything. This is recommended and best practice.
- Highest logical IP address: The highest loopback address configured on the router.
- Highest active physical IP address: The highest IP address configured on a physical interface on the router.
If you don't hard-code the Router-ID, you always need to remember, when making IP address changes on the router, that configuring a new loopback or interface IP address higher than the current OSPF Router-ID will change the Router-ID and can cause OSPF re-convergence if the process is cleared or the device is reloaded.
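The selection order above can be expressed as a small helper. This is only a sketch of the logic with made-up addresses, not a vendor implementation.

```python
import ipaddress

def select_router_id(configured=None, loopback_ips=None, interface_ips=None):
    """Mirror the selection order: hard-coded ID, else the highest loopback
    address, else the highest active physical interface address."""
    if configured:
        return configured
    for candidates in (loopback_ips or [], interface_ips or []):
        if candidates:
            return str(max(candidates, key=ipaddress.IPv4Address))
    raise ValueError("no usable IPv4 address to derive a Router-ID from")

print(select_router_id(loopback_ips=["10.255.0.1", "10.255.0.9"],
                       interface_ips=["192.168.1.1"]))      # 10.255.0.9
print(select_router_id(configured="1.1.1.1",
                       loopback_ips=["10.255.0.9"]))        # 1.1.1.1
```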
OSPF Neighbour Adjacency Process
Unlike other IGPs, OSPF has 2 neighbour adjacency states:

OSPF Neighbours: Two routers/devices that have stopped at the 2-Way neighbour state. At this state the neighbours have bidirectional connectivity and all the OSPF parameters match, but it is important to note that the neighbours DO NOT exchange their link-state databases in this state.

OSPF Fully Adjacent Neighbours: The two routers have the same bidirectional connectivity and all OSPF parameters match; in addition, each router exchanges its full link-state database with its neighbour and advertises the relationship in link-state update packets.

Within OSPF there are 8 neighbour states that two neighbours go through to become Fully Adjacent Neighbours:
- Down: The start state of neighbour communications. No hello messages have been exchanged.
- Attempt: Valid only for Non-Broadcast Multi-Access (NBMA) networks. A hello packet has not been received from the neighbour and the local router is going to send a unicast hello packet to that neighbour within the specified hello interval period.
- Init: The router has received a hello message from a neighbour, but has not seen its own Router-ID in the neighbour's hello. This means that bidirectional communication has not been established yet.
- 2-Way: Bidirectional communication between the neighbours has been established, but no link-state information has been exchanged. At this state an OSPF neighbourship has been created.
- ExStart: The neighbours start the process of becoming Fully Adjacent OSPF Neighbours and prepare to exchange their Link-State Databases.
- Exchange: Link-State Database details are sent to the adjacent neighbour. At this state, a router is capable of exchanging all OSPF routing protocol packets.
- Loading: The neighbour has exchanged its own LSDB, however it has not fully requested/received LSAs from its neighbour.
- Full: Both LSDBs have been exchanged and are fully synchronized. Each neighbour now has the full OSPF network topology available.

Designated Router & Backup Designated Router
OSPF has the concept of Designated and Backup Designated Routers (DR and BDR) for multi-access networks that use technologies such as Ethernet and Frame Relay, as on a LAN you can have more than two OSPF-enabled routers. Having a DR and BDR helps the scalability of an OSPF segment and reduces OSPF LSA flooding across the network. This is because the other routers on the LAN (DROthers) only create a full OSPF adjacency with the DR and BDR rather than with the other DROthers. The DR is solely responsible for flooding the LAN with LSA updates during a topology change. The flooding by the DR is controlled, as stated above, by the AllSPFRouters and AllDRouters multicast addresses: the DR floods LSAs to the AllSPFRouters destination address to communicate with the other routers on the LAN, and the DROthers send their LSAs to the DR and BDR using the AllDRouters destination address.

As the name suggests, the BDR's role is to be the secondary router: if the DR fails or becomes uncontactable, the BDR takes over as DR and another BDR is elected. The BDR has a full OSPF adjacency with the DR, just like the DROthers; however, unlike them, the BDR also listens on the AllDRouters address. This means that in the event of a DR failure, the BDR can take over as DR more quickly and there is less re-convergence across the network, as it is already synchronized with the DR and the DROthers, all of which hold the same LSDB.

Designated Router Election Method
The DR/BDR election process happens during the 2-Way state, when bidirectional communication has been established between the routers and hello messages have been received. OSPF uses interface priority and Router-ID to determine which routers are elected as DR and BDR. An OSPF router can have its interface priority set between 0 and 255 (an interface priority of 0 means the router is prohibited from entering the DR/BDR election), with the highest priority taking the role of DR and the second-highest priority becoming the BDR. If the priorities are all the same, the highest Router-ID is used as the tiebreaker. By default, OSPF's priority is 1 on Cisco IOS/IOS XR and 128 on Juniper. With Cisco IOS XR you are able to set the priority for all interfaces within an area globally as well as under the interface, whereas on Junos and Cisco IOS you can only set the priority under the interface. If an OSPF router receives a hello packet in which the Router-ID for the DR or BDR isn't 0.0.0.0, it will assume that the DR and BDR have already been elected and will become a DROther.
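The priority/Router-ID tiebreak described above can be sketched as follows. The router IDs and priorities are hypothetical, and the sketch deliberately ignores the non-preemptive behaviour of a live DR (an existing DR is not displaced when a "better" router joins, and the BDR is promoted rather than a fresh election being run); it only reflects the comparison rules themselves.

```python
def elect_dr_bdr(routers):
    """routers: {router_id: priority}.  Highest priority wins, Router-ID
    breaks ties; priority 0 means the router never takes part."""
    def rid_key(rid):
        return tuple(int(octet) for octet in rid.split("."))

    eligible = [(prio, rid_key(rid), rid)
                for rid, prio in routers.items() if prio > 0]
    eligible.sort(reverse=True)
    dr  = eligible[0][2] if len(eligible) >= 1 else None
    bdr = eligible[1][2] if len(eligible) >= 2 else None
    return dr, bdr

segment = {"1.1.1.1": 1, "2.2.2.2": 1, "3.3.3.3": 100, "4.4.4.4": 0}
print(elect_dr_bdr(segment))   # ('3.3.3.3', '2.2.2.2')
```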
Depending on what the Layer-2 topology looks like, a network can affect the behaviour of OSPF. A topology that uses Ethernet commonly allows multiple nodes on a LAN; in this case a Designated Router (DR) and Backup Designated Router (BDR) are used to cut down the OSPF LSA flooding, as Ethernet supports broadcast domains. Other media such as serial links or Frame Relay don't support broadcast domains, meaning a DR/BDR is not needed.
With this in mind, OSPF has 5 different network types:
Broadcast
A Broadcast network is one where an OSPF router is able to send a single (broadcast) message that reaches more than two other OSPF routers on the same multi-access segment, i.e. if Routers A, B and C are connected to a switch, when Router A sends out a Hello message it will be broadcast across the segment via the switch. With this in mind, a DR/BDR is required to control the LSA flooding across the segment. By default OSPF uses broadcast as the network type when configured on an Ethernet LAN. The hello/dead timers are 10/40 by default.
Non-Broadcast Multi-Access (NBMA)
This network type is used on links that do not support a broadcast domain, media such as Frame Relay, ATM and X.25, or topologies like hub and spoke where a router can connect to multiple nodes out of a single interface but isn't fully meshed. A Non-Broadcast network will still need a DR/BDR, as you could have multiple nodes on the segment. However, a Non-Broadcast network (as the name suggests) doesn't support broadcast or multicast; this means that OSPF's normal way of sending Hellos via the multicast address 224.0.0.5 to flood the LAN looking for neighbours will not work. Instead it sends unicast Hello messages to statically configured neighbours. The hello/dead timers are 30/120 by default.
Point-to-Point (P2P)
This network type is commonly used when you only have two devices on the segment, i.e. if you have Router A connected to Router B using a /31 or /30, that will be regarded as a Point-to-Point (P2P) network. This network type doesn't require a DR/BDR, as the two devices only have each other to communicate with and electing a DR/BDR would be a waste of router resources. In addition, it is important to note that P2P OSPF adjacencies form quicker, as the DR election is skipped and there is no wait timer. The hello/dead timers are 10/40 by default and it supports OSPF multicast Hello messages.
Point-to-Multipoint (P2MP)
This network type is commonly used in a partially meshed or hub and spoke network, where the Layer-2 topology doesn't logically match the Layer-3 topology. I.e. in a hub and spoke or Frame Relay network, Router A will be connected to Routers B and C, all on the same subnet; Layer-3 will assume Routers B and C are directly connected on the same LAN, whereas at Layer-2 Router B can only communicate with Router C by going via Router A. By using Point-to-Multipoint, each neighbour is advertised as a /32 endpoint, forcing the Layer-3 routing to match the Layer-2 by using longest prefix match. The hello/dead timers are 30/120 by default, it doesn't require a DR/BDR and it supports OSPF multicast Hello messages.
Loopback
This network type is enabled by default on all loopback interfaces and can only be configured on loopback interfaces. OSPF will always advertise a loopback address as a /32 route, even if the interface has been configured with a different prefix length. Hello messages, timers and DR/BDR are not associated with the Loopback network type.
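Pulling those defaults together, the small Python structure below is an illustrative summary only; the dictionary name and layout are my own, and the values are simply the defaults quoted in the descriptions above:

    # Default behaviour per OSPF network type, as described above.
    # (hello/dead timers in seconds; "dr_bdr" = whether a DR/BDR is elected)
    OSPF_NETWORK_TYPES = {
        "broadcast":            {"dr_bdr": True,  "hello": 10,   "dead": 40,   "multicast_hello": True},
        "non-broadcast (NBMA)": {"dr_bdr": True,  "hello": 30,   "dead": 120,  "multicast_hello": False},
        "point-to-point":       {"dr_bdr": False, "hello": 10,   "dead": 40,   "multicast_hello": True},
        "point-to-multipoint":  {"dr_bdr": False, "hello": 30,   "dead": 120,  "multicast_hello": True},
        "loopback":             {"dr_bdr": False, "hello": None, "dead": None, "multicast_hello": False},
    }

    for name, attrs in OSPF_NETWORK_TYPES.items():
        print(f"{name:22} DR/BDR={attrs['dr_bdr']} hello/dead={attrs['hello']}/{attrs['dead']}")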
The wider a network gets, the wider the OSPF domain will become. This can be an issue, as all of these routers need to maintain the same LSDB, and in a larger network more resources are used processing LSA flooding and running the SPF algorithm, which in turn makes the routers run inefficiently and possibly start dropping packets. A way of easing this issue is to introduce OSPF Areas.
OSPF Areas are used to reduce the number of routers in a single area, which in turn shrinks the LSDB size, restricts LSA flooding within/between areas, allows route summarization between areas and speeds up SPF calculations. This is because routers maintain their own LSDB on a per-area basis. Essentially, areas hide their own topology, and any LSA flooding or SPF calculation stays local to that area whilst the rest of the network stays unaware. Routers within the same area will have the same synchronized LSDB, while routers with interfaces in multiple areas will hold one LSDB per area.
Along with area types, OSPF has 4 different router roles, and depending on the topology a router can hold multiple roles at once. The table below describes the different router types and you can see where each of these router types could sit within a simple topology here
Backbone Router || A router that is located in and/or has a link(s) within Area 0 is known as a Backbone Router. If all of its links are within Area 0, it can also be known as an Internal Router.
Internal Router || An Internal Router is an OSPF router that only has links within a single area. If this router is within Area 0, it is also known as a Backbone Router.
Area Border Router (ABR) || An Area Border Router (ABR) is a router that has links in two or more areas. The ABR's role is to inject routes from non-backbone areas into the backbone. For a router to be an ABR, it HAS to have a link to Area 0; if it doesn't, then it won't be an ABR. It is considered a member of all areas it is connected to, and it keeps multiple copies of the link-state database in memory, one for each area to which it is connected.
Autonomous System Boundary Router (ASBR) || An OSPF router that learns routes from external routing protocols (BGP, IS-IS, EIGRP, another OSPF process) and/or static routes and injects them into OSPF via redistribution. ASBRs are special types of routers: you can have an ASBR that isn't an ABR, as the ASBR functions are independent of the ABR functions, but depending on the topology you could have a router that is both an ASBR and an ABR.
OSPF Route Types
OSPF has a unique relationship between how routes are exchanged between areas and how these routes are ranked in importance. There are 3 types of routes that are exchanged within OSPF: Intra-Area, Inter-Area and External routes, and External routes come in 2 different classifications:
Intra-Area Routes: these are routes that are learnt from routers within the same area. They are also known as internal routes.
Inter-Area Routes: these are routes that have been learnt from different areas. These routes have been injected via an ABR. They are also known as summary routes.
External Routes: these are routes that are learnt from outside of the OSPF domain. These routes have been learnt via redistribution by an ASBR. External routes have 2 classifications, Type 1 and Type 2.
- Type 1 Routes: for Type 1 routes, the metric value equals the Redistribution Metric + the Total Path Metric. This means that the metric value increases the further the route travels into the network from the injecting ASBR. Type 1 routes are also known as E1 and N1 external routes.
- Type 2 Routes: for Type 2 routes, the metric value is only the Redistribution Metric. This means that the metric value stays the same no matter how far the route travels into the network from the injecting ASBR. By default, Type 2 is the metric type used by OSPF. Type 2 routes are also known as E2 and N2 external routes.
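The difference between the two external metric types is easiest to see with numbers. The short Python sketch below uses invented metric values purely for illustration:

    def external_cost(redistribution_metric, path_metric_to_asbr, metric_type=2):
        """Cost of an OSPF external route as seen by a receiving router.
        Type 1 (E1/N1): redistribution metric + internal path metric to the ASBR.
        Type 2 (E2/N2): redistribution metric only (the OSPF default)."""
        if metric_type == 1:
            return redistribution_metric + path_metric_to_asbr
        return redistribution_metric

    # A prefix redistributed with metric 20, seen by routers 15 and 200 cost away from the ASBR:
    # the E1 cost grows with distance, the E2 cost stays at 20.
    for path in (15, 200):
        print(f"path={path:3}  E1={external_cost(20, path, 1):3}  E2={external_cost(20, path, 2):3}")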
The order of preference for these route types is as follows:
- Intra-Area
- Inter-Area
- External Type 1
- External Type 2
Link-State Advertisement Types
Devices in an OSPF domain use LSAs to build their local area's LSDB. These LSDBs are identical for devices in the same area, and different areas and different router types can produce different types of LSAs. There are 11 types of LSAs, however typically there are 6 LSAs that are commonly used and that should be known. These are:
Type 1 – Router
Every OSPF router advertises a Type 1 Router LSA; these LSAs are used to essentially build the LSDB. Type 1 LSAs are entries that describe the interfaces and neighbours of each and every OSPF router within the same area. In addition, these LSAs ARE NOT forwarded outside their own area, making the intra-area topology invisible to other areas.
Type 2 – Network
A Type 2 Network LSA is used on a broadcast OSPF segment with a DR. Network LSAs are always advertised by the DR and are used to identify all the routers (BDR and DROthers) across the multi-access segment. As with Type 1 LSAs, Network LSAs ARE NOT advertised outside their own area, making the intra-area topology invisible to other areas.
Type 3 – Summary
Summary LSAs carry the prefixes that are learnt from Type 1 and 2 LSAs and are advertised by an ABR into other areas. ABRs DO NOT forward Type 1 and 2 LSAs to other areas; when Network and/or Router LSAs are received by an ABR, they are converted into Type 3 LSAs with the Type 1 and 2 information referenced within. If an ABR receives a Type 3 LSA from a backbone router, it will regenerate a new Type 3 LSA, list itself as the advertising router and forward the new Summary LSA into the non-backbone area. This is how inter-area traffic is processed via the ABR.
Type 5 – External
A Type 5 External LSA is flooded throughout an OSPF domain when a route (or routes) from another routing protocol is redistributed via an ASBR. These LSAs are not associated with any area and are flooded unchanged to all areas, with the exception of Stub and Not-So-Stubby Areas.
Type 4 – Autonomous System Boundary Router (ASBR) Summary
When a Type 5 LSA is flooded to all areas, the next-hop information may not be available to other areas because the route(s) would have been redistributed from another routing protocol. To solve this, the ABR floods the Router-ID of the originating ASBR in a Type 4 ASBR Summary LSA. For Type 4 LSAs, the link-state ID is the Router-ID of the described ASBR. Essentially, for any routes that are redistributed into OSPF, when the first ABR receives the Type 5 LSA, it will generate and flood a Type 4 LSA.
Type 7 – Not So Stubby Area (NSSA) External
Routers in a Not-So-Stubby Area (NSSA) do not receive external LSAs from Area Border Routers, but are allowed to send externally redistributed routes to other areas. ABRs DO NOT advertise Type 7 LSAs outside of their local area; instead the ABR converts the Type 7 LSA into a Type 5 LSA and floods the Type 5 LSA across the OSPF domain as normal.
In addition to the LSA types above, the other LSA types within OSPF are:
- Type 6 – Multicast Extension LSA
- Type 8 – OSPFv2 External Attributes LSA, OSPFv3 Link-Local Only LSA
- Type 9 – OSPFv2 Opaque LSA, OSPFv3 Intra-Area Prefix LSA
- Type 10 – Opaque LSA
- Type 11 – Autonomous System Opaque LSA
Types 9 – 11 are defined in RFC5250 and RFC2370. They are typically used for the MPLS Traffic Engineering OSPF extensions. I personally haven't looked into these yet, however I will update once I have done more reading on them.
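As a quick reference, the illustrative Python mapping below (the names and layout are my own; the content is only what the descriptions above state) records who originates each common LSA type and how far it is flooded:

    # Common OSPF LSA types: originator and flooding scope, per the descriptions above.
    COMMON_LSAS = {
        1: ("Router",        "every OSPF router",    "own area only"),
        2: ("Network",       "the DR on a segment",  "own area only"),
        3: ("Summary",       "ABR",                  "between areas"),
        4: ("ASBR Summary",  "ABR",                  "between areas"),
        5: ("External",      "ASBR",                 "whole domain except stub/NSSA areas"),
        7: ("NSSA External", "ASBR inside an NSSA",  "NSSA only, converted to Type 5 by the ABR"),
    }

    for lsa_type, (name, origin, scope) in sorted(COMMON_LSAS.items()):
        print(f"Type {lsa_type}: {name:13} originated by {origin:22} flooded {scope}")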
OSPF defines several special area types:
Backbone Area (Area 0)
As described earlier, the Backbone Area, also known as Area 0, is the most important area in OSPF and there always has to be a Backbone Area. The Backbone Area MUST connect to all other areas, as non-backbone areas have to use Area 0 as a transit area to communicate with other non-backbone areas. This is because the backbone has all the routing information injected into it and advertises it back out. This design is important to prevent routing loops.
Stub Area
A Stub Area DOES NOT allow external routes to be advertised within the area. This means that when an ABR for a Stub Area receives Type 5 (External) and Type 4 (ASBR Summary) LSAs, the ABR will instead generate a default route for the area as a Type 3 Summary LSA.
Not So Stubby Area (NSSA)
A Not So Stubby Area is similar to a Stub Area in that it DOES NOT allow Type 5 External LSAs, however unlike a Stub Area, a Not So Stubby Area DOES allow external routes to be redistributed via an ASBR into the area. As described above, when a route is redistributed into the NSSA, a Type 7 NSSA External LSA is flooded throughout the area, and once an ABR receives the Type 7 LSA, it is converted into a Type 5 LSA and flooded into the other areas. It is important to add that, by default, the NSSA does not automatically advertise a default route when Type 5 or Type 7 LSAs are blocked by the ABR.
Totally Stubby Area (TSA)
A Totally Stubby Area DOES NOT allow any Inter-Area or External routes to be advertised within the area. Essentially, if the ABR receives a Type 3 Summary or Type 5 External LSA, it will generate a default route and inject it into the area. Totally Stubby Areas only allow Intra-Area and default routes within the area; the only way for traffic to get routed outside of the area is the default route, which is the only Type 3 LSA advertised into the area.
Totally Not So Stubby Area (TNSSA)
A Totally Not So Stubby Area DOES NOT permit Type 3 Summary, Type 4 ASBR Summary or Type 5 External LSAs to be received into the area. However, just like an NSSA, it allows external routes to be redistributed into the area via an ASBR. Just like an NSSA, when a route is redistributed into the area, a Type 7 NSSA External LSA is flooded throughout the area and, once an ABR receives the Type 7 LSA, it is converted into a Type 5 LSA and flooded into the other areas. But unlike an NSSA, when a TNSSA ABR receives a Type 3 LSA from the backbone, it will automatically generate a default route and inject it into the area.
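To summarise which LSA types each special area type accepts, here is a small illustrative Python table; it is a sketch derived only from the descriptions above, it ignores vendor-specific default-route options, and note that totally stubby and totally NSSA areas still receive a single Type 3 LSA for the default route:

    # Which externally-learnt LSAs each area type accepts, per the descriptions above.
    # (Totally stubby and totally NSSA still receive one Type 3 LSA: the default route.)
    AREA_TYPES = {
        "backbone (area 0)": {"type3_summary": True,  "type5_external": True,  "type7_nssa": False},
        "stub":              {"type3_summary": True,  "type5_external": False, "type7_nssa": False},
        "totally stubby":    {"type3_summary": False, "type5_external": False, "type7_nssa": False},
        "NSSA":              {"type3_summary": True,  "type5_external": False, "type7_nssa": True},
        "totally NSSA":      {"type3_summary": False, "type5_external": False, "type7_nssa": True},
    }

    for area, allowed in AREA_TYPES.items():
        blocked = [lsa for lsa, ok in allowed.items() if not ok]
        print(f"{area:18} blocks: {', '.join(blocked) if blocked else 'nothing'}")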
Chapter 6 - Classification of available data and techniques of imputing missing data
Planning for the compilation of SEAFA
Classification of basic data
Techniques for estimation of missing data
Commodity flow approach
Some remarks on the use of statistical tools
6.1 The compilation of SEAFA, including its associated principal aggregates, presupposes an ideal situation in which the statistician is not only aware of all activities and transactions of the system but also that regular and comprehensive data on outputs, inputs, prices, taxes, subsidies, purchases of assets, inventories and uses of output are available on a continuing basis. This is however a situation that may never occur. In practice, not even the minimum desirable amount of data are always available for compilation of the main aggregates such as output and value added; partly because the collection of complete sets of data is a gigantic task and partly because of the costs involved. Whatever the situation, however, the compilation of SEAFA and its associated aggregates is a basic necessity for analysis, policymaking and planning. The job of the statistician is to make judicious use of the available information by manipulating it to obtain a picture of the economy. The purpose of this chapter is to list some of the general techniques used in different situations and also their underlying assumptions. It is necessary to point out the underlying assumptions so that users may be sure that conclusions drawn from the data are not simply dependent on assumptions. For example, if data on consumption of fixed capital are not available and it is assumed that this constitutes "x" percent of gross value added, any statement comparing gross and net value added has no real significance.
Planning for the compilation of SEAFA
6.2 The framework, concepts and definitions of SEAFA have been explained in previous chapters. However, the compilation of SEAFA is not a routine arithmetical exercise where various sets of data are simply pooled together and presented in the framework. The compilation of SEAFA requires: careful examination of the various databases available; processing of these databases to achieve complete coverage in a logically consistent way; and pooling them together, keeping in view their reliability and gaps. Work on the compilation of SEAFA can be divided into four broad stages as listed below: Study the production process of the economy, keeping in view the appropriate activities listed in ISIC. List the principal agricultural products (crop and livestock), the system for providing agricultural services, agro-based industries, etc., the dependence of the economy and agricultural activity on imports and the amount of agricultural goods being exported. Prepare a detailed list of various sources of data on output, the production or import of inputs and their utilization, items used as inputs for agro-based industries along with their sources of supply (i.e. domestic or imported), etc. that are required for the compilation of SEAFA. The list may also indicate the coverage and periodicity of these elements. While doing this exercise the statistician might also note that such a list could also indicate which of the three approaches (viz. the production, income and expenditure approaches) can be used for the measurement of value added.
6.3 The production approach takes into account the production process.
In this approach, value added is derived by deducting from the total value of output the goods and services purchased from other producer units and used as intermediate inputs in the production process. In the income approach, the cost structure of the producer unit is studied from financial data on income and expenditure. The sum of the compensation paid to employees and the operating surplus or mixed income (plus any taxes less subsidies payable on production) is taken as the measure of value added. In the expenditure approach the final use of output, i.e. output being used as final consumption (by the population, general government and non-profit institutions serving households), gross fixed capital formation, exports and changes in inventory, less imports, is recorded to measure value added or GDP. This last approach may not be appropriate for SEAFA and is normally used to prepare estimates for the economy as a whole. A critical review of the listed data may be made to examine: reliability; areas for which no data are available; and areas for which more than one set of data are available. 6.4 For areas where more than one set of data are available, only one set of data, the more reliable, would be used to prepare estimates. It may be useful, however, to monitor the reliability of all sets continuously. For example, the output of some crops may be available from crop estimation surveys and from marketing boards dealing with trade of the crop concerned. In such cases, depending on the coverage of the two data sources and the precision of the two sets of data, it may be decided to use only one of the sources to prepare the estimates, but the reliability of both sets of data should be compared every year. 6.5 Sometimes, the data may be such that the sector is divided into two sub-sectors and a different approach adopted for each of these sub-sectors. For example, for crop and animal husbandry, reliable data on output as well as intermediate consumption may be available through the production approach, but for the operation of irrigation systems, data may be available on income and expenditure, so that it is feasible to follow the income approach. Thus, the agriculture sector could be divided into two sub-sectors, one covering agricultural crop and livestock production and the other covering the operation of irrigation systems. However, in such a situation, payments made for using water from the irrigation system would have to be considered as intermediate consumption in crop and livestock production to avoid double counting. Prepare the required breakdown of output, intermediate consumption, etc. using supplementary data and ad hoc studies and merge the data sets to get the final SEAFA. The following discussion goes into details of the classification of basic data and explains techniques for imputing missing data.
Classification of basic data
6.6 The data for compilation of SEAFA are available either from statistical enquiries (census, survey or case-study) or as a by-product from administrative records. The data can be classified in two ways -- by their periodicity or by their purpose -- and divided into three main groups according to the frequency with which they are collected, viz. regular (for each period), periodic (at fixed intervals e.g. every five or ten years) or ad hoc (i.e. only once in a while). The data can also be divided into three groups according to the purpose for which they have been collected, viz.
primary (where data are collected directly for the purpose for which they are to be used through statistically designed censuses, surveys or case-studies), secondary (where data are collected for some other purpose but can also be used for the purpose under consideration) and tertiary (where the data are not directly relevant but, by making certain assumptions, can be used for the purpose under consideration). Taking the two criteria of periodicity and purpose together, sets of data that can be used to compile the SEAFA can be divided into nine groups as shown in Table 6.1. 6.7 In Table 6.1, as the cell identification number (calculated by numbering the rows consecutively from top to bottom and the columns from left to right and then multiplying the row number by the column number of the cell) increases from unity, the reliability and representativeness of the estimate derived from the data in the cell decreases and progressively more care is required when using the data for the compilation of SEAFA. This implies that countries tend to have more reliable data for the compilation of SEAFA in the years in which they take censuses and similar surveys than in other years. Therefore one of the guiding principles for conducting censuses and surveys is that activities that change frequently and have a significant share in the total should be covered regularly in the scheme of data collection. Another important conclusion to be drawn from the table is that the reliability of the estimate of an aggregate could be worked out by classifying the basic data in the above groups and combining them according to the contribution they make to the calculation of the aggregate. Using this method it can be stated that a particular share of the data used to prepare the estimate is available directly and regularly and so a specified share of the estimate is based on direct data. Sometimes it is also possible to give margins of error for data that fall under different headings, thus allowing the margin of error in the 'estimate' to be calculated using the share of such data in the total value. For example, let us say that 80 percent of the estimate of the value of output is based on (1,1)-type data with an error of +/- 5 percent, 15 percent comes from (2,1)-type data where the error is +/- 10 percent and the remaining 5 percent belongs to the (2,3)-type with an error of +/- 15 percent. Thus the combined margin of error in the total value of output would be: 0.80 x 5 + 0.15 x 10 + 0.05 x 15 = +/- 6.25 percent.
Table 6.1 Classification of Data (rows: type of periodicity; columns: type of purpose)
Regular || Primary: Direct data on crop production, producer's prices || Secondary: Yield or cost structure of a crop from neighbouring areas || Tertiary: Price data on machinery and equipment (general) applied to agricultural machinery
Periodic || Primary: Agricultural or livestock census || Secondary: Cost of production study done at regular intervals for one crop used on another crop || Tertiary: Consumer expenditure surveys to estimate final expenditure on agricultural products
Ad hoc || Primary: Census/survey of age of fixed assets used in agriculture || Secondary: Tabulation of government purchases of agricultural goods to allocate draught animal purchases || Tertiary: Use of results of a transport survey to allocate between the agricultural and non-agricultural sectors
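The worked example in paragraph 6.7 is simply a share-weighted combination of the component error margins. The short Python sketch below (the function name and layout are my own; the shares and margins are the ones quoted in the example) reproduces the +/- 6.25 percent figure:

    def combined_margin_of_error(components):
        """components: list of (share_of_estimate, margin_of_error_percent).
        The overall margin is the share-weighted sum of the component margins."""
        assert abs(sum(share for share, _ in components) - 1.0) < 1e-9
        return sum(share * margin for share, margin in components)

    # 80% of the value of output from (1,1)-type data (+/-5%), 15% from (2,1)-type (+/-10%),
    # 5% from (2,3)-type (+/-15%):
    print(combined_margin_of_error([(0.80, 5), (0.15, 10), (0.05, 15)]))   # -> 6.25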
Techniques for estimation of missing data
6.8 Once the basic data have been classified, one of the following four alternative types of situation generally arises:
· Type A situations occur when direct data are available on a regular basis according to the period for which estimates are required. Examples of this type include data on the production and prices of major crops.
· Type B situations include cases where direct data are available at different points of time or at a single point. In such cases techniques of interpolation or extrapolation may be applied.
· Type C situations occur when only partial data are available on a regular basis. In such cases, imputations have to be made for the remaining data.
· Type D situations occur when no data are available and the whole item needs to be estimated.
6.9 The compilation of aggregates or items of SEAFA in the last three types of situation requires special care and methods. In a Type B situation detailed benchmark estimates are generally prepared for the year(s) for which data are available. The benchmark estimates are extrapolated to other years until data for another point of time become available. While extrapolating the benchmark estimate, it should be disaggregated into its different components when these are known to have different growth patterns. For example, values are disaggregated into their quantity and price components. Livestock are disaggregated by type, sex, age and variety (indigenous or improved). The different components of the estimates are extrapolated independently and checked against new data for another point of time when those become available. If necessary, data for previous years are revised after getting new data for the next point of time.
6.10 There are two ways of extrapolating benchmark estimates. The first, and preferred, technique is to use information (data) on some other variable that is highly correlated with the variable under consideration. In this method, the true movements (i.e. ups and downs) can be seen in the projected time series. However, needless to say, it is necessary to have direct evidence on the correlation between the two series. When fresh data for any other point of time become available, a critical analysis should be undertaken to explain the difference between the projected and actual estimates. For example, to project the price of machinery and equipment used in the agricultural sector, general wholesale, retail or import prices of machinery and equipment can be used, depending on the local situation and availability of data. However, when new data for the projected time point become available and a significant difference is observed between the actual and projected prices, it is necessary to determine the reason for this. It may be caused by any of the following factors: the general price level of machinery and equipment might have been influenced by some special type of machinery that was not included before; the agricultural machinery industry itself might have changed with the arrival of a new factory or new type of machinery; and/or a substantial inflow of machinery from abroad might have influenced the market. In such cases, it would be necessary to identify the year when the change has occurred, to correct the independent variable and to revise estimates for previous years. However, such a situation would not occur if the system is constantly reviewed and data on those items that are sensitive for the compilation of SEAFA are collected regularly. 6.11 The second approach is either to fit a trend line to the data (assuming that the data are available for more than one point of time) or to express the variable as a ratio of another variable to which it is known to have a direct relation and for which regular data are available.
Numbers of livestock from livestock censuses are an example where the trend line approach is applicable. An example of the second approach occurs when the quantity of seed can be expressed as a ratio to the area sown under the crop. In such cases it is also necessary to establish that the two series are directly related. For example, in estimating the value of seed through area under the crop, it is necessary to see that the relationship is expressed in constant prices or in quantity terms rather than in current values. The derived series will have to be adjusted independently for changes in the price level. 6.12 In a Type C situation where partial data are available, it is necessary to compute the component for which information is not available. Consider, for example, the estimation of production of a crop that is either new or relatively unimportant for the area and so is not covered in the scheme of crop cutting experiments. However, data on the area under the crop are available from sources such as land records, crop insurance records and government extension schemes. In such a case, it is necessary to impute the crop yield to get an estimate of the production. There are usually two choices available: either the yield of the crop is imputed from data for neighbouring areas, keeping in view the seed variety and agro-climatic conditions, or the yield of some other crop which is similar to the crop under consideration and grown in the same area is taken as the imputed value of the yield. Examples of the latter type of situation arise when area is available for 'other coarse cereals' or 'other fruits and vegetables' that are not explicitly recognized. In such cases, either the yield of a known crop, which is a close substitute, or the weighted or simple average of the yield of a group of crops is taken. 6.13 The Type D situation, where no direct data are available, presents a real challenge for the statistician's ability to impute the value of a missing observation. The following factors may be borne in mind: what is the practice in other countries in the same part of the world where the situation is the same? if relevant data are available, what method and database were used in the past either to prepare estimates for this particular item or in macroeconomic or economic studies of such aggregates? what is the likely size of the activity? and what cross-checks are available to test the reliability of the imputation? For example, if no estimate of own consumption of agricultural goods is available, the estimate for the benchmark year could be made by multiplying the per caput consumption norm from a survey covering the rural areas by the total population dependent on agriculture. The benchmark estimate can be extrapolated to other years on the basis of total annual output and growth of the population. However, allowance would have to be made for relative movements of the prices of different commodities that enter into the consumption basket, changes in consumption habits (if the series is sufficiently long), etc. A consistency check with total population, the size of the population that is not dependent on agriculture, domestic production, imports and exports, etc. would have to be repeated every year until direct data become available. 6.14 The kind of manipulation of available information described above is a necessary evil for any exercise in which official statistics are used to compile a secondary database such as SEAFA.
At the same time, it must be remembered that the application of checks of the type discussed above does not guarantee the accuracy of any particular estimate but can only ensure that a relatively consistent picture is provided by the set of data as a whole. Checks of the type discussed above (own account consumption) are based on total utilization of the goods produced. The accuracy of the estimate depends on the imagined picture created and the contents of the data available. No statistician should hesitate to use such techniques when necessary, but constant monitoring is required.
Commodity flow approach
6.15 The commodity flow approach was developed in the 1930s to prepare estimates of private consumption expenditure but is also widely used to prepare estimates of fixed capital formation. The logic of the approach is based on calculating the total availability of each good or service (or group of goods or services) by adding domestic output, imports and opening inventories and subtracting closing inventories to obtain the goods and services that are available for intermediate and final uses. The net availability calculated in this manner is further adjusted for trade and transport margins and taxes or subsidies on products to arrive at values at purchasers' prices. The goods and services available are allocated to different uses in the form of government purchases, gross fixed capital formation, exports and intermediate consumption. The balance of goods and services remaining, after adjusting for wastage, would be available for final household consumption. A similar procedure can be followed to prepare estimates of gross fixed capital formation.
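As a rough illustration of that balance, the Python sketch below derives household final consumption as the residual of the commodity flow; the function name and all figures are hypothetical, and the margin and tax adjustments are collapsed into a single term:

    def commodity_flow_residual(output, imports, opening_inv, closing_inv,
                                margins_and_taxes, govt, gfcf, exports,
                                intermediate, wastage):
        """Household final consumption estimated as the residual of the commodity flow:
        total availability, valued at purchasers' prices, less all other recorded uses."""
        availability = output + imports + opening_inv - closing_inv
        availability += margins_and_taxes          # trade/transport margins, taxes less subsidies
        other_uses = govt + gfcf + exports + intermediate + wastage
        return availability - other_uses

    # Hypothetical figures for one commodity group (same currency units throughout)
    print(commodity_flow_residual(output=1000, imports=200, opening_inv=50, closing_inv=70,
                                  margins_and_taxes=120, govt=80, gfcf=150, exports=220,
                                  intermediate=400, wastage=30))   # residual -> 420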
Some remarks on the use of statistical tools
6.16 As already mentioned, the use of statistical tools such as averaging and ratio or regression techniques is unavoidable while processing the data. However, some judgement and caution are required while using these techniques to obtain SEAFA aggregates from official statistics. Some of the more important points are mentioned below for ready reference. (a) Average: Averaging of primary data is essential in order to present the main features of the data. The use of either simple or weighted averages is prevalent and the decision as to whether to use one or the other does not depend entirely on the availability of data. The homogeneity of the data is one of the important criteria used to decide whether a simple or weighted average of a given data set should be taken. Before processing the data it is necessary to analyze the sources of variation in the observations. Variation may be caused by either the geographical conditions prevailing in the area where the data are being collected or by other factors, e.g. the intervention of external economic forces such as changes in the quantum of imports or exports, varietal differences, consumer preferences and season. The next question that requires examination is whether the volume of the activity that is affected by the variation is the same or evenly distributed over the total period. If not, it is necessary to give due weight by stratifying the sources from which the data emerge and estimating their shares. Even if no quantitative character is readily available, it is necessary to impute a weight (i.e. to give more weight to places that cover larger volumes) to represent the true situation. Consider the example of a good (fruit or vegetable) that is sold throughout the year but that has high prices during its off-season of about ten months. A simple average to represent the annual price level of the good would give an equal weight to all the prices throughout the year. In actual practice more than 60 percent of the output may be sold in the first two months, higher prices in successive months being caused by storage charges, wastage in storage and trade margins. Thus, in the succeeding months the observations do not represent only the "pure" good but "the good" plus "some services". Even if consumer or purchasers' prices are being estimated, equal weighting of the prices recorded in different months is not justified. If weights representing sales in different months are not available, they may be imputed on the basis of the consumer population belonging either to different income or consumption classes or living in different agglomerations, e.g. city, town, village, rural or urban areas. 6.17 Another important aspect of averaging relates to the treatment of missing observations. Missing observations play a vital role in time series. If proper estimates of missing observations are not made before averaging the data, the average may present a completely false picture. Consider the example of a price or wage series where data are regularly collected from 100 sites (say). At a particular point it is noted that returns from only 80 out of the 100 sites have arrived by the date when results have to be presented. In such a situation, a simple average of the available observations, if taken as representing the level in the current year vis-à-vis last year's level, may present a misleading picture. The statistician has two alternatives to estimate the true situation. The first alternative selects those sites for which returns are available for both years, calculates the average growth rate for these sites and applies this growth to the overall level reported for the previous year. In this case, the growth rate in the selected sites is assumed to be the same as that for the excluded sites. The second alternative requires that the statistician be aware of the local conditions of the total population (universe), in which case the missing observations can be estimated individually, with the help of the available data, for each of the 20 missing sites. In this case, the variation in the individual site has been taken into account. However, whichever alternative is used, the calculated average should be revised after more data are received. 6.18 In either of the two situations (i.e. choice of weights or estimation of missing observations) a better judgement can be made if the coefficient of variation of the cross-sectional data is calculated.
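The first of the two alternatives described in paragraph 6.17 can be sketched in a few lines of Python; the function name and the site returns below are invented, and the sketch assumes that the growth rate of the matched sites applies to the missing sites:

    def estimate_current_level(last_year, this_year, overall_last_year):
        """last_year / this_year: dicts of site -> reported value.
        The growth rate observed at the matched sites (those reporting in both years)
        is applied to the overall level of the previous year; missing sites are assumed
        to have grown at the same rate as the matched ones."""
        matched = [s for s in this_year if s in last_year]
        growth = sum(this_year[s] for s in matched) / sum(last_year[s] for s in matched)
        return overall_last_year * growth

    # 3 of 4 sites have reported this year; site D is missing
    last = {"A": 100, "B": 120, "C": 90, "D": 110}
    this = {"A": 105, "B": 126, "C": 99}
    print(estimate_current_level(last, this, overall_last_year=sum(last.values())))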
(b) Ratio or regression techniques: Ratio or regression techniques may be used to extrapolate data. Obviously a choice has to be made between the two methods when data are available in the form of a time series. The situation can be illustrated with the help of an example. Consider the time series of goat and cattle meat prices that are available for a country or region for a long period, although data for one of the series are not available for the latest year. In such a case, either the ratio or the regression method can be used to estimate the missing observation(s). When the ratio method is chosen, the implicit assumption is that the rate of inflation for goat and cattle meat in the period with missing data is the same as for the last year in which the ratio has been worked out. On the other hand, if the regression technique is chosen, the fitted regression equation would also take into account the differential rate (trend) of inflation. Another possibility in such a situation is to include some other explanatory variable, such as animal imports, in the regression equation in order to improve the estimate. Use of these methods depends on the length of the series (since at least 12 degrees of freedom are necessary to estimate the error component), the time available to prepare the estimate and the availability of explanatory data.
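A minimal sketch of the two options is given below in Python; the price series are invented, and the 'regression' shown is just an ordinary least-squares fit of one series on the other, which is only one possible specification of the regression described above:

    # Illustrative annual average prices; goat meat is missing for the latest year.
    cattle = [200, 210, 222, 231, 240, 252, 265]
    goat   = [150, 158, 168, 175, 183, 192]        # one observation short

    # Ratio method: apply the last observed goat/cattle ratio to the latest cattle price.
    ratio_estimate = cattle[-1] * (goat[-1] / cattle[-2])

    # Regression method: least-squares fit of goat on cattle over the overlapping years.
    n = len(goat)
    x, y = cattle[:n], goat
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    intercept = mean_y - slope * mean_x
    regression_estimate = intercept + slope * cattle[-1]

    print(round(ratio_estimate, 1), round(regression_estimate, 1))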
Aluminum news, articles and information: |10/26/2016 - The presence of heavy metals in infant formulas has been a long-standing cause for concern. Even back in 1988, researchers discovered that aluminum contaminated many formulas, though some more than others. This early study noted that in human breast milk, aluminum appeared in fairly low amounts –...| |10/14/2016 - Mercury is the most toxic non-radioactive element on earth, and one scientific fact that most Americans do not know is that aluminum, listed as aluminum phosphate and aluminum hydroxide in vaccines, greatly increases the toxicity of mercury (listed as thimerosal). Why is this so alarming? We already...| |10/6/2016 - According to the CDC's own statistics, more than 10 million vaccines are given every year to infants. Not so well publicized, however, is the fact that aluminum -- a known neurotoxin -- is routinely used as an additive in many common vaccines to increase their long-term effects. In fact, children...| |10/4/2016 - It has long been surmised that the aluminum compounds used in many antiperspirant deodorant products may be a cause of cancer. But a new study out of Switzerland confirms this to be true, showing that aluminum chloride, a common additive in antiperspirant deodorant that blocks moisture, exerts an estrogen-like...| |9/29/2016 - According to the Food and Drug Administration, the amount of aluminum in modern vaccines that infants are exposed to in their first year is about 4.225 mg, which the agency considers to be low, and thus safe. But some experts believe that there are vaccines being given to children today that contain...| |9/26/2016 - Much of the controversy over additives in vaccines has centered around the mercury-containing preservative thimerosal, which is no longer found in childhood vaccines except for flu shots. But did you know that many childhood vaccines contain potentially toxic, brain-damaging levels of aluminum? |9/21/2016 - The non-profit Consumer Wellness Center has just published heavy metals tests results for eight of the most popular brands of Shilajit, a black tar-like substance that oozes out from between the rocks of mountains. See published results here. Many U.S. dietary supplement consumers are eating Shilajit...| |9/16/2016 - One of the main avenues of exposure to toxic contaminants is through the foods we eat. It's important to carefully choose the ingredients you use to cook with, but even if you're eating only fresh, organic, natural foods from your own garden or other trusted sources, you may still be inadvertently adding...| |7/29/2016 - Using its well-funded position as the authoritative voice on Alzheimer's, the Alzheimer's Association has made some odd statements regarding certain factors believed to contribute to the risk of Alzheimer's. On its website, the organization claims that known probable and potentially contributing factors...| |7/9/2016 - Longtime readers of Natural News are already aware that global honeybee populations are on the decline – so much so, that now researchers are quickly trying to develop "robo-bees" to replace them. The problem is, no one really knows if bot bees can mimic the pollination techniques of the real...| |6/27/2016 - Have you ever thought about how much aluminum may be in your drinking water? It might pay you to find out, and in fact, you can do that by sending a water sample to Mike Adams, the Health Ranger, for testing. Why is it important to know? 
Because, according to a 15-year French study of elderly men...| |4/18/2016 - Aluminum is a metal found in many childhood vaccines. The stated role of aluminum as an adjuvant in vaccines is to enhance the immune response to the main ingredient in the vaccine. This would be either a virus or bacterial component. Vaccines which currently contain aluminum are: hepatitis A, hepatitis...| |2/25/2016 - Could sulfur be the key to saving autistic children from the toxic tragedy that's been besieged upon them? Could organic sulfur crystals actually REVERSE autism? This is to be carefully considered now. Sulfate deficiency has been specifically identified in the cerebral spinal fluid of autistic children...| |2/18/2016 - New research looking at what appears to be a major contributor to male infertility, aluminum, has powerful implications for the safety of childhood vaccinations and flu shots, which are loaded with the toxic metal. Researchers from both the UK and France found that the higher the level of aluminum inside...| |11/2/2015 - Based on my recent articles revealing the composition and function of zeolites, some readers are asking questions to gain a deeper understanding of what zeolites are made of, how they work, and where they don't work. Below, I present three of the most common questions, along with informative answers. |11/2/2015 - Much of what we've all been told about zeolites over the years is a myth. Sadly, even articles published on Natural News have inadvertently repeated this myth, although I am now directing staff to update all zeolite articles on Natural News to reflect the latest scientific findings I'm releasing on...| |10/30/2015 - I want to thank all the readers who participated in the outpouring of support following my publishing of laboratory test results for zeolites. As I wrote yesterday, off-the-shelf zeolites are composed of remarkably high concentrations of lead (10, 20, 40 or even 60 ppm) and strikingly higher concentrations...| |10/29/2015 - After publishing a short story yesterday that warned about high lead and aluminum levels in zeolites (while also pointing out the clinical use of zeolites for blocking cesium-137), I awoke today to a stream of legal threats from a zeolite manufacturer who didn't like what I had to say. I'm used to...| |10/29/2015 - All zeolites contain high concentrations of aluminum and lead. But when zeolites are in granular form, these elements aren't digestible in the human body because intact, granular zeolites pass through your digestive tract like tiny rocks. However, when zeolites are "micronized" and ground into a...| |10/17/2015 - Aluminum foil (often erroneously referred to as tin foil) is possibly the most versatile item in your kitchen. I could probably list 100 different uses for aluminum foil, but for this post I'm going to focus on uses that might interest preppers and homesteaders. (Story by Alan, republished from UrbanSurvivalSite.com.) |6/22/2015 - An insect form of Alzheimer's disease caused by aluminum contamination may be one of the causes behind an ongoing decline in populations of bees and other pollinators, according to a study conducted by researchers from the universities of Keele and Sussex and published in the journal PLOS ONE. |5/7/2015 - Legendary-fool-posing-as-medical-doctor Paul Offit of the Philadelphia Children's Hospital has publicly declared that newborn babies thrive on being injected with the soft metal aluminum, which is commonly added to many childhood vaccines. 
In a certifiably insane statement posted on his "Vaccine Education...| |4/17/2015 - Maniac vaccine fanatic Paul Offit of Philadelphia Children's Hospital, who believes that a newborn child is capable of sustaining 10,000 vaccinations at once, now says that aluminum is an essential metal that plays an important role in the development of a healthy baby. This grim reaper of the vaccine...| |3/16/2015 - As Michael Snyder points out in a timely article at The Economic Collapse Blog, California is rapidly reverting back to the desert it was once. Awareness of this is only now beginning to spread, but almost no one truly grasps the implications of what losing California's Central Valley agricultural...| |3/6/2015 - It has been more than five months since an Italian court in Milan awarded compensation to the family of a young boy who developed autism from a six-in-one hexavalent vaccine manufactured by corrupt British drug giant GlaxoSmithKline (GSK), and the U.S. media is still nowhere to be found in reporting...| |2/20/2015 2:37:13 PM - On television, a quack doctor compares seatbelt safety in cars to the safety of vaccines, as if vaccines have ever been tested for safety by the establishment or the vaccine industry; however, in the same interview, another doctor exposes the chemicals in vaccines and let's you know that they've NEVER...| |2/13/2015 - It's no secret that mention of the word glyphosate angers many health-conscious people, while those affiliated with Monsanto, makers of Roundup, stand by their product. Although numerous data show that the main ingredient in the commonly used weedkiller can wreak havoc on health, Monsanto-loyal folks...| |2/7/2015 - My grandmother Ester died after suffering from Alzheimer's disease, and for the last 12 years of her life she did not recognize anyone from her immediate family, including her husband, her children or her grandchildren. She became violent in the end, and she basically starved herself to death. The hospital/extended...| |12/14/2014 - Aluminum. It is the most abundant metal found naturally in the earth's crust. But new research published in the journal Frontiers in Neurology warns that constant exposure to it can lead to Alzheimer's disease and other forms of dementia, which typically lead to early death. |10/22/2014 - Besides vaccines, there appears to be another major culprit in the escalating autism epidemic: Roundup herbicide. Data compiled from multiple government sources reveals that the steadily rising epidemic of autism in the U.S. is directly correlated with the rising use of glyphosate, the primary active...| |8/29/2014 - As the temperature rises in the body, thousands of sweat glands begin to bead up, preparing to cool the body down. The average person possesses about 2.6 millions sweat glands -- a built in thermostat. This system is made up of eccrine glands and apocrine glands. Eccrine glands are the most numerous,...| |8/6/2014 - History's largest mass poisoning of a human population has occurred in Bangladesh. Because of it, 35 million people have been exposed to lethal levels of arsenic. Mortality rates are estimated at 13 per 1000, which means that this poisoning has ended as many at 455,000 lives. It happened simply enough....| |7/29/2014 - The sodium fluoride added to U.S. water supplies is contaminated with the toxic elements lead, tungsten and aluminum, a Natural News Forensic Food Labs investigation has revealed. 
Strontium and uranium were also found in substantial quantities in some samples, raising additional questions about the...| |4/2/2014 - Editor's note: Since this article first ran in 2014, a former marketer of Adya Clarity, Kacper Postawski, has reached out to Natural News to offer a "tell-all" interview about the irresponsible marketing of the product, the faked clinical trial, marketing mistakes and lessons learned. That full interview,...| |4/1/2014 3:44:27 PM - A recent symposium on autoimmune disease that took place in France has brought to the world's attention a new disorder linked to the aluminum-based chemical adjuvants added to many childhood vaccines. Known as Autoimmune Inflammatory Syndrome Induced by Adjuvants, or ASIA, the novel disease includes...| |3/19/2014 - Renowned medical doctor and neurosurgeon Dr. Russell Blaylock holds nothing back when it comes to telling it like it is, even when "it" goes against the prevailing schools of thought within his profession. And one of his latest Blaylock Wellness Reports is no exception, shining light on the very real...| |3/12/2014 - With symptoms such as deterioration in memory and mental functions that worsen over time and leave the afflicted person often incapable of independent living, Alzheimer's disease may be a condition that you are very much concerned about as you or your loved ones approach 65 - the age when Alzheimer's...| |3/6/2014 - As various amounts of aluminum begin showing up in food products tested at the Consumer Wellness Center Forensic Food Lab, it's becoming clear how metals like aluminum can build up unnecessarily in the body, taxing the organs over time. A percentage of ingested aluminum is naturally retained by the...| |1/30/2014 - Aluminum's role in causing neurotoxicity and contributing to a number of degenerative diseases, including Alzheimer's disease, has been widely discussed and is supported by a number of studies, though the exact mechanism remains inconclusive. Now, scientists have reached a better understanding of...| |1/25/2014 - Although many things remain unknown about the roles that zinc and aluminum play in the human brain, one thing is certain: high aluminum concentrations lead to brain damage. Because aluminum is ubiquitous as both an environmental and industrial chemical, it is impossible to avoid some exposure to...| |1/22/2014 - For years following the first Gulf War (1991), scores of returning American and Western military personnel suffered through a set of mysterious symptoms that doctors and scientists eventually described as "Gulf War Syndrome" (GWS), if for no other reason than because they simply could not identify a...| |1/21/2014 - How much aluminum is in your drinking water? It's hard to tell, but in a 15-year study on French elderly men and women, regular consumption of tap water was associated with aluminum toxicity and increased prevalence of dementia. How might the accumulation of aluminum from just tap water alone affect...| |1/20/2014 - A 2010 peer-reviewed study has found that sudden exposure to aluminum dust can have long-term detrimental health effects, especially when it comes to lung capacity and function. According to Brazilian researchers who introduced concentrations of aluminum dust into the respiratory systems of mice,...| |1/18/2014 - Honey bees are tireless workers, committed to sustaining life for all through pollination of various plants and crops. Honey bees are effective natural chemists as well. 
By collecting resins from leaf buds and vegetables, they are able to produce propolis. Bees create propolis from their environment...| |9/27/2013 - Recent studies point to a strong link between aluminum and breast cancer. The correlation between aluminum and brain damage, including Alzheimer's disease, has been established by extensive research and is reason enough to avoid aluminum exposure. Recent studies find a strong connection between aluminum...| |9/4/2013 - Here at Natural News, we are spearheading a campaign of total transparency for superfoods and nutritional supplements, including publicly posting our Certificate of Analysis for Clean Chlorella and Clean Chlorella SL. You can view these documents yourself by opening these PDFs: |7/14/2013 - Back in the 1960's, quiet scientific dialogue began about global climate change and how it can be manipulated. What might have turned into a productive discussion of responsible protection of Earth's climate and ecosystem had eventually evolved into a mad, controlling science experiment. By the 21st...| |5/23/2013 - As Natural News readers know, we recently announced the availability of Clean Chlorella and explained how it was cleanest chlorella on the planet. Today I'm happy to share with you the actual lab test results which have now been repeated across multiple production batches. These results show the astonishing...| |5/10/2013 - A popular cosmetic product since time immemorial, lipstick has long been used by women in many diverse cultures to accentuate their femininity and emanate their own unique expressions of elegance and style to the outside world. But a new study released by the University of California, Berkeley (UCB)...| |3/25/2013 - The non-profit Consumer Wellness Center (www.ConsumerWellness.org) has issued a consumer health warning over Adya Clarity, a "detox" product that was seized by the FDA in 2012 and tested at over 1200ppm aluminum. The product is currently being marketed through a series of highly deceptive webinars that...| |2/19/2013 - What you are about to read is 100% true to the best of my knowledge, and it all started in September of 2012 when we were in the process of launching the Natural News Store and receiving organic certification from the USDA. I knew we wanted to offer chlorella under our own brand name, but I had heard...| |12/18/2012 - It is a little-known condition that can trigger persistent and debilitating symptoms similar to those associated with multiple sclerosis (MS) and fibromyalgia, but is also one that the medical profession at large is still unwilling to acknowledge. And yet emerging research continues to show that macrophagic...| |10/24/2012 - Have you ever wondered what's really in vaccines? According to the U.S. Centers for Disease Control's vaccine additives page, all the following ingredients are routinely used as vaccine additives: • Aluminum - A light metal that causes dementia and Alzheimer's disease. You should never inject...| |9/19/2012 - When the denial of chemtrails is resolved, concerns of their content and their effects are raised. Two metals are consistently discovered worldwide in chemtrail analyses: barium and aluminum. They're formed as nanoparticles easily breathed in by mammals and absorbed by plant life. |5/29/2012 - Late last year, NaturalNews went public with an explosive story about a fraudulently marketed dietary supplement product called "Adya Clarity." 
Made from a combination of sulfuric acid and a mineral / metallic ore mined out of the ground near Fukushima, Japan, the substance was marketed online with...| |1/30/2012 - Aluminum Lake food coloring, used to heavily coat liquid medicines for children, contains dangerous amounts of aluminum and harmful synthetic petrochemicals. These "petrochemicals" are carcinogens containing petroleum, antifreeze and ammonia, which cause a long list of adverse reactions. Aluminum poisoning...| |1/18/2012 2:03:46 AM - The mineral silica (Si) is getting more notice for its important functions. Sometimes called the "beauty mineral" because it improves skin elasticity and hair and nail growth, a few other more important aspects have been explored lately. Silica helps ensure collagen elasticity of all connecting tissues...| |1/10/2012 - We are in the "Age of Aluminum", this according to a lecture by Dr. Chris Exley, PhD at a January 2011 vaccine safety conference in Jamaica. A common expression among those who deflect aluminum's toxicity issues is that aluminum is prominent throughout the earth's crust. What they fail to mention is...| |11/2/2011 - In the aftermath of the apology and recall issued on the Adya Clarity product by its top North American distributor, many people are concerned about whether they may have inadvertently exposed themselves to toxic levels of iron or aluminum, the two most common elements found in the Adya Clarity product...| |10/31/2011 - NaturalNews can now report that Adya, Inc. has been caught not only misrepresenting the composition of its product on its own label, but has now been caught committing marketing fraud that violates its terms of licensing with Health Canada. Health Canada is already investigating the issue. |10/30/2011 - There is a lot of conversation on the 'net about Adya Clarity following our publishing of information questioning its composition, labeling and safety (https://www.naturalnews.com/034005_Adya_Clarity_consumer_alert.html). Readers have rightly been calling for pictures, documents and charts that help...| |10/28/2011 - A product called Adya Clarity has been sweeping across the natural health community in the last year or so. It has been sold with recommendations for internal use -- taking "super shots" -- and often accompanied by wide-ranging claims that it treats cancer, kidney stones, hormone regulation, arthritis,...| |8/26/2011 - Senile dementia is the term given to many different diseases producing dementia. Over half of all dementia cases are sufferers of Alzheimer's disease. Protect yourself as best you can by avoiding products containing aluminum. Memory loss is not the only indication of Alzheimer's disease, since in...| |8/25/2011 - Vaccines are one of the most sensitive topics in the health care debate. Many people swear by them and credit vaccines with putting an end to childhood infectious diseases. Others are strongly concerned about the growing number of vaccines on the market today and about legislative policies that would...| |1/11/2011 - An allergic reaction to aluminum used to be extraordinarily rare. That's not true any more, however, and researchers have been baffled for an explanation. Now it appears one has been found. 
It turns out that as the number of vaccinations people are given has increased, so has the incidence of the formerly...

10/18/2010 - According to a new report, the costs associated with dementia this year alone will be more than 1% of the world's gross domestic product and the number of people with dementia is expected to double in the next twenty years and triple in the next thirty. By age eighty, about 30 percent of the population...

10/1/2010 - Recently, many people have become aware of the dangers of aluminum. It is likely that simply being around aluminum won't cause any damage to a person, but ingesting aluminum or absorbing it through the skin is possibly dangerous. Even though aluminum exposure is unavoidable, people can avoid additional,...

9/20/2010 - The aluminum content of several of the most popular brands of infant formulas is high - especially for soy-based and lactose intolerant substitutes. This should be as frightening to parents as is the presence of aluminum in vaccinations, since an infant's digestive system is more vulnerable to systemic...

5/7/2010 - An independent panel of supposed experts recently met at the National Institutes of Health near Washington, D.C., to discuss whether or not Alzheimer's Disease can be prevented through dietary and lifestyle changes. After evaluating a handful of studies that deal with the subject, the panel basically...

12/1/2009 - Most consumers don't know it, but antiperspirant deodorant products often contain extremely toxic chemicals and heavy metals that can cause severe harm to the human nervous system. To rub such products under the arms is inviting the absorption of these harmful chemicals, which many believe will inevitably...

11/9/2009 - This article has been removed

9/22/2009 - A recent report found that 35 million people around the world have Alzheimer's or other forms of dementia. The number is expected to double every 20 years "unless there is a medical breakthrough." Dementia rates have grown 10% over what was predicted just a few years back, and this is attributed to...

5/1/2009 - The FDA has issued an advisory that some medical patches may contain enough aluminum or other metals to cause burns if worn during a magnetic resonance imaging (MRI) scan. Patches that release small amounts of drugs over longer periods of time are becoming increasingly popular, and are now available...

See all 120 aluminum feature articles.
<urn:uuid:8a57b236-4092-4c10-9e8a-d034a56c4360>
CC-MAIN-2022-33
https://www.naturalnews.com/aluminum.html
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571147.84/warc/CC-MAIN-20220810040253-20220810070253-00498.warc.gz
en
0.940488
5,727
2.828125
3
This Federal Environment Agency web page offers regularly updated information about air and other topics related to this most important elixir of life. Find out here how air quality has developed and which pollutants are harmful to health. We identify sources of pollution and point out measures to combat it.

145 µg/m³ - the concentration of hazardous particulate matter (PM2.5) measured on New Year's Day 2013 at the Landshuter Allee monitoring station in Munich. Particulate matter consists of a complex mixture of solid and liquid particles and is classified into different fractions according to size. PM2.5 stands for particulate matter with a maximum diameter of 2.5 micrometres (µm). Due to their small size, these particles can penetrate into the small bronchi and bronchioles when inhaled. Concentrations of particulate matter are particularly high on New Year's Day because of the fireworks on New Year's Eve. The 2013 national high of 145 µg/m³ was measured in Munich; for comparison, the average daily mean value for December 2012 to January 2013 at that station was a mere 19 µg/m³. The World Health Organization (WHO) recommends that its guideline value of 25 µg/m³ for the daily mean should not be exceeded if at all possible.

Number of the week in review
Week 1 + 2: 2.5 µm is the maximum diameter of respirable particulate matter.
Week 3: 3 months is the statistical shortening of lifespan for every person in Germany owing to diesel soot.
Week 4: 643 monitoring stations are currently taking daily measurements of air quality in Germany.
Week 5: 2,285 µg/m³ was the PM10 concentration during the first hour of the new year, measured at the Berlin air monitoring station in Steglitz-Schildhornstraße.
Week 6: 870 µg/m³ was the daily mean sulphur dioxide concentration on 6 February 1990 in the southern area of Leipzig.
Week 7: About 900 µg/m³ was the PM2.5 concentration on the night of 12 January in Beijing.
Week 8: In 1649 Otto von Guericke invented the air pump.
Week 9: Breathing air contains 21 per cent oxygen by volume.
Week 10: The particle size of respirable crystalline silica (quartz particles) thought to be carcinogenic is less than 4 µm (PM4).
Week 11: 9,188 industrial plants are now authorised according to the IPPC Directive.
Week 12: 33 BAT (Best Available Techniques) Reference Documents are available for the sectors of industry most important to the environment.
Week 13: There are 14,373 petrol stations in Germany (as of 1/2012, EID).
Week 14: A megacity is a metropolis with a population of more than 10 million.
Week 15: Residents of the City of London pay a congestion charge of £20 per month.
Week 16: The World Health Organization (WHO) guideline for the annual mean value of PM2.5 is 10 µg/m³.
Week 17: About 600,000 motor vehicles in Tehran are more than 20 years old.
Week 18: About 70 per cent of China's total energy consumption in 2009 was supplied by burning coal.
Week 19: An adult human breathes about 4 million litres of air per year.
Week 20: More than 80 per cent of the population in the EU is exposed to particulate pollution at levels above the air quality guidelines which the WHO established in 2005.
Week 21: Nearly 2 million new erythrocytes are produced every second.
Week 22: 4 oxygen molecules can bind to a single haemoglobin molecule.
Week 23: 78 per cent of all natural and semi-natural ecosystems in Germany are exposed to atmospheric nitrogen inputs, which harms the environment in the long term.
Week 24: About 3.2 million tonnes of reactive nitrogen compounds enter Germany's nitrogen cycle every year.
Week 25: Ground-level ozone causes damage to ecosystems; AOT40 is the parameter used to assess its risk to plants.
Week 26: 51 parties – 48 European countries, the European Union, the USA and Canada – collaborate in the Geneva Convention on Long-range Transboundary Air Pollution.
Week 27: It took 17 days for smoke from a forest fire in Russia to travel once around the Earth.
Week 28: There were 12 million tonnes of NOx emissions in Europe in 2008; NOx emissions in China rose by that same amount between 1990 and 2008.
Week 29: There are currently 23 POPs regulated by the Stockholm Convention on Persistent Organic Pollutants.
Week 30: 29 global stations are taking measurements in the Global Atmosphere Watch (GAW) Programme.
Week 31: The mean ozone concentration at Mace Head (Ireland), on the western coast of Europe, is about 80 µg/m³.
Week 32: The symbolic threshold of 400 ppm carbon dioxide (CO2) in the atmosphere as a daily average was exceeded for the first time at the Mauna Loa, Hawaii, measuring station on 9 May 2013.
Week 33: 44 states (plus the EU as number 45) have signed the EMEP Protocol to the Convention on Long-range Transboundary Air Pollution.
Week 34: 10 micrometres (a micrometre = a millionth of a metre, or a thousandth of a millimetre) is the size cap on the particulate fraction in the air called particulate matter (PM10).
Week 35: The Federal Environment Agency's air quality monitoring network consists of 7 measuring stations.
Week 36: Hamburg has made it possible for virtually every inhabitant to have access to a form of public transport within at most 300 metres.
Week 37: 75 per cent of the world's population will soon live in cities.
Week 38: Copenhagen has a free bike-sharing system.
Week 39: The ground-breaking "Tianjin Eco-city" is being built on wasteland that used to be one-third desert, one-third salt flats and one-third sewage works.
Week 40: In addition to other agricultural crops and ornamental plants, 101 tomato varieties are grown in Andernach, the "edible city" in Rhineland-Palatinate.
Week 41: There are about 14 million small, solid-fuel-fired heating systems in Germany.
Week 42: Small coal- and wood-fired installations produced 25,000 tonnes of particulate emissions (PM2.5) in 2011.
Week 43: New limits on pollutant emissions from wood-fired boilers and stoves become effective in 2015.
Week 44: 25 per cent is the maximum moisture content that firewood may have at the time it is burned.
Week 45: The Climate and Clean Air Coalition to Reduce Short-Lived Climate Pollutants (CCAC) has 72 members to date.
Week 46: The CCAC has launched 10 global initiatives which aim to reduce emissions of short-lived climate pollutants worldwide.
Week 47: Through its global initiatives and measures to reduce short-lived climate pollutants, the CCAC seeks to avoid up to 0.5 degrees Celsius of global warming by 2050.
Week 48: Every year there are about 6 million premature deaths worldwide which are traceable to high levels of air pollution.
Week 49: Only 14 of the 48 Low Emission Zones in Germany still allow vehicles with a yellow sticker to drive in them (December 2013).
Week 50: The source of 94% of Germany's ammonia emissions in 2011 was agriculture.
Week 51: Three candles will be burning on our Advent wreaths in the third week of Advent.
Week 52: The average particle count at a measuring station on the city outskirts (urban background) is about 10,000 particles per cubic centimetre.

Monthly column - December
Commission on Air Pollution Prevention of VDI and DIN - Standards Committee KRdL: platform for national, European and international standardisation in air pollution control
The high standard of environmental protection that exists in Germany and Europe today is based on legal standards and technical rules in which the VDI and DIN ascertain the state of scientific and technological progress. The Commission on Air Pollution Prevention of VDI and DIN (KRdL) has been preparing technical guidelines for air pollution control since 1957. The VDI Guidelines and DIN standards issued by the KRdL act as an instrument of deregulation in that they take into consideration both national regulations (the Technical Instructions on Air Quality Control in particular) and the European legal system (e.g. the EU Air Quality Framework Directive). For example, the 39th BImSchV (Ordinance on Air Quality Standards and Emission Ceilings) refers to DIN EN 12341 for the sampling and measurement of particulate matter; the standard describes a reference method for determining the PM10 fraction of suspended particulate matter. The scope of the KRdL's work covers all relevant issues in air pollution control. Its focus areas include techniques for measuring nitrogen oxides or heavy metals, integrated pollution-abatement systems for use in waste treatment, meteorological measurement and modelling, odour testing of indoor air, and the environmental health assessment of bioaerosols. The competences of the KRdL are organised into four technical divisions, including Environmental Protection Technologies and Environmental Measurement Techniques. Since 1990 the KRdL has served as the secretariat for ISO/TC 146 "Air quality", and since 1991 for CEN/TC 264 "Air quality". This makes the KRdL the sole responsible body for national, European and international regulation in the area of air pollution control. The KRdL has more than 170 working groups which unite some 1,200 volunteer experts from industry, science and administration. Most of the costs of this voluntary work are carried by the interest groups representing industry, science and administration, whose experts provide their time and know-how, as well as by the state. Organisation and management are in the hands of the KRdL office in Düsseldorf, which has 18 full-time staff who supervise the KRdL's technical regulation work. The KRdL documents the entire body of knowledge in air pollution control in more than 470 VDI guidelines and more than 140 DIN standards, which are included in the six volumes of the VDI/DIN Air Pollution Prevention Manual. The Manual undergoes continuous updating and further development, and current air pollution issues are discussed in expert forums and at other events with the corresponding communities of experts. Contact: Commission on Air Pollution Prevention of VDI and DIN - Standards Committee KRdL, Dr. Rudolf Neuroth, phone: +49 (0) 211/62 14-5 44, e-mail: neuroth (at) vdi.de

Monthly column - November
Short-lived climate pollutants such as soot, methane, ozone or hydrofluorocarbons (HFCs) add to the greenhouse effect and can have profound effects on the Earth's climate system. What characterises these pollutants is that – unlike CO2, for example – they remain in the atmosphere for a relatively short time; the residence time of soot ranges between several days and a week. Reducing short-lived climate pollutants can therefore help in the short term to mitigate negative effects on the climate. Some of these pollutants – soot and ozone – are also classified as air pollutants. Both substances have an adverse impact on human health: they can cause respiratory problems, cardiovascular disease or even lung cancer. Ozone and other pollutants can also damage ecosystems and lead to a loss of biodiversity. Mitigating short-lived climate pollutants thus helps to reduce their impact on the climate and to protect people against their harmful health effects. A group of states and international organisations founded the Climate and Clean Air Coalition to Reduce Short-Lived Climate Pollutants (CCAC) in February 2012. Its objective is to achieve a reduction of these pollutants through global initiatives; one of these initiatives aims to reduce methane emissions from gas production processes.

Monthly column - October
Summer is over, which marks the beginning of the heating season in Germany. Heating with wood – either for cosy fireplace heating or in convenient pellet boilers – is becoming more and more popular. It is better for the climate because wood is a renewable fuel and, when burned, only releases the amount of CO2 which the original tree had previously captured from the atmosphere. However, heating with wood can become an air quality problem: older installations in particular generate a lot of particulate emissions, in addition to a number of other pollutants, especially when the wood does not burn completely. If you would like to heat with wood, it is important to keep the following in mind: opt for modern, low-emission system technology (many low-emission models of pellet stoves and boilers have been awarded the Blue Angel); use only natural, untreated wood that has been dried adequately; and follow the guidelines in the user manual, e.g. concerning the proper amount of wood and the correct adjustment of the combustion air supply.

Monthly column - September
Liveable Urban Centres: opportunities and challenges for environmental protection and quality of life
Today about 50 per cent of the population in Germany lives in urban areas with more than 500 inhabitants per square kilometre, and the number is set to increase in the future. The high population density and the associated pressure of use not only pose great challenges but also offer opportunities for environmental protection.
Densely populated areas are often affected by high levels of air pollution. Intelligent transport planning schemes which promote local public transport and cycling, for example, are also a means of improving air quality. With its thematic focus on "Liveable Urban Centres", the Federal Environment Agency strategy for 2015 intends to come up with sustainable solutions for ensuring a high quality of life for people in urban areas without putting a strain on health and the environment. In the same context the Federal Environment Agency is holding a photo contest called Stadt im Sucher (City in the viewfinder) through the end of September; prizes can be won by entering photos that show the attractive aspects of life in the city. For more information about the thematic focus area "Liveable Urban Centres" see the Federal Environment Agency's annual publication What Matters 2013. Examples of cities that have been exemplary in environmental protection can be found on the web pages of the European Green Capital.

Monthly column - August
Air pollution of migrant origin: the Federal Environment Agency's air quality monitoring network records air pollutants transported over long distances
Section 2 of the Act on the establishment of the Federal Environment Agency (dated 22 July 1974, last amended 1 May 1996) specifically mandates the Federal Environment Agency with the measurement of wide-area air pollution. The Federal Environment Agency (UBA) therefore operates an air quality monitoring network consisting of seven measuring stations located throughout Germany. The network, sited in rural clean-air areas, mainly performs measurements to which Germany is committed under international agreements and EU law. In addition, it does research and development to improve measuring technology and to better understand atmospheric chemistry processes. The legal basis for the network's measurement obligations is the Geneva Convention on Long-Range Transboundary Air Pollution (EMEP and ICP-IM), the GAW Programme of the United Nations WMO, Germany's membership in the OSPAR and HELCOM marine environment protection commissions, and the EU air quality directive.

Monthly column - July
Pollutant transport across the northern hemisphere – other continents also influence our pollution levels
Airborne transport of pollutants is a well-known problem, one which usually conjures up images of transport confined to Europe. But pollutants can also be transported from other continents to Europe. The July article explains how this works, focusing on ozone and its precursors and on persistent organic pollutants. You can also watch a film that follows the path of ozone transport across the northern hemisphere.

Monthly column - June
Plants and animals also need clean air
There is an intensive exchange between every individual and the atmosphere, for we all breathe in and out about 12,000 litres of air every day. Clean air is therefore a most important foodstuff, and this is no different for animals and plants: a mature tree can exchange more than 30 million litres of air per day and thereby supply "fresh" air. Airborne pollutants that are transported to even the most remote ecosystems can also become a problem for nature: nitrogen compounds can cause overfertilisation of ecosystems, and together with sulphur compounds they are also responsible for acidification.
Exposure to pollutants is one of the five main risks to biological diversity. Nitrogen inputs via the air pathway are above levels which are considered tolerable in the long term on three-quarters of ecosystem land surface in Germany. Elevated concentrations of ozone in the air have a negative impact on nature, whether in natural ecosystems or on agricultural land. Annual harvest losses across Europe due to ozone account for economic damages of more than one billion euros in wheat farming alone (study). Even bodies of water suffer from poor air quality. Mercury contamination of fish and overfertilisation of the Baltic Sea are two problems which are also caused by pollutant transport through the air. Since air pollution does not stop at borders, clean air is an important goal of international pollution control policy, particularly within the Framework of the Geneva Convention on Long-Range Transboundary Air Pollution. The Convention agreed to establish international monitoring programmes which record the effects of air pollution. In the past clean-air measures have been able to curb the negative effects of pollution, but there is still a lot to be done. One important field of action is the reduction of ammonia inputs from agriculture. Air pollution can have a serious impact on health. The effects of local, regional and transboundary environmental problems can impair health. These effects may be acute and cause immediate problems, such as irritation of the respiratory tract. Longer term exposure, however, can lead to chronic illness of the lungs or circulatory system. Air pollution is also the cause of deaths associated with these illnesses. At European level legislation is complemented by an air pollution control strategy whose goal is to achieve air quality that has no significant negative impact on health and the environment and causes no such risks. The air pollution control strategy defines objectives of air pollution reduction and recommends measures by which to achieve these aims by 2020. These measures include updating current legislation, more targeted focus of these regulations on hazardous air pollutants, and the greater involvement of industry and policy-makers who are in a position to influence air pollution. The EU air pollution control strategy also identifies health and environmental targets as well as emissions reduction targets for the most important air pollutants. The objective is to provide the people of the EU with effective protection from exposure to particulate matter and ground-level ozone. Tobacco use leads to nearly 6 million deaths worldwide every year. More than 600,000 non-smokers die as a result of passive smoking. The annual death toll will reach 8 million by 2030 if we don't take action. More than 80 per cent of these deaths occur in low- and medium-income countries. World No Tobacco Day which is held every year on 31 May highlights the health risks associated with tobacco use and campaigns for effective measures to reduce its use. Tobacco use is the most easily avoided cause of death worldwide but it claims the life of one in every ten people. The FIFA International Federation of Association Football has announced that the World Cup 2014 in Brazil will be a smoke- and tobacco-free environment. Spectators and players are to enjoy the tournament in 100% smoke-free stadiums. FIFA is thereby taking up the recommendations issued by the World Health Organization (WHO). 
Monthly column - April
Particulate matter pollution in megacities
The Federal Environment Agency and the environmental agencies of the Länder are responsible for monitoring air quality in Germany. The quality of our breathing air is relatively high at present, but the road to improvement from the 1970s until today has been long, and some countries in the world must still travel that road. Rapid urbanisation – the growth of cities on the one hand and the arrival of people from rural regions on the other – means more vehicles on the roads, more coal burning and greater energy demand, and the resulting particulate matter pollution puts a strain on the world's population; it is sometimes far greater than human health can tolerate.

Monthly column - March
The EU directive on industrial emissions
Directive 2010/75/EU of 24 November 2010 on industrial emissions (integrated pollution prevention and control; corrigenda of 17 December 2010) provides the basis throughout the EU for the authorisation of installations with relevance to the environment. The directive follows up on the Integrated Pollution Prevention and Control Directive (IPPC Directive) and six other sector directives on large combustion plants, waste incineration, solvents and titanium dioxide. The new directive represents a further development of the principle of sustainable production. The objective is to achieve a high overall level of protection for the environment, which is achieved by adopting an integrated approach across all media: to reduce the consumption of resources and energy and other environmental impacts, pollutant emissions to the different media (e.g. air) and all production processes must be taken into account. Germany has transposed the directive into national law in an omnibus act and two series of ordinances. These comprehensive changes and revisions to the legislation are currently in the law-making process, which is due to conclude in March; announcement in the Federal Law Gazette will follow in April, whereby the revisions enter into force.

Monthly column - February
Air quality in Germany: compliance with limit values still problematic in 2012
Air pollution caused by nitrogen dioxide and particulates continued to be too high in Germany in 2012. These are the results of an initial evaluation of interim measurement data from the Länder and the UBA. Levels of nitrogen dioxide pollution remained high, while mean PM10 concentrations were at the same level as in 2008 and thus well below the levels measured in 2009-2011.

Monthly column - January
Air is thick on New Year's Eve
Lead-pouring for fortune telling, champagne and fireworks at midnight – all part of a typical New Year's Eve in Germany. An unfortunate part of that tradition is that the air is thick with smoke, eyes burn and throats itch. Setting off fireworks catapults many pollutants into the air, and pollution with harmful particulate matter is higher in some areas than at any other time of year. You can help to reduce levels of particulate matter by buying fewer fireworks this year – better yet, do without them altogether.
<urn:uuid:0b9e45fb-88ad-41ec-8899-b8517a5a6179>
CC-MAIN-2022-33
https://www.umweltbundesamt.de/en/year-of-air-2013
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570767.11/warc/CC-MAIN-20220808061828-20220808091828-00499.warc.gz
en
0.896696
5,302
3.328125
3
Extracting texts of various sizes, shapes and orientations from images containing multiple objects is an important problem in many contexts, especially in connection with e-commerce, augmented-reality assistance systems in natural scenes, content moderation on social media platforms, etc. Text from an image can be a richer and more accurate source of data than human inputs, and it can be used in several applications such as attribute extraction, offensive-text classification, product matching and compliance use cases. Text extraction is achieved in two stages.
Text detection: the detector locates individual characters in an image and then groups characters that lie close together into words, based on an affinity score that is also predicted by the network. Because the model works at the character level, it can detect text in any orientation. The detected text is then sent through the recognizer module.
Text recognition: detected text regions are sent to a CRNN-CTC network to obtain the final text. A CNN extracts image features, which are then passed to an LSTM network. A Connectionist Temporal Classification (CTC) decoding operation is applied to the LSTM outputs across all time steps to finally obtain the raw text from the image.

– Hi, I'm Rajesh Bhat, working as a data scientist at Walmart Labs, Bengaluru. I'm also pursuing my master's from Arizona State University, and I'm a Kaggle Competitions Expert with three silver and two bronze medals. In today's session I'll be talking about text extraction from product images. Here is the agenda: I'll start with an introduction to the text extraction task, then talk in detail about the text detection and text recognition models, which are the building blocks for text extraction. Later I'll touch on the CTC loss calculation and other advanced techniques for text extraction, and finally I'll be happy to take questions.

This is how the text extraction pipeline looks. Given a product image, we first need to know where exactly the text is present in the image; that is text detection. Once we know where the text is, we crop those regions and send them to the text recognition model, which takes the cropped images as input and returns the raw text as output. In the example you can see both steps: the bounding boxes produced by text detection are fed to text recognition, and we finally get the raw text out of it.

There are many domains where you can plug in this text extraction capability. Take retail or e-commerce: we have a product catalog, and it often happens that the catalog is not fully clean or some values are missing. If we have the product images, we can extract the text from them and, after attribute extraction or entity recognition, fill in those missing values in the catalog. Or, if there is offensive content printed on a product image, we don't want to show that image on an e-commerce website. In that scenario we can extract the text first and then classify whether there is any offensive content or not. Facebook has similar functionality: whenever a user uploads an image, they check whether it contains offensive content, essentially a social media content-moderation system. There are many other use cases as well, but these are a few that come to mind.
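Before going into the individual models, here is a minimal sketch of how the two stages just described could be wired together. This is my own illustration rather than code from the talk; detect_text_regions and recognize_text are hypothetical placeholders for the detection and recognition models covered in the rest of the session.

```python
# A minimal sketch (not the speaker's actual code) of the two-stage pipeline:
# detect word regions, crop them, and recognize the text in each crop.
from typing import List, Tuple

import numpy as np


def detect_text_regions(image: np.ndarray) -> List[Tuple[int, int, int, int]]:
    """Return word-level bounding boxes as (x, y, width, height)."""
    raise NotImplementedError  # e.g. a CRAFT-style character/affinity detector


def recognize_text(crop: np.ndarray) -> str:
    """Return the raw text contained in a cropped word image."""
    raise NotImplementedError  # e.g. a CRNN + CTC recognizer


def extract_text(image: np.ndarray) -> List[str]:
    """Detect word regions, crop them, and run each crop through the recognizer."""
    words = []
    for (x, y, w, h) in detect_text_regions(image):
        crop = image[y:y + h, x:x + w]
        words.append(recognize_text(crop))
    return words
```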
Now let's talk in detail about the text detection technique. I'll describe character-level detection, which is achieved through image segmentation. First, a quick recap of image segmentation: given an image, we want to find the different segments present in it. Take a cat image with two segments, the cat and the background. The ground truth is a binary mask: all pixels belonging to the cat are marked as ones and the background pixels as zeros ("not cat"). Given a new cat image, we want the model output to look like that mask; the same applies to an example with giraffes, where the pixels covered by the object light up and everything else is treated as background. The same idea carries over to text detection: wherever characters are present, those pixels are marked, and the rest of the region is not. That is how the ground truth looks for the text detection task.

But how do we generate this ground truth? Annotating training data for this kind of task is very expensive, so let's see how it can be generated. We start from character boxes: for each word, every character has its own box, i.e. character-level annotation is available. From this information everything else (the affinity boxes and, finally, the masks) has to be generated. Affinity boxes indicate that two consecutive characters belong to the same word: between two adjacent characters within a word there is an affinity box, but between the last character of one word and the first character of the next there is none, because they come from different words. So affinity boxes exist only between consecutive characters that are part of the same word.

How do we generate an affinity box? Given the character boxes of two adjacent characters, we first draw the diagonals of each box. Joining the diagonals splits each box into triangles, and the next step is to find the centroids of the relevant four triangles.
Once we have the four triangles (the top and bottom triangle of each of the two character boxes), we take their centroids and join them; joining these centroids gives the affinity box for that pair of characters. The same procedure is applied to every pair of adjacent characters, so we end up with affinity boxes between all consecutive characters of a word.

With the character boxes and affinity boxes in hand, the next step is to create the masks. In the segmentation example the ground truth was a binary mask, but here it is a 2D isotropic Gaussian, so the values are continuous. We take the 2D isotropic Gaussian and warp it to fit each character box (and each affinity box), which gives the transformed Gaussian you see over the box. Doing this for every character box gives the region-score ground truth, and doing it for every affinity box gives the affinity-score ground truth. These are Gaussian heat maps with continuous values: if the ground truth were a binary mask we could have used binary cross-entropy as the loss function, but since the values are continuous a regression loss (mean squared error) is used instead.

Now let's look at the model architecture for text detection. It is very similar to the U-Net architecture, which is well known in image segmentation tasks such as medical image segmentation: a batch-normalised VGG-16 backbone, skip connections, and upsampling blocks. Given an input image, the outputs are the region score and the affinity score. This is a published approach: the CRAFT paper (Character Region Awareness for Text Detection) from Clova AI Research. There are other, word-level text detectors, but if the text is curved or has an arbitrary shape their detections are not very accurate; because this model works at the character level, the detection stays accurate, as the sample images from the paper show. Given the input image at the top, the model outputs the region score and the affinity score; the affinity score tells us how two characters are related, i.e. whether they are part of the same word or not. Where there are two separate words, no affinity region appears between them, and since prediction happens at the character level we can also detect arbitrarily shaped or curved text. To get the final bounding boxes we can use OpenCV functionality: connected components and the minimum-area rotated rectangle (minAreaRect).
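To make the ground-truth generation just described more concrete, here is a rough Python sketch (my own illustration, not the CRAFT reference implementation). It builds an affinity box from two adjacent character boxes via the triangle centroids, and perspective-warps a 2D isotropic Gaussian onto a box to accumulate a region or affinity score map. The corner ordering, the Gaussian width, the approximation of the diagonal crossing by the corner mean, and the example coordinates are all assumptions made for illustration.

```python
# Sketch of CRAFT-style ground-truth generation: affinity boxes from
# character-box triangle centroids, and Gaussian score maps warped onto boxes.
import cv2
import numpy as np


def box_triangle_centroids(box: np.ndarray):
    """Centroids of the top and bottom triangles formed by the box diagonals.

    `box` is (4, 2) with corners ordered top-left, top-right, bottom-right,
    bottom-left. The diagonal crossing point is approximated by the corner mean.
    """
    tl, tr, br, bl = box.astype(np.float32)
    centre = box.mean(axis=0).astype(np.float32)
    top = (tl + tr + centre) / 3.0
    bottom = (bl + br + centre) / 3.0
    return top, bottom


def affinity_box(box_a: np.ndarray, box_b: np.ndarray) -> np.ndarray:
    """Affinity box between two adjacent character boxes of the same word."""
    top_a, bottom_a = box_triangle_centroids(box_a)
    top_b, bottom_b = box_triangle_centroids(box_b)
    # Order the four centroids as a quadrilateral: tl, tr, br, bl.
    return np.array([top_a, top_b, bottom_b, bottom_a], dtype=np.float32)


def gaussian_template(size: int = 64, sigma_ratio: float = 0.25) -> np.ndarray:
    """Square 2D isotropic Gaussian peaking at 1.0 in the centre."""
    ax = np.arange(size, dtype=np.float32) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    sigma = size * sigma_ratio
    return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))


def add_box_score(score_map: np.ndarray, box: np.ndarray,
                  template: np.ndarray) -> None:
    """Perspective-warp the Gaussian template into `box` on `score_map`."""
    h, w = template.shape
    src = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    M = cv2.getPerspectiveTransform(src, box.astype(np.float32))
    warped = cv2.warpPerspective(template, M,
                                 (score_map.shape[1], score_map.shape[0]))
    np.maximum(score_map, warped, out=score_map)  # keep the strongest response


# Example with two made-up character boxes on a 100 x 200 image.
region_score = np.zeros((100, 200), dtype=np.float32)
affinity_score = np.zeros_like(region_score)
box_h = np.array([[30, 20], [60, 20], [60, 70], [30, 70]], dtype=np.float32)
box_e = np.array([[65, 22], [95, 22], [95, 68], [65, 68]], dtype=np.float32)
g = gaussian_template()
for b in (box_h, box_e):
    add_box_score(region_score, b, g)
add_box_score(affinity_score, affinity_box(box_h, box_e), g)
```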
That was text detection; now let's see how to leverage its output for text recognition. First, data preparation for the recognition task. I'm using a library called SynthText to create the training dataset synthetically: given the cropped regions, I can't manually annotate millions of images, and deep learning needs lots of data, so the dataset is generated synthetically. The details depend on the use case. For me it was extracting text from product images, so I took product titles and descriptions from the catalog and synthetically created around 15 million images with many variations: different font styles, font sizes, font colours, varying backgrounds, and so on. If you had to do number-plate recognition instead, that is a totally different scenario: the vocabulary and the kinds of strings that occur are different (certain characters followed by certain numbers, not meaningful words), so you have to choose the vocabulary and generate the synthetic data accordingly. For the product scenario I had 92 characters in the vocabulary: capital letters, small letters, numbers and special symbols, because a product can show expiry dates, prices and so on. On the right of the slide you can see synthetically generated images from the SynthText library, with a lot of variation in the generated data itself.

Now, the pipeline for text recognition: the cropped output from text detection is passed to a CNN to obtain image features; these features are the input to an LSTM model; and the LSTM outputs go through a CTC decoder, which produces the final text.

Before connecting these pieces, let's briefly recall receptive fields. A convolutional network applies a bunch of filters to its input. Say I have a 5×5 input image and a 3×3 filter. Applying the filter to one 3×3 patch of the image gives a single value, and that value has visibility over that 3×3 patch; this is its receptive field, the region in the input image that a particular CNN feature is looking at. Sliding the filter across the whole 5×5 image gives a 3×3 feature map.
Now apply one more 3×3 filter on top of that 3×3 feature map: we get a single value, and that value has visibility over the entire original input, so its receptive field is 5×5. If you are familiar with single-shot detectors in object detection, they take features not only from the final layers but also from intermediate layers. The intuition is that intermediate layers have small receptive fields and handle small objects, while layers close to the output have large receptive fields and detect large objects; taking features from several depths lets the detector handle objects irrespective of size.

Now let's relate this to text recognition. I have a grayscale input image of width 128 and height 32 (the leading None is just the batch size, so you can ignore it). In the CNN I have a few convolution and max-pooling layers arranged so that the resulting feature map has the shape 1 × 31 × 512: one row, 31 columns, 512 channels. Here the receptive-field concept matters: each value in that feature map has visibility over a vertical slice of the input image, with the first columns looking at the first part of the image and the last columns at the final part, so the feature map is sequential in nature. To give an NLP intuition, 31 is like the number of timesteps (the maximum sequence length you would define when training a typical LSTM model), and 512 can be thought of as the word-embedding dimension fed in at each timestep.

This feature map is the input to the LSTM model. Since there are 31 timesteps, we get softmax probabilities over the vocabulary at each of the 31 timesteps. And here comes the interesting part: for an input image of the word "hello", the ground truth is just five characters, but the model output covers 31 timesteps. The length of the ground truth does not match the length of the prediction, five versus thirty-one, so how do we calculate the loss?
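As a concrete illustration of these shapes, here is a minimal CRNN sketch in tf.keras. The 32 × 128 grayscale input, the 1 × 31 × 512 feature map, the LSTM over 31 timesteps and the 92-character vocabulary (plus one CTC blank) come from the talk; the exact convolution/pooling stack and the 256 LSTM units are my own assumptions, chosen simply to reproduce those shapes. The CTC loss wiring is sketched further below.

```python
# Minimal CRNN sketch: conv + pooling down to a 1 x 31 x 512 feature map,
# treated as 31 timesteps of 512-d features, followed by a bidirectional LSTM
# and a per-timestep softmax over the vocabulary plus the CTC blank.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 92 + 1  # 92-character vocabulary + 1 CTC blank

inputs = layers.Input(shape=(32, 128, 1), name="image")          # H x W x C
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D((2, 2))(x)                                # 16 x 64
x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D((2, 2))(x)                                # 8 x 32
x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D((2, 1))(x)                                # 4 x 32
x = layers.Conv2D(512, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D((2, 1))(x)                                # 2 x 32
x = layers.Conv2D(512, 2, padding="valid", activation="relu")(x)  # 1 x 31 x 512
x = layers.Reshape((31, 512))(x)            # 31 "timesteps" of 512-d features
x = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)      # (31, 93)

crnn = tf.keras.Model(inputs, outputs, name="crnn")
# crnn.summary() shows the final output shape (None, 31, 93):
# a softmax over 93 classes at each of the 31 timesteps.
```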
If this were a named-entity-recognition task, every token in the input would have a tag (organisation, person, location or other), so the ground truth and the prediction would have the same length and we could use categorical cross-entropy as the loss. But for the text recognition model, the CRNN, the prediction and ground-truth lengths do not match. The same holds for speech-to-text: we have the audio and the corresponding transcript, but no information that, say, the letter H was spoken from second two to second three. Could we manually align each character to its location in the audio or the image? In principle yes, but it would be hugely labour-intensive; we would spend most of our time preparing data rather than training models.

So we have CTC to the rescue: Connectionist Temporal Classification. It gives a mapping from image to text without worrying about the alignment of each character to its location in the input image, so we can calculate a loss and train the network. CTC has two components, the CTC decoding operation and the CTC loss, and they are different things.

As we saw, the model makes predictions for 31 timesteps while the ground truth may be only five characters, so we somehow have to collapse the model output. One way is to merge repetitions: if the model predicts three H's in a row, we collapse them to a single H, the E's to a single E, and so on. The problem is the double L in "hello": if we simply merge repeats, we lose one character whenever a character is genuinely repeated. So we introduce a special blank character. It must not be one of the real vocabulary characters; it is added to the vocabulary as an extra symbol, and the model is trained to predict a blank between repeated characters. The decoding rule is then: first merge the repeats (the blank between the two L's keeps them separate), then remove the blanks. If we end up with "hello", matching the ground truth, the model is doing well.
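Here is a small, self-contained sketch of that decode rule (best-path or greedy decoding: take the most probable class per timestep, merge repeats, then drop blanks). The blank index, the toy charset and the fake softmax outputs are assumptions made purely for illustration; a real system might use beam-search decoding instead of the greedy rule shown here.

```python
# Greedy CTC decoding: best class per timestep, merge repeats, drop blanks.
import numpy as np

BLANK = 0  # index reserved for the CTC blank symbol (an assumption here)


def greedy_ctc_decode(probs: np.ndarray, charset: str) -> str:
    """probs: (timesteps, num_classes) softmax outputs; charset maps index i>=1 to charset[i-1]."""
    best = probs.argmax(axis=-1)                # best class per timestep
    collapsed, prev = [], None
    for idx in best:
        if idx != prev:                         # merge repeated predictions
            collapsed.append(idx)
        prev = idx
    return "".join(charset[i - 1] for i in collapsed if i != BLANK)  # drop blanks


# Toy example: timesteps predicting  h h e - l l - l o o  ->  "hello".
charset = "ehlo"                                # index 1='e', 2='h', 3='l', 4='o'
seq = [2, 2, 1, 0, 3, 3, 0, 3, 4, 4]
probs = np.eye(5, dtype=np.float32)[seq]        # fake one-hot "softmax" outputs
print(greedy_ctc_decode(probs, charset))        # prints: hello
```

Note how the blank between the two runs of 'l' is what preserves the double letter after merging.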
That was the CTC decoding step; next let's see how the CTC loss is calculated. To keep the example small, say the ground-truth text in the image is "AB" and the vocabulary is just A and B. Since we use CTC, we add the blank character, so there are three classes, and say there are only three timesteps (in the real model there were 31).

Given the ground truth "AB", we need to find all possible candidate sequences which, after applying the CTC decode operation (merge repeats, drop blanks), give exactly the ground truth. For example, if the model predicts "ABB", merging the repeats collapses the two B's into one and there are no blanks to drop, so it decodes to "AB"; that is a valid candidate. In practice, dynamic programming is used to generate and sum over these candidates efficiently, subject to the condition that decoding them reproduces the ground truth.

Now take the softmax probabilities at t1, t2 and t3, each a distribution over the three classes {A, B, blank}. What is the probability of observing one particular candidate, say "ABB"? We multiply the softmax probability of A at timestep one (say 0.4) by the probability of B at timestep two (say 0.7) by the probability of B at timestep three; we can simply multiply because the timesteps are treated as conditionally independent given the input. That gives a score for that one alignment. But "ABB" is only one scenario: "AAB", "-AB", "A-B" and "AB-" (where "-" is the blank) also decode to "AB". To get the total probability of the ground truth, we add up the probabilities of all these valid alignments. The loss is then the negative log of that total probability. Probabilities cannot exceed one, so if the model is doing really well the probability of the ground truth approaches one, log 1 = 0, and the loss is zero.
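The toy example above can be checked by brute force: enumerate every length-3 path over {A, B, blank}, keep the paths that collapse to "AB" (these turn out to be AAB, ABB, -AB, A-B and AB-), sum their probabilities, and take the negative log. The softmax numbers below are invented for illustration; real implementations use the dynamic-programming forward algorithm rather than this exhaustive enumeration.

```python
# Brute-force CTC loss for the toy example: ground truth "AB", 3 timesteps,
# vocabulary {A, B} plus the blank "-". Illustrative only.
from itertools import product

import numpy as np

VOCAB = ["-", "A", "B"]  # "-" is the CTC blank


def collapse(path):
    """CTC decode: merge consecutive repeats, then drop blanks."""
    out, prev = [], None
    for s in path:
        if s != prev and s != "-":
            out.append(s)
        prev = s
    return "".join(out)


# softmax[t][k] = probability of symbol VOCAB[k] at timestep t (rows sum to 1).
softmax = np.array([[0.2, 0.4, 0.4],
                    [0.1, 0.2, 0.7],
                    [0.3, 0.3, 0.4]])

target = "AB"
p_target = 0.0
for path in product(range(len(VOCAB)), repeat=softmax.shape[0]):
    if collapse([VOCAB[k] for k in path]) == target:
        # Timesteps are treated as conditionally independent, so multiply.
        p_target += np.prod([softmax[t, k] for t, k in enumerate(path)])

ctc_loss = -np.log(p_target)
print(f"p('{target}') = {p_target:.4f}, CTC loss = {ctc_loss:.4f}")
```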
Now suppose the model is not doing well, for instance at the start of training. Then the probability assigned to the ground truth "AB" tends towards zero, and the negative log of a value tending to zero tends to infinity, so the loss blows up whenever there is a complete mismatch. You can work through these cases offline. That is how the loss is calculated mathematically, but in Keras there is an inbuilt function for it, so we need not implement it by hand: we can just use the ctc_batch_cost function to get the CTC loss.

The model architecture, as described earlier, is a few convolution layers interleaved with max-pooling, whose output is fed to the LSTM. I had about 15 million images; loading all of them into memory would take roughly 690 GB, so we used a Keras generator that loads only the current batch into memory, and with the workers, max_queue_size and multiprocessing options we can speed up the training process. Training was done on a P100 GPU, taking about two hours for a single epoch, and prediction time was about one second for a batch of 2,048 images.
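Below is a hedged sketch of how the Keras ctc_batch_cost function and the batch-wise generator mentioned here might be wired together. `crnn` refers to the model sketched earlier; `load_and_preprocess`, the padding scheme, the batch size and MAX_LABEL_LEN are my own placeholders, and the workers / max_queue_size / use_multiprocessing arguments reflect the tf.keras fit API available around the time of this talk.

```python
# Illustrative wiring of the CTC loss and generator-based training in tf.keras.
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K, layers

TIMESTEPS = 31        # width of the CRNN output sequence
MAX_LABEL_LEN = 16    # assumed maximum word length after label encoding

labels = layers.Input(shape=(MAX_LABEL_LEN,), dtype="int32", name="labels")
input_len = layers.Input(shape=(1,), dtype="int32", name="input_length")
label_len = layers.Input(shape=(1,), dtype="int32", name="label_length")

# crnn.output: (batch, 31, 93) per-timestep softmax over vocabulary + blank.
ctc = layers.Lambda(
    lambda args: K.ctc_batch_cost(args[0], args[1], args[2], args[3]),
    name="ctc",
)([labels, crnn.output, input_len, label_len])

train_model = tf.keras.Model([crnn.input, labels, input_len, label_len], ctc)
# The Lambda layer already outputs the per-sample loss, so compile with a
# pass-through loss function.
train_model.compile(optimizer="adam", loss=lambda y_true, y_pred: y_pred)


class WordImageSequence(tf.keras.utils.Sequence):
    """Loads one batch at a time so 15M images never sit in memory at once."""

    def __init__(self, image_paths, labels_padded, label_lengths, batch_size=256):
        self.image_paths = image_paths
        self.labels_padded = labels_padded      # (N, MAX_LABEL_LEN) int array
        self.label_lengths = label_lengths      # (N,) true label lengths
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.image_paths) / self.batch_size))

    def __getitem__(self, i):
        sl = slice(i * self.batch_size, (i + 1) * self.batch_size)
        images = load_and_preprocess(self.image_paths[sl])  # hypothetical helper
        n = len(self.label_lengths[sl])
        inputs = {
            "image": images,
            "labels": self.labels_padded[sl],
            "input_length": np.full((n, 1), TIMESTEPS, dtype="int32"),
            "label_length": self.label_lengths[sl].reshape(-1, 1),
        }
        return inputs, np.zeros(n)  # dummy targets; the Lambda output is the loss


# train_model.fit(WordImageSequence(paths, y_pad, y_len), epochs=10,
#                 workers=8, max_queue_size=16, use_multiprocessing=True)
```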
Apart from this, Rajesh has been a mentor for the Udacity Deep Learning & Data Scientist Nanodegree programs for the past 3 years and has conducted ML & DL workshops at GE Healthcare, IIIT Kancheepuram, and many other places. "
<urn:uuid:ef735e9c-e348-4643-b4d6-835f8e768b24>
CC-MAIN-2022-33
https://www.databricks.com/fr/session_na20/text-extraction-from-product-images-using-state-of-the-art-deep-learning-techniques
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573876.92/warc/CC-MAIN-20220820012448-20220820042448-00099.warc.gz
en
0.939307
6,589
2.765625
3
Cultures of Rejection Cultures of rejection are practices, discourses and cultural formations based on values, norms and affects or political attitudes constituted in and through the rejection of a set of socio-cultural objects. More than just a gesture of protest, they are “modes of living,” or ways of being in the world. The objects of rejection may vary, but often they include immigration, domestic political elites, institutions of civil society and media, shifting gender relations, trade unions, and European integration. They are usually based on distinct crisis narratives. Cultures of rejection can lead to political decisions, such as support for authoritarian populist parties, but this is not their necessary outcome. They may also be articulated with a rejection of ‘the political’ in toto, for example, they may result in a decision to not vote, or in residing and working in a place where one does not have the ability to vote. They may also indicate a sense of being rejected or left behind. Cultures of rejection are grounded and reproduced in everyday practice. Our working assumption posits that cultures of rejection emerge from experiences of change and/or crisis, up to and including profound crises of authority. Our research project analyses the narratives of crises and transformation that are processed in cultures of rejection, and how meaning is intersubjectively ascribed to transformation and crisis in different socio-spatial and digital environments. Trade unions are interest organizations of workers. Historically, trade unions have often been founded and organized by craftsmen and industrial workers. With the rise of industrial capitalism trade unions often came to be part of broader labor movements with various socialist-inspired ideologies, and at times they also organised around religious and ethnic identities. In the period following the Second World War, many trade unions in the capitalist countries of the Global North, often after long struggles, became part of officially recognized industrial relations systems of negotiation between the organized interests of employers and workers. With the neoliberal transformations that started in the 1980s and the decline in industrial employment in the Global North, the power of trade unions began to decline. The percentage of workers organized in trade unions fell due to changes in industrial structure, increasing employer and state animosity towards unions and internal union weaknesses in organizing women and migrant workers. According to the International Labour Organisation, the rate of unionization in 2016 varies widely across Europe, between 5% in Estonia and 90% in Iceland (rates also vary with respect to the countries studied in this project: Austria 27%; Croatia 26%; Germany 17%; Serbia 28%; Sweden 65%). While the rate of unionization is a significant measure of the power of trade unions, there are a number of additional factors to consider in that regard, including the industrial relations system, the strength of employers, the ideological strength of trade union members and the role of the state more generally. Despite a long period of declining unionization rates, trade unions still play an important role in capitalist economies. While some social scientists distinguish between the content and style of populist political communication (Moffit), others claim that populism is a “thin-centred ideology” (Mudde) that must necessarily refer to other ideologies.
Right-wing populism connects with racist, sexist and nativist ideologies and notions of social inequality. Right-wing populist communication is characterized by a primary antagonism between an (corrupted) elite and a “We” the people. A second antagonism is established with respect to “Others,” such as migrants who are portrayed as threats to “the people” that require exclusion. Right-wing populists conjure the notion of a pure, nativist people and claim to represent this imagined people. While they publicly regret the lack of sovereignty of this people, they simultaneously claim leadership and negate democratic popular sovereignty, redesigning it in a submissive role. At the descriptive level, respectability refers to desires and practices of making oneself socially acceptable through ‘good’ or ‘proper’ behavior. As such, it is linked to hegemonic notions of what is normal, natural and right. This notion of respectability has been used in multiple research contexts. In particular, studies of working-class culture have asked how certain groups of workers identify with mainstream values in contrast to ‘unrespectable’ workers. Scholars of gender have shown how hierarchies of ‘good’ and ‘bad’ are established among women, as wives and mothers, through practices of respectability that are linked to the private environment of the home. The concept “politics of respectability” has also been used to analyse African-American strategies in struggles for civil rights and inclusion. A number of studies have shown that striving for respectability can reinforce notions of the normal in ways that hide power relations. In this context, the idea of “migrant respectability” has been developed as a way to understand different strategies for social inclusion, ranging from struggles for social justice to practices of assimilation. As a form of migrant respectability, assimilation is understood as a way to negotiate a better social position by distancing oneself from Other migrants. With the growth of authoritarian populist parties and politics, some migrants have developed strategies in support of anti-migrant and culturally racist politics, with reference to being European, Christian or “white,” in contrast to the non-European, Muslims and racialized Others. Racism is a social relation. It includes but is not reducible to prejudice and implicit bias, ideology and instruments of divide and rule. As a true “total social phenomenon” (Étienne Balibar), racism organizes social practices, discourses, representations, affects, subjectivities and institutions. It constitutes affective understandings of ‘self’ and ‘other,’ creating senses of belonging as well as rejection. Racism can play a determining role in cultures of rejection. Objects of rejection may be constructed along the lines of racist demarcations. At the same time, subjects invested in cultures of rejection may themselves be the objects of (other) forms of racism, and they may invest in racist discrimination against ‘other Others’ as a way to prove themselves worthy members of an ‘Us.’ While racism is an unquestionably hierarchical power relation, as a total social phenomenon or mode of ‘negative societalisation’ (W.D. Hund), it is an infinitely complex phenomenon that escapes any attempt to reduce it to a simplistic model of ‘oppressor vs. 
oppressed.’ We seek to understand the articulations of racism with class and gender relations, its reliance on culturalist and biologist discourses, its universalist and particularistic variants, its spatial and temporal particularities across Europe and its ‘post-racist’ transmutations in the current conjuncture. Racialization is the process of attributing racial and/or ethnic characteristics to a relationship, social practice, group or individual from a position of power. Racialization (sometimes ‘ethnicization’ is used in a similar way) often focuses on somatic characters such as skin color, but it also operates with respect to cultural and religious characteristics, as in Antiziganism, Antisemitism and Islamophobia. Racialization is used to differentiate people in order to exclude, exploit and dominate them. Racialization reproduces the idea of ‘race’ as something real and essential. By racializing others, people simultaneously racialize themselves as something better and worthier. Racialization is a condition for racism and racist practices. Racialization has been common across histories of colonialism, imperialism, nationalism and migration. The rise of ethnonationalist and racist movements and parties in Europe during recent decades increasingly has been accompanied by racism and racialization. Racialization is used as a way to differentiate groups that supposedly should be expelled or not let into a bordered area. It also shapes processes that normalize labour market segmentation by linking certain occupations to certain categories of workers. The function of protest is to state dissidence, or objection. However, the Latin root protestari contains an additional meaning. It places the prefix pro- in front of the verb testari, which means ‘to witness.’ This etymology hints at a fruitful understanding of collective protest, one that emphasizes the participants’ act of witnessing. Cultures of rejection are embedded within a landscape of protest. Prior to the unprecedented public and political reaction to the Covid-19 crisis, there was the populist surge of 2016–2019, from Brexit to Bolsonaro, and the emancipatory rupture of 2011, including the Arab Spring, anti-austerity protests and Occupy, all in the aftermath of the global debt and financial crisis of 2007/8. Migration plays a key role in the discourses generated in all these movements and processes, and it may also be seen as a protest movement in its own right – the Long March of our era. Millions of people participate in contemporary protests of various stripes. To what deep structures in our political order do they object and bear witness? In the context of cultural discourse, pathos is a modality of affective investment that takes a form of passive reactivity, such as pity or resentment. Pathos is characterized by an overindulgence in affectation, and as such it is always inappropriate to a situation. It emerges in contexts of lacking (adequate) social response, and it implies that social responsibility is in fact the ability to respond appropriately. An example of pathos can be seen in practices of abstaining from political participation and denying one’s own agency in relation to societal and political change, which essentially amount to a refusal of social and personal responsibility. In fact, any form of rejection that results in passivity, that is, in a refusal to either accept or alter political change, can be observed as pathological. Otherness is the characteristic or quality of being not alike, or of being Other.
It takes shape in relation to and as part of constructions of individual and group identities. Otherness emerges as a result of a discursive process of differentiation along the line of “Us/Self” and “Other/Them.” It is usually expressed in binary categories, using different biologically and socially constructed characteristics such as age, ethnicity, sex, race, sexual orientation, socioeconomic class, physical ability, subculture, and so forth. Otherness can point to what is distinct or different from what is experienced or known, but in social theory, it is used to label the separation of social groups that occurs as a result of asymmetrical relations of power. Various modes of socialization over the course of a lifetime play significant roles in processes of (re)constructing otherness. Such processes usually result in stereotypes and prejudices that serve to maintain the social and symbolic order. Examples of political practices engaging issues of otherness can be seen in human rights struggles, civil rights movements, and other struggles for freedom with respect to race, religion, sexual orientation. Today, we actualize and continue to practice otherness through racialization and Orient-Occident ways of thinking, as can be seen in social resistance towards Islam and immigration from the East. When E.P. Thompson coined the term in 1971, ‘moral economy’ referred to pre-capitalist practices of barter and exchange mediated by moral obligations and customs rather than the market. Since then, the concept has been adopted and adapted in various disciplines and stripped of its historical specificity. It has been used to investigate the moral dimensions of contemporary economies as well as the economies of moral orders. For a comprehensive understanding of contemporary cultures of rejection, we make use of the concept of moral economy in both senses. In the first sense, following Susana Narotzky, Jaime Palomera and Theodora Vetta, we approach moral economy as the norms, meanings and practices that mediate the inequalities produced by certain forms of capitalist accumulation, including the introduction of austerity measures and the informalization of labor. In the second sense, we investigate the sociocultural conditions of acceptability in cultures of rejection by mapping “the production, distribution, circulation and use of moral feelings, emotions and values, norms and obligations in the social space” (Didier Fassin). Both approaches relate to the question of authority: is the crisis of authority also a crisis of moral economy? Migration refers to human mobility and is often conceived as cross-border mobility. As such migration is not a historical exception. However, European migration policies understand human mobility in exceptional terms, and they prevent, structure and regulate migration accordingly. The refugee and migration movement of 2015 onwards clearly has demonstrated how migration constitutes, shapes and changes European societies in all areas. Our project combines an investigation of the discursive, social and political effects of this movement along its route in and through Serbia, Croatia, Austria, Germany and Sweden. We assume that since that time and up to the present, the narratives and political controversies about migration in these five countries have promoted and required quite different ways of thinking about migration, and therefore also different ways of dealing with it. 
We also assume that the societal attitude towards migration expresses a degree of democracy in each of these countries. We are interested in how narratives about migration and experiences of migration and everyday life are lived in societies where migration is socially constitutive, that is, in migration societies. We assume that an understanding of these countries as migration societies today requires an integrative perspective, combining knowledge of existing and emerging mobility and labor regimes in interaction with social rights in and across European states. Labor market segmentation Understanding labor market segmentation is important for two reasons. Firstly, we often talk and write about the labor market. However, it would be more accurate to think about labor markets in the plural. When we look for a job or think about working life, we immediately see that there is no single labor market. Rather, there are a large number of distinct labor markets that interact and are integrated in different ways, based on occupations, geography, companies and industry. The concept of labor market segmentation is used to emphasize the limited fluidity, or crossover possibilities, that exist between these different segments. Understanding labor market segmentation is also important because labor market segments are often shaped by both the technical system of education and occupational experience, on the one hand, and the social system, and in particular relations of gender, migration, ethnicity, racialization and class, on the other. When we study labor market segments, we often discover that certain segments are composed of distinct groups of people, and that the segmentation evinces processes of inequality and discrimination and demonstrates how social norms link certain categories of people to particular labor market segments. Occupations dominated by women, migrants or racialized workers often have worse conditions and lower salaries relative to occupations dominated by men, non-migrants or people who are not subject to racialization. The concept of informalization with respect to economy or working life aims to capture processes in which different kinds of work are increasingly organized in informal ways. The concept has a twofold history. First, it derives from historical third-world studies that show how large parts of the population in rural and peripheral urban areas of what is today called the Global South engage in informal activities in order to earn a livelihood – a situation often called “informalization from below.” A second root of the concept can be found in gender studies research that has shown the extent to which women’s work – both unpaid and paid – is organised in informal ways. However, the concept of informalization has been used increasingly to demonstrate how informal arrangements of work are gradually becoming more prevalent in the Global North. This latter process has been called “informalization from above,” in order to emphasize how big businesses develop strategies of downsizing, outsourcing and subcontracting – often in tandem with the decline of welfare states. Processes of informalization can be seen in many parts of the economy and working life, not least in various kinds of private-sector service work, including logistics labor. 90% of people worldwide (men and women) hold some sort of bias against women, according to a recent report by the United Nations Development Programme.
The finding of prevalent bias of women against themselves shows how deeply biases are ingrained in our cultural fabric. An implicit bias is a bias that exists without one’s awareness. Even if someone believes that characteristics such as gender, race, sexuality, ethnicity and religious affiliation are irrelevant to them, implicit bias often can be seen in their choices. Discourses that frames young, male, Muslim migrants as a threat to European values contain a multiplicity of (explicit and implicit) biases that inform rejection. Identity signifies two very different things: sameness and selfhood. As sameness, identity is similar to categorization or attribution; it hits you from the outside (you are grouped as a Swede or a Serb, a native or a migrant, a worker, doctor, teacher or manager, and so forth). In this sense identity tells you what you are. As selfhood, identity is your sense of who you are – the outcome of your lifelong psychic labor to find your place in the world, the meaning of your past and future. This labor is influenced by external norms and forms that shape your desire. It usually results in a sense of self that matches the identity prescribed by the social order. Althusser called this interpellation: we identify with and internalize the subject positions offered to us by the surrounding order. In this sense, identities are results (or symptoms) rather than causes of social processes, and personal identities can be multiple and are often contradictory. In periods of crisis and transformation political movements emerge that promise a renewed sense of meaning and security within the embrace of religious, ethnic or national identity, or any other kind of imagined community (Anderson). This poses a risky scenario because religious, national and ethnic identities are sustained through the negation and rejection of those who are grouped as not belonging, that is, as ‘others’. We ask: are contemporary national and racialized identities symptoms of cultures of rejection? What are these identities made of? How are they lived? Gender and right-wing populism The decade of the 2010s was characterized by right-wing populists’ obsession with gender. In fact, the success of right-wing populists needs to be understood with respect to transformations of gender relations. Right-wing populism joined the anti-feminist movement against gender that was originally launched by the Vatican with the dual aim of re-establishing a traditional gendered division of labor and a clear gender binary and also ending gender equality policies and sexual diversity. Nevertheless, right-wing populist actors use gender-neutral language in performances that aim to figure Muslims as patriarchal and at odds with European societies (as elucidated by Sara Farris’ concept of ‘femonationalism’). The flip side of this anti-feminist ideology is a masculinist identity politics. By conjuring a “crisis of masculinity” as a consequence of female integration into the labor market and gender equality policies, this political discourse re-signifies neoliberal transformations of labor markets and welfare states as if these changes were caused by (well-educated) women and (male) migrants. Right-wing populist actors claim to re-establish a sovereignty of masculinity through traditional gender relations. 
Crisis of authority A crisis of authority, wrote Antonio Gramsci, “means precisely that the great masses have become detached from their traditional ideologies, and no longer believe what they used to believe previously, etc.” In this sense a crisis of authority is less an event and more a period – an “interregnum” in which “a great variety of morbid symptoms appear.” A crisis of authority is, for Gramsci, a crisis of hegemony, that is, a crisis of moral and political leadership in a class society that manifests itself in various ways. The rejection of political elites, commercial and public media and state and religious institutions is a characteristic feature of cultures of rejection. This raises the question as to whether cultures of rejection indicate a wider detachment from institutional discourses, that is, if they point to a crisis of authority in the Gramscian sense. What are the implications of protest practices and narratives that reject authority, claiming to fearlessly speak truth to power – what Foucault called parrhesia – and that, at the same time, invest in other, particularly authoritarian modes of power (see authoritarian populism)? Cathartic politics refers to the active exploration of the political in terms of the experience and concept of catharsis and its implications. Catharsis relates to an emotional climax, or to a kind of crescendo of collective experience that results in intense emotional discharge. In this sense catharsis acts as a cohesive social force; it facilitates empathy for fellow beings through shared experience. Crucially, cathartic politics are lacking in contemporary liberal, postliberal and ultraliberal societies that are focused primarily on nurturing and encouraging individualistic affects. Especially in times of crisis, such as the increased isolation amidst COVID-19, a lack of cathartic politics results in an excess of unprocessed fears and anxieties. The toxicity of such states can be seen in increasing phenomena of rejection with respect to any other who is perceived as an agent of (unwanted) change. When we research cultures of rejection, we investigate the transforming boundaries of belonging. Who and what belongs where and why and how, according to whom? Which group(s) does one belong to, according to whom? What implications do answers to such questions have for one’s rights and duties, one’s powers and capabilities, one’s exposure to violence and one’s claim to solidarity? Cultures of rejection construct belonging not just as “cognitive stories,” as Nura Yuval-Davis reminds us, but as reflections of “emotional investments and desire for attachments.” Yuval-Davis analytically distinguishes “belonging” from the “politics of belonging.” The latter is defined as the maintenance and contestation of community boundaries. Consider debates over access to welfare and political rights: to reject access can be read as an element of a culture of rejection, but also as an engagement with the politics of belonging. We might add: how is belonging to a group (or to multiple groups) lived and experienced? How does belonging alternate between association and attribution? How is belonging lived socially, and how can we trace narratives of discontent and transformation in the ways belonging is lived? Authoritarianism describes specific types of government whose common principle is obedience to a central authority at the expense of personal freedoms, the rule of law, and the principle of the separation of powers. 
It is defined ‘negatively,’ that is, with respect to a deficit of democratic principles and practices, including freedom, political pluralism, civic participation, free and fair elections and constitutional checks and balances. Considered spatially authoritarianism can be seen to represent the middle of a continuum, the opposite ends of which are (liberal) democracy and totalitarianism. The absence of distinct democratic elements can give shape to different kinds of authoritarianism, as evidenced by the variety of descriptive terms in use, including hybrid systems, mixed systems, defective democracies, illiberal democracies, competitive/electoral/stealth authoritarianism, abusive/authoritarian constitutionalism, and so forth. The rise of authoritarianism often occurs together with the rise of (right-wing) populism that seeks a strong executive power ‘in the name of people’ unhindered by legal constraints. Contemporary authoritarianism usually keeps a democratic facade, that is, authoritarian governments maintain a constitution and quasi-democratic institutions and norms, hindering the identification of authoritarianism until it has reached an advanced stage. Authoritarian populism is a concept originally coined by Stuart Hall in the late 1970s. His aim was to understand the popular appeal of neoliberal ‘Thatcherism,’ or the peculiar combination of “free market, strong state.” While right-wing populism is often reduced in contemporary academic debates to a “thin ideology,” a political strategy or style, Hall urges us to understand it as a way in which political agents re-organize hegemony under conditions of crisis. Authoritarian populism offers ways to make sense of transformations that are experienced as threatening by articulating such transformations in a language of moral decline or in the name of “common sense.” Examples of authoritarian populism today include narratives about the sustainability of the welfare state (and the actual degradation of welfare institutions that accompanies such narratives); discourses about the supposed sexism of (Muslim and/or racialized) “Others” that simultaneously aim to undermine feminist achievements (see gender and right-wing populism); and celebrations of diversity and the so-called open society that ignore and/or promote the dismantling of basic rights of mobile and sedentary populations. As a political project striving for hegemony, authoritarian populism promises the restoration of moral order in the name of a popular common sense against both the elites and their institutions and deviant ‘others,’ such as migrants or the so-called ‘undeserving’ poor. Austerity is a set of economic policies aimed at restoring market competitiveness by reducing wages and public spending. This recipe for economic recovery has failed in most places where it has been tried. It has also generated social polarization: poverty for the many at one end and wealth for the few at the other. Austerity is also an ideology that portrays the dismantling of welfare and social rights as a necessity, even a virtue. Associated with commands from political and economic authorities, austerity is a call to order and discipline, intended to mobilize against external threats or close ranks in times of internal troubles. Economic concerns are thus turned into ‘a moral game’: the individual employee and citizen is made personally responsible for ongoing societal decay and asked to fix this by becoming more obedient, resilient and agile (see moral economy). 
The ideology of austerity may help to illuminate why the hopes and fears of many Europeans take on an authoritarian bent. The cultural realm is essentially affective. In it we negotiate what we value, fear, and cherish – those things that concern us deeply. Affects can play a cohesive role in culture and society, but they also can be centered on rejection. Distrust, envy, fear, anger, resentment, indignation, contempt and hatred are significant factors in cultural and political developments, such as the rise of populist tendencies in the wake of the so-called European “refugee crisis.” Affective investments are easily exploited in cultural and political rhetoric. As such, they can be mobilized to disrupt cultural discourse and democratic processes. We are interested in how authoritarianism is criticized and lived in Europe. How does authoritarianism become acceptable as a way of thinking? It seems that any criticism of authoritarianism based on rationality and/or numbers is linked with existing powers in inseparable and tragic ways. Far too often such critiques focus on what authoritarianism questions in the existing world, and what in the existing world must be preserved. Too seldom do they inquire into what about the past world must be overcome in order to find a way out of authoritarianism. Our suggestion is that we need to better understand the present, historically concrete connection between knowledge and power (Foucault) with respect to both agreements with and critiques of authoritarianism. Thus, we turn our attention to conditions of acceptability. Foucault describes this process in his essay "What is critique?" He suggests there that we shift our analysis from the empirical observability of an ensemble to its historical acceptability, from the point of its acceptance to what makes it acceptable, from the fact of acceptance to the system of acceptability. Following Foucault we seek to pay attention to struggles over the terrain of acceptability. For this reason, we privilege the everyday life of people. We do so not because we believe this to be the privileged (read: exclusive) research site for understanding the acceptability of authoritarian modes of politics and life. Rather, our commitment consists in the belief that acceptability necessarily takes hold in the realm of the everyday, in mundane modes of living and thinking, in order for authoritarian discourses and practices to become successful. And we believe that such processes occur in contradictory rather than straightforward ways. How are such contradictions articulated? How are they lived? Sometimes we focus too much on public discourse instead of the social conditions that allow for and promote that discourse. By focusing on everyday utterances, perhaps we can speak differently about the models, concepts and qualities of authoritarianism. We may notice unforeseen ways in which modes and practices of authoritarianism are articulated and how they move around in our lives. In our studies of cultures of rejection, we may find cultures of acceptability. And we may detect cultures of unacceptability.
<urn:uuid:3b3d8a40-64bd-4536-aef0-c53899630653>
CC-MAIN-2022-33
http://culturesofrejection.net/concepts-notes
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571982.99/warc/CC-MAIN-20220813172349-20220813202349-00099.warc.gz
en
0.952431
6,011
2.96875
3
A Tradie relies on his tools, e.g., they influence the jobs he can or can’t do, the time spent on the job, and safety issues. Tools are essential to a Tradie’s livelihood – without tools a Tradie can’t do the work. “A good workman looks after his tools.” My brother (who among other things is a Carpenter, Builder and Paver) said this to me once, and it stuck. I remember when my Mum, sister and I helped him with a renovation project. And by ‘helped’ I mean, he taught us (e.g. techniques to use tools when working in a tricky corner) so we could help. During this reno project, I noticed the care he took with his tools, e.g. when he showed us how to use them, as he worked and afterwards, as he cleaned his tools and returned them to their place. A Tradie looks after his tools, he: • Stores them correctly because it keeps the tools in good shape – “a place for everything and everything in its place.” • Only uses tools that are in good working order, e.g. he doesn’t use dull blades, or a mallet with a loose head (that ain’t gonna do anyone any good) • Knows rust is an enemy of tools (makes them unsafe and unusable), so he cleans his tools – wipes them down and conditions his tools, e.g. oil, which forms a seal on the tool • Sees when a tool needs repair and he gets the tool repaired • Knows to buy quality tools rather than buying something because it was on sale. He commits to quality, checks and cleans his tools, because he knows it impacts the quality of his work • May keep a few frequently used tools within easy reach for jobs around the house Today we are putting the ‘tool’ of respect in your tool belt. Respect (“the most neglected”) is: “the acknowledgement of a person’s worth. It includes honour, interest in, regard, recognition, admiration, and affirmation” Respect is the most neglected tool in the four components, so let’s learn from Tradies and look after our relationship ‘tools’. Let’s: • Work at nurturing our ‘tools’ to get them in good shape • Realise Satan is an enemy to our relationships (he wants to make them unsafe and unusable). So let’s bring ourselves and our ‘tools’ before God, so He can work in us, and just like oil is used to seal a tool, the Holy Spirit is our seal (Eph. 1:13), let’s ask for His operation in our lives • Notice if there is a problem with any of our ‘tools’ (e.g. out of balance, weak, etc) and work on repairing them and growing them in balance • Realise our ‘tools’ impact the quality of our work. Let’s use our tools to build quality relationships • Keep our tools handy at all times for use in our relationships It’s Demo Day, so, in our renovation, let’s bring respect back! DEMOLISH: Devaluing behaviour Stay connected to God: One aspect of respect is giving recognition to who others are. Proverbs 9:10 says, “The fear of the Lord is the beginning of wisdom, and the knowledge of the Holy One is understanding.” “Our attaining of wisdom begins with our relationship with God; that wisdom grows as we draw closer to our Lord; and our relationship with Him deepens as we begin to know His nature and His ways. Therefore, revere and honour the Lord, His reproofs, instructions, and advice with humility, knowing that from these come wisdom and understanding.” • To give you understanding of His character • To reveal Himself to you • If you have any wrong or distorted views of His character and, if so, ask Him to bring His truth, so you have a correct view of His character! Another aspect of respect is appreciating someone in your life.
This means appreciating God for Who He is and all He has done! “For God so loved the world that He gave His only begotten Son, that whoever believes in Him should not perish but have everlasting life. For God did not send His Son into the world to condemn the world, but that the world through Him might be saved.” It’s easy to love those who love us, but while we were still in sin, God showed His love by sending His Son, and in love, Jesus gave His life for us. Almighty God, the Creator of the heavens and the earth, made us for relationship with Him! He sought us and pursued us! That is mind blowing! God deserves all the glory and honour! “Moses…asked the vital question: Who is our Supreme Commander? The answer pierced the air with crystal clarity; ‘I AM WHO I AM.’ (Exodus 3:14). Electric! Fascinating! Authoritative! All-encompassing! Conclusive! ♦ Supreme in authority ♦ Timeless in existence ♦ Unquestionable in Sovereignty ♦ Ingenious in creativity ♦ Limitless in power ♦ Terrible in wrath ♦ Majestic in splendour ♦ Awesome in holiness ♦ Infinite in wisdom and knowledge ♦ Unsearchable in understanding ♦ Dazzling in beauty ♦ Unfathomable in love ♦ Incomprehensible in humility ♦ Absolute in justice ♦ Unending in mercy ♦ Matchless in grace God has totality of ownership, and is the ruling, reigning Monarch of the universe with an indestructible, eternal kingdom.” J. Dawson Prayerfully read and meditate upon scriptures on who God is (part of respect is recognising who someone is and what they do). Here’s some verses from Isaiah 40, to get you started: Isaiah 40: 9b-15 “Here is your God!” See, the Sovereign Lord comes with power, and He rules with a mighty arm. See, His reward is with Him, and His recompense accompanies Him. He tends His flock like a shepherd: He gathers the lambs in His arms and carries them close to His heart; He gently leads those that have young. Who has measured the waters in the hollow of His hand, or with the breadth of His hand marked off the heavens? Who has held the dust of the earth in a basket, or weighed the mountains on the scales and the hills in a balance? Who can fathom the Spirit of the Lord, or instruct the Lord as His counsellor? Whom did the Lord consult to enlighten Him, and who taught Him the right way? Who was it that taught Him knowledge, or showed Him the path of understanding? Surely the nations are like a drop in a bucket; they are regarded as dust on the scales; He weighs the islands as though they were fine dust. “To whom will you compare Me? Or who is My equal?” says the Holy One. Lift up your eyes and look to the heavens: Who created all these? He who brings out the starry host one by one and calls forth each of them by name. Because of His great power and mighty strength, not one of them is missing. Stay connected to others: • Pray God’s love will be in you flowing out to others. Ask God for His strength, wisdom, insight, patience, peace, understanding, and compassion as you demolish devaluing and disrespectful behaviour and build respect in your life and relationships. • Pray God would be the centre of your marriage and the centre of your home. Pray, His peace and truth would reign in your family. Pray that God would show you and lead you in His ways, with His truth and love. • Pray your home will be filled with the Presence of God. • Thank God for your “respect strengths” (as you continue to work to keep them strong). • Thank God for the people in your world and their “respect strengths”. • Thank God for the resources you have (e.g. 
His word) and the work of God (in your life and relationships) that will enable you to continue to build strong, healthy, satisfying, growing relationships with the people in your world. • Pray God will help you grow in the areas of respect that are out of balance in your life and out of balance in your relationships. Pray God will show you (and your loved ones), which elements/characteristics of respect need nurturing and growth. • Thank God for what He is going to do in your life and in your relationships! Renovation process step one: DESIGN Disrespect shows itself in behaviour such as, when someone ignores or pushes your boundaries, doesn’t listen to you, when it’s their way at all costs, when they put you down, and take care of their needs without considering you or your needs. In essence, disrespect is when someone treats you “as less than.” There is a misconception respect means giving someone control, coming under his/her domination, and meeting his/her demands. But, if disrespect is treating someone as less than, the essence of respect is recognising and acknowledging the value and worth of someone. God created humankind; people are His by His creative act. He looked at His creation and said it is very good. And people who are saved, are His by His redemptive act. People have value given to them by God. Genesis 1:27, 31a: “God created man in His own image, in the image of God He created him; male and female He created them…God saw all that He had made, and behold, it was very good…” At the Fall, sin entered the world and our sin separates us from God. But that’s not the end of the story – nothing takes God by surprise. God had a plan to reconcile man to Himself and nothing could stop Him fulfilling it. God sent His only, beloved Son, Jesus, to die for you – dealing with sin, redeeming and reconciling you to Himself through Jesus. Through Jesus’ death and resurrection, you can have a restored relationship with God!! That’s how valuable you are! God established a love relationship, which He has never been willing to cancel. Your value, your worth comes from God (see healthy relationships 3 & 4). Just as God made you to have value and worth, the same is true for the people in your world, they have value and worth given by God! Respect has two parts: 1) Respect yourself 2) Respect others To respect others, you need to respect yourself. This means understanding your God given value and worth (and in doing so, you can understand others’ God given value and worth). Read and meditate on scriptures that tell you what God says about you! “For we are His workmanship [His own master work, a work of art], created in Christ Jesus [reborn from above—spiritually transformed, renewed, ready to be used] for good works, which God prepared [for us] beforehand [taking paths which He set], so that we would walk in them [living the good life which He prearranged and made ready for us].” Ephesians 2:10 (AMP). Respect means realising and acknowledging the worth and value God has placed on people. You are precious in God’s eyes. The people in your world are precious in God’s eyes. Next time you look at your partner, family members, friends, etc., hit the pause button, take a moment and think about their value and worth which isgiven by Almighty God! Recognise and remember the people in your world have value and are vulnerable, so treat them with respect and they’ll feel safe around you. The two parts of respect have another equally important function, which is vital to healthy relationships. 
All relationships involve choice. When others devalue you and treat you as less than, you can choose to recognise your value and worth by safeguarding yourself (see healthy relationships 7 for more on this). In other words, when others forget to respect you, you can choose to respect yourself. “If you can’t remember how valuable and vulnerable you are, then your whole well being depends on others remembering. To the degree they remember you are safe, but to the degree they forget, you aren’t safe. In that case, you are helpless and have no say.” Can you see the importance of recognising your value and vulnerability? When you know your value, worth and vulnerability, you can build and maintain a safe environment by creating healthy boundaries so that even if others forget at times that you are valuable and vulnerable, you can respect and safeguard yourself. Renovation process step two: PLAN 1 Peter 2:17: “Show respect for all men [treat them honourably]. Love the brotherhood (the Christian fraternity of which Christ is the Head). Reverence God. Honour those in authority.” Everyone has deep inner needs, to know: • They are accepted • They have significance and matter • They have value and worth These deep inner needs are fully met and satisfied in God alone. Our needs are met through our identity in Christ: • You belong to God (security, sense of belonging) • You’re good and acceptable for He has redeemed you (self-worth) • You have a good destiny (significance); being created in His image, redeemed for His glory, serving Him and living forever with Him. Most people try and have their deep inner needs (e.g. self worth) met by other people or things rather than going to God. When these needs aren’t met, we may feel, for example, a love deficit (shortfall), or rejected. All relationships have the risk of rejection, which is defined as the sense of being unwanted. Because we live in a fallen word, we get wounded. We can develop responses to others and ourselves, e.g. resentment, hostility, depression, fear, insecurity, withdrawal or overachieving (to name a few). Because of wounds, we can become hard or hide. When we become hard, it’s an aggressive response. People build walls around themselves to protect self from further hurt, hiding the real self, replacing it with a false self projecting, e.g. success, niceness, or toughness. These walls, although we think they’re strong, are actually fragile and they will fail. And inside these walls, are wounds that need healing. Walls are always built because people don’t feel safe –they feel threatened. Walls are built for protection and self-preservation. If there are walls in your life, or in those close to you, respect the wall was built because you or others felt unsafe. For example, walls could be built because trust was betrayed or because there is a lack of respect. So, what do we do? People don’t want to stay closed, in defensive mode. Generally, people “long for relationships in which [they] feel completely safe. [They] want to feel free to open up and reveal who [they] really are and know that the other person will still love, accept and value [them] – no matter what.” Let’s say someone built a wall because of something you said and did. The first step is to respect the other person needs the wall right now. 
It’s about “seek[ing] to understand and value [the individual’s] concerns.” It’s important to let the other person know, you understand the wall is there because they don’t feel safe, and let them know that you will work on yourself (and not stop) till the other person eventually feels safe. Examine what it was you did to create an unsafe environment, and fix it. In other words, it’s not about insisting the individual gives you something, e.g. ‘trust me’, but it’s about you creating an environment of safety by working on yourself. If you try to bulldoze walls, you just confirm to the other person they aren’t safe around you. God made us for healthy, safe relationships. He wants to come in His love and bring healing to emotional wounds. You are safe in His hands. Dear one, if you have been wounded by rejection, a Divine Exchange took place at the Cross – Jesus took our rejection so we could have acceptance with God! God doesn’t just put up with you – He loves you! God wants to come and bring healing to your wounds (for more info please see healthy relationships 3 & 4). God made you, and He has placed value on you! You have value, you have worth, you are precious and you are wanted! Renovation process step three: CONSTRUCT How to show respect: You show acceptance: you don’t try to change the person or control them. When we accept others for who they are, we free them from the pressure of being changed and moulded into the person we want them to be. Remember, we can’t change others, only ourselves. You welcome the person – it’s like saying, “I’m glad you are a part of my life!” Q: How do the people in your world show they accept you? How do you accept others? Q: If you are in a relationship, how does your partner express acceptance of you? How do you show your partner you accept them for who they are? You give recognition: You notice people. You aren’t disinterested in them or tolerate them. You notice who a person is and what he/she does – and you let them know that you notice, appreciate, and recognise who they are and what they do! Q: Do you notice people? Or are you disinterested in them or tolerate them? Q: If you notice people, how do you give recognition for who they are and what they do? Think of an example. If you have a partner, how do you give him/her recognition? (Or family member/close friend) Q: How do people in your world give recognition for who you are and what you do? Think of an example. If you have a partner, how does he/she give you recognition? You give affirmation and encouragement: You believe in others and look for ways to build them up. You believe in others even when they might not believe in themselves and you encourage them (1 Thess. 5:11). You don’t take who they are or what they do for granted. Think of an example of how you do this, and how the other person in your life does this for you. You give appreciation: You express your pleasure of being a part of this person’s life (it’s personal). Q: Do you do this for others (e.g. your partner/close friends & family members)? Think of an example. Q: Do others do this for you (e.g. your partner/close friends & family members)? Think of an example. You give admiration: You give credit to someone for his/her ability and there are no strings attached, no wrong motives or jealousy. “Do nothing out of selfish ambition or vain conceit. Rather, in humility value others above yourselves.” (Philippians 2:3). Q: How do you do this for others (e.g. your partner/close friends & family members)? Think of an example. 
Q: How does your partner/close friends & family members do this for you? Think of an example. Respect isn’t just about what you say; it’s about what you do. Respect includes your non-verbal communication (body language). Respect can’t be partly given; anything less than 100% respect is not respect. Respect is about recognising and acknowledging a person’s value, worth and vulnerability. All the best as you continue on the journey using respect to help you build strong healthy relationships! See you next time, when we put the ‘tool’ of understanding in your tool belt!
<urn:uuid:eadd86c5-3c41-4514-8d7c-1ac6665da61c>
CC-MAIN-2022-33
https://www.victorylifechristianchurch.com.au/healthy-relationships-8/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.95/warc/CC-MAIN-20220817032054-20220817062054-00498.warc.gz
en
0.956607
4,389
2.53125
3
Where did Earth’s water come from? That’s one of the most compelling questions in the ongoing effort to understand life’s emergence. Earth’s inner solar system location was too hot for water to condense onto the primordial Earth. The prevailing view is that asteroids and comets brought water to Earth from regions of the Solar System beyond the frost line. But a new study published in the journal Nature Astronomy proposes a further explanation for Earth’s water. As the prevailing view says, some of it could’ve come from asteroids and comets. But most of the hydrogen was already here, waiting for Earth to form. There are lots of potential uses for a Mars colony. It could be a research outpost, mining colony, or even a possible second home if something happens to go drastically wrong on our first one. But it could also be a potential source of what is sure to be one of the most valuable elements in the space economy – hydrogen. White dwarfs are supposed to be dead remnants of stars, doomed to simply fade away into the background. But new observations show that some are able to maintain some semblance of life by wrapping themselves in a layer of fusing hydrogen. We’re waiting patiently for telescopes like the James Webb Space Telescope to see first light, and one of the reasons is its ability to study the atmospheres of exoplanets. The idea is to look for biosignatures: things like oxygen and methane. But a new study says that exoplanets with hydrogen in their atmospheres are a good place to seek out alien life. The Sun. It’s a big ball of fire, right? Apparently not. In fact, what’s going on inside of the Sun took us some time and knowledge of physics to finally figure out: stellar fusion. Let’s talk about the different kinds of fusion, and how we’re trying to adapt it to generate power here on Earth. We have comets and asteroids to thank for Earth’s water, according to the most widely-held theory among scientists. But it’s not that cut-and-dried. It’s still a bit of a mystery, and a new study suggests that not all of Earth’s water was delivered to our planet that way. Ever since the existence of antimatter was proposed in the early 20th century, scientists have sought to understand how it relates to normal matter, and why there is an apparent imbalance between the two in the Universe. To do this, particle physics research in the past few decades has focused on the anti-particle of the most elementary and abundant atom in the Universe – the antihydrogen particle. Until recently, this has been very difficult, as scientists have been able to produce antihydrogen, but unable to study it for long before it annihilated. But according to a recent study that was published in Nature, a team using the ALPHA experiment was able to obtain the first spectral information on antihydrogen. This achievement, which was 20 years in the making, could open up an entirely new era of research into antimatter. Measuring how elements absorb or emit light – i.e. spectroscopy – is a major aspect of physics, chemistry and astronomy. Not only does it allow scientists to characterize atoms and molecules, it allows astrophysicists to determine the composition of distant stars by analyzing the spectrum of the light they emit. In the past, many studies have been conducted into the spectrum of hydrogen, which constitutes roughly 75% of all baryonic mass in the Universe. These have played a vital role in our understanding of matter, energy, and the evolution of multiple scientific disciplines.
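As a side note on where those hydrogen spectral lines sit, here is a minimal sketch using the standard textbook Bohr/Rydberg energy levels. This is not taken from the Nature study; the constants are rounded, and the simple formula ignores fine structure, QED corrections and the tiny 2s–2p splitting that precision experiments like ALPHA actually care about.

```python
# A rough illustration of hydrogen's line positions (standard textbook physics).
HC_EV_NM = 1239.84      # Planck constant times speed of light, in eV*nm
RYDBERG_EV = 13.6057    # hydrogen ground-state binding energy, in eV

def transition_wavelength_nm(n1: int, n2: int) -> float:
    """Wavelength (nm) of the photon for a hydrogen n1 -> n2 transition."""
    energy_ev = RYDBERG_EV * (1.0 / n1**2 - 1.0 / n2**2)
    return HC_EV_NM / energy_ev

if __name__ == "__main__":
    # The n=1 -> n=2 transition sits deep in the ultraviolet, near 121.6 nm,
    # which is why (anti)hydrogen spectroscopy needs UV laser systems.
    print(f"1 -> 2: {transition_wavelength_nm(1, 2):.1f} nm")
    # First Balmer line (visible, red) for comparison.
    print(f"2 -> 3: {transition_wavelength_nm(2, 3):.1f} nm")
```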
But until recently, studying the spectrum of its anti-particle has been incredibly difficult. For starters, it requires that the particles that constitute antihydrogen – antiprotons and positrons (anti-electrons) – be captured and cooled so that they may come together. In addition, it is then necessary to maintain these particles long enough to observe their behavior, before they inevitably make contact with normal matter and annihilate. Luckily, technology has progressed in the past few decades to the point where research into antimatter is now possible, thus affording scientists the opportunity to deduce whether the physics behind antimatter is consistent with the Standard Model or goes beyond it. As the CERN research team – which was led by Dr. Ahmadi of the Department of Physics at the University of Liverpool – indicated in their study: “The Standard Model predicts that there should have been equal amounts of matter and antimatter in the primordial Universe after the Big Bang, but today’s Universe is observed to consist almost entirely of ordinary matter. This motivates physicists to carefully study antimatter, to see if there is a small asymmetry in the laws of physics that govern the two types of matter.” Beginning in 1996, this research was conducted using the AnTiHydrogEN Apparatus (ATHENA) experiment, a part of the CERN Antiproton Decelerator facility. This experiment was responsible for capturing antiprotons and positrons, then cooling them to the point where they can combine to form antihydrogen. Since 2005, this task has become the responsibility of ATHENA’s successor, the ALPHA experiment. Using updated instruments, ALPHA captures atoms of neutral antihydrogen and holds them for a longer period before they inevitably annihilate. During this time, research teams conduct spectrographic analysis using ALPHA’s ultraviolet laser to see if the atoms obey the same laws as hydrogen atoms. As Jeffrey Hangst, the spokesperson of the ALPHA collaboration, explained in a CERN update: “Using a laser to observe a transition in antihydrogen and comparing it to hydrogen to see if they obey the same laws of physics has always been a key goal of antimatter research… Moving and trapping antiprotons or positrons is easy because they are charged particles. But when you combine the two you get neutral antihydrogen, which is far more difficult to trap, so we have designed a very special magnetic trap that relies on the fact that antihydrogen is a little bit magnetic.” In so doing, the research team was able to measure the frequency of light needed to cause a positron to transition from its lowest energy level to the next. What they found was that (within experimental limits) there was no difference between the antihydrogen spectral data and that of hydrogen. These results are an experimental first, as they are the first spectral observations ever made of an antihydrogen atom. Besides allowing for comparisons between matter and antimatter for the first time, these results show that antimatter’s behavior – vis a vis its spectrographic characteristics – is consistent with the Standard Model. Specifically, they are consistent with what is known as Charge-Parity-Time (CPT) symmetry. This symmetry theory, which is fundamental to established physics, predicts that energy levels in matter and antimatter would be the same. As the team explained in their study: “We have performed the first laser-spectroscopic measurement on an atom of antimatter.
As the team explained in their study: “We have performed the first laser-spectroscopic measurement on an atom of antimatter. This has long been a sought-after achievement in low-energy antimatter physics. It marks a turning point from proof-of-principle experiments to serious metrology and precision CPT comparisons using the optical spectrum of an anti-atom. The current result… demonstrate that tests of fundamental symmetries with antimatter at the AD are maturing rapidly.”

In other words, the confirmation that matter and antimatter have similar spectral characteristics is yet another indication that the Standard Model holds up – just as the discovery of the Higgs boson in 2012 did. It also demonstrates the effectiveness of the ALPHA experiment at trapping antimatter particles, which will benefit other antihydrogen experiments. Naturally, the CERN researchers were very excited by this result, and it is expected to have far-reaching implications. Beyond offering a new means of testing the Standard Model, it is also expected to go a long way towards helping scientists understand why there is a matter-antimatter imbalance in the Universe, which would be yet another crucial step in discovering exactly how the Universe as we know it came to be.

We use a tool called Trello to submit and vote on stories we would like to see covered each week, and then Fraser will select the stories from there. Here is the link to the Trello WSH page (http://bit.ly/WSHVote), which you can see without logging in. If you’d like to vote, just create a login and help us decide what to cover! If you would like to join the Weekly Space Hangout Crew, visit their site here and sign up. They’re a great team who can help you join our online discussions! If you would like to sign up for the AstronomyCast Solar Eclipse Escape, where you can meet Fraser and Pamela, plus WSH Crew and other fans, visit our site linked above and sign up! We record the Weekly Space Hangout every Friday at 12:00 pm Pacific / 3:00 pm Eastern. You can watch us live on Universe Today or the Universe Today YouTube page.

Scientists have understood for some time that the most abundant elements in the Universe are simple gases like hydrogen and helium. These make up the vast majority of its observable mass, dwarfing all the heavier elements combined (and by a wide margin). Between the two, helium is the second lightest and second most abundant element, accounting for about 24% of the observable Universe’s elemental mass. Whereas we tend to think of helium as the hilarious gas that does strange things to your voice and allows balloons to float, it is actually a crucial part of our existence. In addition to being a key component of stars, helium is also a major constituent of gas giants. This is due in part to its very high nuclear binding energy, plus the fact that it is produced by both nuclear fusion and radioactive decay. And yet, scientists have only been aware of its existence since the late 19th century.

The Sun has always been the center of our cosmological systems. But with the advent of modern astronomy, humans have become aware of the fact that the Sun is merely one of countless stars in our Universe. In essence, it is a perfectly normal example of a G-type main-sequence star (G2V, aka. “yellow dwarf”). And like all stars, it has a lifespan, characterized by formation, a main sequence, and an eventual death. This lifespan began roughly 4.6 billion years ago and will continue for about another 4.5 – 5.5 billion years, when the Sun will deplete its supply of hydrogen and helium and collapse into a white dwarf. But this is just the abridged version of the Sun’s lifespan.
As always, God (or the Devil, depending on who you ask) is in the details! To break it down, the Sun is about halfway through the most stable part of its life. Over the course of the past four billion years, during which time planet Earth and the entire Solar System were born, it has remained relatively unchanged. This will stay the case for another four billion years or so, at which point it will have exhausted its supply of hydrogen fuel. When that happens, some pretty drastic things will take place!

The Birth of the Sun: According to Nebular Theory, the Sun and all the planets of our Solar System began as a giant cloud of molecular gas and dust. Then, about 4.57 billion years ago, something happened that caused the cloud to collapse. This could have been the result of a passing star, or shock waves from a supernova, but the end result was a gravitational collapse at the center of the cloud. From this collapse, pockets of dust and gas began to collect into denser regions. As the denser regions pulled in more and more matter, conservation of angular momentum caused the material to begin rotating, while increasing pressure caused it to heat up. Most of the material ended up in a ball at the center, while the rest flattened out into a disk that circled around it. The ball at the center would eventually form the Sun, while the disk of material would form the planets. The Sun spent about 100,000 years as a collapsing protostar before temperature and pressure in the interior ignited fusion at its core. The Sun started as a T Tauri star – a wildly active star that blasted out an intense solar wind. And just a few million years later, it settled down into its current form. The life cycle of the Sun had begun.

The Main Sequence: The Sun, like most stars in the Universe, is on the main sequence stage of its life, during which nuclear fusion reactions in its core fuse hydrogen into helium. Every second, about 600 million tons of hydrogen are fused into helium, with roughly 4 million tons of matter converted into neutrinos and solar radiation – an output of roughly 4 x 10^26 watts of energy. For the Sun, this process began 4.57 billion years ago, and it has been generating energy this way ever since. However, this process cannot last forever, since there is a finite amount of hydrogen in the core of the Sun. So far, the Sun has converted an estimated 100 times the mass of the Earth into helium and solar energy. As more hydrogen is converted into helium, the core continues to shrink, allowing the outer layers of the Sun to move closer to the center and experience a stronger gravitational force. This places more pressure on the core, which is resisted by a resulting increase in the rate at which fusion occurs. Basically, this means that as the Sun continues to expend hydrogen in its core, the fusion process speeds up and the output of the Sun increases. At present, this is leading to a 1% increase in luminosity every 100 million years, and a 30% increase over the course of the last 4.5 billion years.
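The figures above are enough for a rough self-consistency check; a small sketch using only numbers quoted in this article, and treating the 1%-per-100-million-year brightening as compound growth (an approximation):

```python
# Rough sanity checks on the solar figures quoted in the text.

C = 2.998e8  # speed of light, m/s

# ~4 million metric tons of matter converted to energy per second (E = m * c^2)
mass_per_second_kg = 4.0e9
luminosity_watts = mass_per_second_kg * C**2
print(f"Energy output: {luminosity_watts:.2e} W")  # ~3.6e26 W, close to 4 x 10^26 W

# Brightening: ~1% per 100 million years, compounded
def relative_brightness(years_from_now: float) -> float:
    return 1.01 ** (years_from_now / 1.0e8)

for gyr in (1.1, 3.5):
    gain = (relative_brightness(gyr * 1e9) - 1) * 100
    print(f"In {gyr} billion years: ~{gain:.0f}% brighter")
    # prints ~12% and ~42%, in line with the ~10% and ~40% quoted in the next paragraph
```

The small mismatch simply reflects that the real brightening rate is not exactly constant; the article's round numbers are consistent with one another to within a few percent.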
In 1.1 billion years from now, the Sun will be 10% brighter than it is today, and this increase in luminosity will also mean an increase in heat energy, which Earth’s atmosphere will absorb. This will trigger a moist greenhouse effect here on Earth that is similar to the runaway warming that turned Venus into the hellish environment we see there today. In 3.5 billion years from now, the Sun will be 40% brighter than it is right now. This increase will cause the oceans to boil, the ice caps to permanently melt, and all water vapor in the atmosphere to be lost to space. Under these conditions, life as we know it will be unable to survive anywhere on the surface. In short, planet Earth will come to be another hot, dry Venus.

Core Hydrogen Exhaustion: All things must end. That is true for us, that is true for the Earth, and that is true for the Sun. It’s not going to happen anytime soon, but one day in the distant future, the Sun will run out of hydrogen fuel and slowly slouch towards death. This will begin in approximately 5.4 billion years, at which point the Sun will exit the main sequence of its lifespan. With its hydrogen exhausted in the core, the inert helium ash that has built up there will become unstable and collapse under its own weight. This will cause the core to heat up and get denser, causing the Sun to grow in size and enter the Red Giant phase of its evolution. It is calculated that the expanding Sun will grow large enough to encompass the orbits of Mercury, Venus, and maybe even Earth. Even if the Earth survives, the intense heat from the red Sun will scorch our planet and make it completely impossible for life to survive.

Final Phase and Death: Once it reaches the Red-Giant-Branch (RGB) phase, the Sun will have approximately 120 million years of active life left. But much will happen in that time. First, the core (full of degenerate helium) will ignite violently in a helium flash, in which approximately 6% of the core (itself about 40% of the Sun’s mass) will be converted into carbon within a matter of minutes. The Sun will then shrink to around 10 times its current size and 50 times its luminosity, with a temperature a little lower than today. For the next 100 million years, it will continue to burn helium in its core until it is exhausted. By this point, it will be in its Asymptotic-Giant-Branch (AGB) phase, where it will expand again (much faster this time) and become more luminous. Over the course of the next 20 million years, the Sun will then become unstable and begin losing mass through a series of thermal pulses. These will occur every 100,000 years or so, becoming larger each time and increasing the Sun’s luminosity to 5,000 times its current brightness and its radius to over 1 AU.

At this point, the Sun’s expansion will either encompass the Earth, or leave it entirely inhospitable to life. Planets in the Outer Solar System are likely to change dramatically, as more energy is absorbed from the Sun, causing their water ices to sublimate – perhaps forming dense atmospheres and surface oceans. After 500,000 years or so, only half of the Sun’s current mass will remain, and its outer envelope will begin to form a planetary nebula. The post-AGB evolution will be even faster, as the ejected mass becomes ionized to form a planetary nebula and the exposed core reaches 30,000 K. The final, naked core temperature will be over 100,000 K, after which the remnant will cool towards a white dwarf. The planetary nebula will disperse in about 10,000 years, but the white dwarf will survive for trillions of years before fading to black.

Ultimate Fate of our Sun: When people think of stars dying, what typically comes to mind are massive supernovae and the creation of black holes. However, this will not be the case with our Sun, for the simple reason that it is not nearly massive enough. While it might seem huge to us, the Sun is a relatively low-mass star compared to some of the enormous high-mass stars out there in the Universe.
As such, when our Sun runs out of hydrogen fuel, it will expand to become a red giant, puff off its outer layers, and then settle down as a compact white dwarf, slowly cooling for trillions of years. If, however, the Sun had about 10 times its current mass, the final phase of its lifespan would be significantly more (ahem) explosive. When this super-massive Sun ran out of hydrogen fuel in its core, it would switch over to fusing helium, and then carbon (just like our own). This process would continue, with the star consuming heavier and heavier fuel in concentric layers. Each layer would take less time than the last, all the way up to nickel – which could take just a day to burn through. Then iron would start to build up in the core of the star. Since iron doesn’t give off any energy when it undergoes nuclear fusion, the star would have no more outward pressure in its core to prevent it from collapsing inward. Once about 1.38 times the mass of the Sun in iron had collected at the core, the core would catastrophically implode, releasing an enormous amount of energy. Within eight minutes, the time it takes light to travel from the Sun to Earth, an incomprehensible amount of energy would sweep past the Earth and destroy everything in the Solar System.

The energy released from this event might be enough to briefly outshine the galaxy, and a new nebula (like the Crab Nebula) would be visible from nearby star systems, expanding outward for thousands of years. All that would remain of the Sun would be a rapidly spinning neutron star, or maybe even a stellar black hole. But of course, this is not to be our Sun’s fate. Given its mass, it will eventually settle into a white dwarf and slowly burn itself out. And this won’t be happening for another 6 billion years or so. By that point, humanity will either be long dead or have moved on. In the meantime, we have plenty of days of sunshine to look forward to!
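Two of the figures quoted above are easy to check. The "eight minutes" is just the Earth-Sun light-travel time, and the roughly 1.38-solar-mass threshold for the collapsing iron core corresponds to the Chandrasekhar limit, the maximum mass that electron degeneracy pressure can support. A quick sketch of the first check:

```python
# Quick check of the "eight minutes" quoted above: the Earth-Sun light-travel time.
AU_METERS = 1.496e11   # mean Earth-Sun distance, m
C = 2.998e8            # speed of light, m/s

travel_seconds = AU_METERS / C
print(f"{travel_seconds:.0f} s = {travel_seconds / 60:.1f} minutes")  # ~499 s, ~8.3 min
```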
The Buckeye Knoll site (41VT98) contains a prolonged record of short-term continuous site use over a period of 8,000 years (8,500–500 BP), with evidence of resource caching for future occupations. We know very little about Archaic life history, and Buckeye Knoll constitutes one of the largest populations available for testing hypotheses regarding health and disease in this early period of North American prehistory. Excavation uncovered 75 discrete burial loci and recovered a minimum number of 116 individuals, dated to 8,500–3,500 BP using tooth and bone collagen samples. Buckeye Knoll was exhumed and reburied in compliance with the Native American Graves Protection and Repatriation Act (NAGPRA), so any future data collection or analysis must come from the digital photographs collected for archival purposes (Ricklis, Weinstein & Wells, 2012c).

Dental enamel hypoplasia defects represent an interruption in the growth process of teeth and can be attributed to genetics (Brook, 2009; Hart et al., 2002; Zilberman et al., 2004), trauma (Brook, 2009), and insult (Goodman, 1988; Sarnat & Schour, 1942; Sarnat & Schour, 1941). Those linked to external biological insult (e.g., a foreign disease pathogen or injury) develop when resources normally directed to growth and development are rerouted to defending the body or are insufficient to sustain maintenance activities (e.g., during malnourishment or diarrhea) (Sarnat & Schour, 1942; Sarnat & Schour, 1941). Enamel hypoplastic defects occur on the buccal and labial surfaces of teeth and most commonly manifest as transverse grooves, or linear enamel hypoplasia (LEH), but can also appear as pits (Hillson & Bond, 1997). Because teeth do not remodel, defects formed during growth and development are permanent and have been used to infer early life health in a number of populations (e.g., Berbesque & Doran, 2008; Guatelli-Steinberg, Larsen & Hutchinson, 2004; Hoover & Matsumura, 2008; Lieverse et al., 2007; Temple, 2010). Of particular note are the associations between LEH and weaning stress (e.g., Herring, Saunders & Katzenberg, 1998; Katzenberg, Herring & Saunders, 1996; Moggi-Cecchi, Pacciani & Pinto-Cisternas, 1994) and between LEH and earlier age at death (DeWitte & Stojanowski, 2015; Walter & DeWitte, 2017; Yaussy, DeWitte & Redfern, 2016).

A major shift in dietary pattern and environmental adaptations occurred in the southern United States during the transition from the early to the mid-Holocene. This period was a time of dramatic worldwide changes in temperature, sea level, and coastal ‘configuration’. Buckeye Knoll may have been occupied during a period of climatic transition, the severity of which is unknown. The climate reconstruction for Buckeye Knoll was based primarily on palynology: two cores were taken from the Guadalupe River floodplain adjacent to the Buckeye Knoll site for palynological analysis. These cores enable a regional vegetation reconstruction extending from 9,500 cal. BP to the present. During this period, there were marked changes in climate reflected in the pollen taxa represented, particularly circa 6,000 BP, when climate change increased upland-prairie biomass enough that it may have prompted a shift in subsistence strategy (Ricklis, Weinstein & Wells, 2012a). This might be a factor in the overall levels of systemic stress in populations of this time period, such as Buckeye Knoll.
Here, we aim to infer nonspecific nutritional and developmental stresses via the developmental timing and frequency of linear enamel hypoplasia (LEH) defects in the canines, using photogrammetric methods.

Study site description
The first evidence for human activity at Buckeye Knoll dates to the Paleo-Indian period and consists of scattered artifacts, specifically stone dart points. Prolonged occupation of the site begins in the Archaic period, which is marked by a variety of human activities linked to repeated short-term occupation. Primary artifacts include debitage, projectiles, tools, beads, bone, shell, and hearths; more recent artifacts include indigenous ceramics. The site record contains evidence for a prolonged record of short-term continuous use over a period of 8,000 years (8,500–500 BP). Of particular interest are large pits which may have been used to store food, which suggests longer occupations of up to a few months; even more interesting is evidence for material caching, which suggests intentional, regular re-occupation (Ricklis, Weinstein & Wells, 2012c). Faunal remains recovered from the site are abundant: 74,000 identifiable fragments representing a minimum of 126 vertebrate taxa, including fish (mostly gar), small mammals (often rodents), some large mammals (e.g., deer), and, rarely, birds. The pattern of resource exploitation evidenced by faunal analysis suggests that opportunistic hunting of larger game was gradually replaced by an increased emphasis on net-fishing (evidenced by a shift from larger to smaller fish body sizes) and wider exploitation of other taxa; this may be attributable to increased population demands over time (Ricklis, Weinstein & Wells, 2012c) or to the previously noted climate change that altered the local environment and may have prompted dietary shifts in response.

A total of 75 discrete burials containing 119 individuals were excavated. The majority of burials were single interments, but there were also graves containing multiple individuals. All but one burial (dated to the Late Archaic) were interred on the Knoll Top. Of the remaining 74 burials, the vast majority (n = 68) date to the Texas Early Archaic, including one extremely early burial dated to 8,500 BP. The Texas Early Archaic burial dates tend to cluster between 7,400 and 6,300 BP; the lack of non-mortuary activity at the site during the 7th millennium (roughly 7,000–6,200 BP) suggests that the Knoll Top space was reserved exclusively for treatment of the dead during this time (Ricklis, Weinstein & Wells, 2012b; Ricklis, Weinstein & Wells, 2012c). Texas Early Archaic burials are associated with artifacts that form a unique mortuary assemblage closely related to Middle Archaic period (i.e., ca. 8,000–5,000 BP) cultures in the Mississippi Valley region and beyond; thus, this assemblage reflects larger regional cultural associations. During this period, flexed or semi-flexed burials were most common, followed by a smaller number of disarticulated individuals and an even smaller number of individuals interred in sitting postures. The Late Archaic period was characterized by extended burials (Ricklis, Weinstein & Wells, 2012b).

Photogrammetric materials and methods
Photographs were used for data collection because the Buckeye Knoll sample was reinterred. Reliability of LEH scoring is more robust with photogrammetric methods, which identify significantly more LEH than direct examination (Golkari et al., 2011).
This method was successfully applied in a similar published study of another Early Archaic population, Windover (Berbesque & Doran, 2008). Photographs were taken of the left maxillary and mandibular canines using the Nikon 990 Coolpix in macro mode. The diminished focal length presents some difficulty with depth of focus on anything other than one plane; as teeth are often curved, every attempt was made to capture the labial surface of the tooth with the most clarity. Multiple photographs were taken from different angles to ensure defects were scorable. A metric scale was placed in the plane of the tooth surface in each photograph, and the photographs were taken in high-quality TIFF file format. Missing teeth, or teeth too worn to score, were excluded from analysis. In some cases, dental calculus prevented an accurate measurement of crown height, and measurements were then taken from the bottom of the calculus to the top of the crown; these measurements are primarily for quality control in using imaging software for analysis.

Permanent canines were chosen for data collection because they have a prolonged period of crown formation (7.5 months to 6.5 years for maxillary canines and 10.5 months to 5.5 years for mandibular canines) (AlQahtani, Hector & Liversidge, 2014) and can best capture the peak window of developmental stress caused by weaning (Sandberg et al., 2014). LEH was scored in Microsoft Paint. Once scored, the images were imported into Scion Image for analysis (a PC-friendly software package modeled after the National Institutes of Health's ImageJ, which is commonly used in morphometric studies) (Scion, 2000–2001; http://www.nist.gov/lispix/imlab/labs.html). Developmental timing of each defect was determined using the method of Reid and Dean (Reid & Dean, 2000), which necessitates estimation of the complete, unworn crown height for every tooth. An estimate of completeness for each canine was based on the surrounding dentition and other canines within the population. The median percent complete for permanent dentition is 85% overall; mandibular canines were 86% complete, and maxillary canines were 81% complete. This visual estimate of complete canine height provided a wear estimate for each canine. Because this population has significant dental wear, the stage of development for each defect was determined by measuring the distance from the cemento-enamel junction to the bottom of each defect rather than from the tip of the cusp down to the defect.

All statistical analysis was conducted using SPSS version 22. None of the variables met the assumptions of a normal distribution, so nonparametric statistics were used for all analyses. To place Buckeye Knoll in context with similar populations, data from this study were compared to published data from populations dating to an average of 3,000 years or older contained in the public Global History of Human Health Database (Steckel & Rose, 2002) (see Table 1). Buckeye Knoll was also compared with another Early Archaic population, Windover (8,120–6,980 14C years B.P., uncorrected), using the same methods deployed in this study (Berbesque & Doran, 2008).
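As a rough illustration of the measurement and testing steps described above, the sketch below converts a defect's measured height above the cemento-enamel junction into the percentage of crown formed (the quantity that is then looked up against the Reid & Dean (2000) timing tables, which are not reproduced here) and runs the same kinds of nonparametric tests reported in the results that follow. All numeric values in the example are hypothetical, not the study's data.

```python
# Illustrative sketch only: hypothetical measurements, not the study's data.
# Requires scipy (pip install scipy).
from scipy.stats import mannwhitneyu, spearmanr

def percent_crown_formed(defect_height_mm: float, unworn_crown_height_mm: float) -> float:
    """Fraction of the crown already formed when the defect was laid down.

    defect_height_mm: distance from the cemento-enamel junction to the defect.
    unworn_crown_height_mm: estimated complete (unworn) crown height.
    Enamel forms from the cusp tip toward the CEJ, so a defect closer to the CEJ
    formed later in development.
    """
    return 100.0 * (1.0 - defect_height_mm / unworn_crown_height_mm)

# The published Reid & Dean (2000) tables would map this percentage (per tooth
# type) onto an age in years; that look-up step is omitted here.

# Hypothetical defect-timing estimates (years) for maxillary vs. mandibular canines
maxillary_ages = [3.1, 3.8, 4.2, 4.5, 5.0]
mandibular_ages = [2.9, 3.5, 4.0, 4.4, 4.9, 5.2]
u_stat, p_value = mannwhitneyu(maxillary_ages, mandibular_ages, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, p = {p_value:.3f}")

# Hypothetical per-individual values: number of defects vs. age at earliest defect
defect_counts = [1, 1, 2, 2, 3, 4]
earliest_ages = [5.2, 4.9, 4.3, 4.0, 3.4, 2.8]
rho, p_rho = spearmanr(defect_counts, earliest_ages)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")  # negative rho: more defects, earlier onset
```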
There were 41 deciduous canines in the sample and 92 permanent canines. The permanent dentition consisted of 37 maxillary canines and 43 mandibular canines; 12 could not be identified as maxillary or mandibular. The permanent dentition had a hypoplasia frequency rate of 59% (n = 54 canines with at least one hypoplastic defect) in the population. There was an overall mean of 0.93 defects per permanent canine, with a median of one defect and a mode of zero defects. We did not analyse deciduous dentition for timing of defects: out of 41 deciduous canines in the population, only one defect was found. Despite the limited demographic information available for these mostly isolated dentitions, there was associated skeletal material for some individuals, allowing for a basic breakdown by sex and age category (adults versus juveniles with permanent dentition). Juveniles with permanent dentition had higher rates of multiple defects than the general population (see Table 2). Table 2 provides a breakdown of the sample by presenting the frequency and proportion of the overall sample by LEH count (range = 0–4) and demographic category.

[Table 1, comparative sites (surviving fragment): Tlatilco; 80; some; maize, beans, squash; temperate; small/medium village; 2,930–3,250 BP]

There were no significant differences between the maxilla and mandible in timing of the earliest defect (Mann-Whitney U = 228, earliest maxillary defect N = 20, earliest mandibular N = 27, p = .366) or in number of defects (U = 640.5, maxillary defects N = 37, mandibular defects N = 43, p = .110). The mean age for the earliest defect per individual was 3.92 years (range = 2.5–5.4). Individuals with more LEHs also had an earlier age of first insult (n = 54, rho = −0.381, p = 0.005). The mean developmental age of all defects was 4.18 years (range = 2.5–5.67). A comparative analysis of individual LEH frequency in Buckeye Knoll and populations in the Global History of Human Health Database (Steckel & Rose, 2002) found that Buckeye Knoll had a significantly higher frequency of individuals with one or more LEH on their canines (see Table 3) (Chi-Square = 58.425, df = 4, p < 0.001). LEH incidence in another Early Archaic population, Windover, was more than twice that of Buckeye Knoll (see Table 4) (Berbesque & Doran, 2008). LEH data collection for both sites used the same photographic methods.

[Table 2 columns: Total n; 0 LEH; 1 LEH; 2 LEH; 3 LEH; 4 LEH. Table 3 columns: Site; Total n; 0 LEH; 1 LEH; 2+ LEH. Table 4 columns: Mandibular canine and Maxillary canine, each split into Windover and Buckeye Knoll.]

Discussion and Conclusions
Juveniles with permanent dentition had the highest incidence of LEH. Also, having greater numbers of LEH defects was associated with an earlier age of death, providing some evidence for a mortality curve that would support the use of LEH as a stress indicator in this population and indicating social factors that warrant further investigation. This finding provides some evidence for the Barker Hypothesis, wherein individuals exposed to stressors earlier in life may actually have damaged immunological competence as a consequence of those stressors (Armelagos et al., 2009; Goodman & Armelagos, 1989).

The location of each defect gives insight into the timing of metabolic insult. Cusp enamel completion occurs at 1.7 years for maxillary canines and 0.98 years for mandibular canines (Reid & Dean, 2000). As the first-formed enamel on the occlusal portion of the crown is often worn away by attrition, much of the data on the second year of life is lost. Clustering of LEH around a location on the tooth that corresponds to a particular age might indicate some stressful milestone event, whether culturally flexible (e.g., age of weaning) or not (e.g., birth). Weaning ages across hunter-gatherer societies vary considerably, with New World hunter-gatherers weaning earlier (mean = 2.32 years old) than Old World hunter-gatherers (mean = 3.20 years old), and a combined range of 1 to 4.5 years (Marlowe, 2005).
The mean age of the earliest defect for Buckeye Knoll falls within this range (mean = 3.92), but it is late relative to the mean age of weaning in ethnographically described New World hunter-gatherers. Perhaps the developmental timing of most LEH defects has less to do with extreme stress from weaning and more to do with the acute angles formed by the striae of Retzius relative to the enamel surface during enamel formation. It has been suggested that these acute angles make even small disruptions in enamel production more pronounced and visible in the intermediate and occlusal thirds of the tooth (Blakey, Leslie & Reidy, 1994; Newell et al., 2006).

Of the limited samples of comparable antiquity (minimally over 3,000 years old on average) in the Global History of Human Health Database (Steckel & Rose, 2002; Steckel, Sciulli & Rose, 2002), most populations demonstrated a lower incidence of LEH than Buckeye Knoll (59% with at least one defect). The comparative sample with the LEH frequency closest to Buckeye Knoll's was Tlatilco, a sedentary population with evidence of domesticated plants and animals. Sedentary populations and those using domesticated plants have been found to have a higher incidence of various stress indicators, and agriculturalists are documented as having a higher LEH incidence than foragers (Larsen, 1995; Starling & Stock, 2007). It has been suggested that fishing populations might be at higher risk for LEH defects due to parasite load (Bathurst, 2005). One example of this is found in Japan: prehistoric hunter-gatherer-fishers there have surprisingly high rates of LEH, but these are sedentary, complex, stratified populations (Hoover & Matsumura, 2008; Temple, 2010). Moreover, the higher incidence of defects is widely documented across the archipelago and throughout time; given the abundance of resources and consistently high rates of LEH, a likelier explanation might be a genetic etiology (Hoover & Hudson, 2016; Hoover & Matsumura, 2008; Hoover & Williams, 2016). Coastal populations share a host of traits that may contribute to LEH defects, such as sedentism and reliance on domesticates. Although the Buckeye Knoll population likely relied at least partially on coastal resources, there is no evidence of domesticated plants or animals, or of sedentism, at Buckeye Knoll.

The population most comparable to Buckeye Knoll is Windover. Windover has been assessed for LEH defects using both the methods used in the GHHD and the photogrammetric methods. Even when examining LEH data collected with the unaided eye, Windover had a very high number of individuals affected by LEH defects: in the GHHD, where 100% represents a population completely unaffected by LEH, the score for Windover was 39.5% (Wentz et al., 2006). It is not clear why these two Early Archaic populations both appear to have a surprisingly high incidence of LEH, but a possible ecological explanation is the climate shift during this period, which may have caused physiological stress during times of diminished resources. Buckeye Knoll had a greater incidence of LEH than any other population of comparable age in the Global History of Human Health Database. However, those data were collected by unaided visual assessment only, and photogrammetric methods have been shown to identify greater numbers of LEH defects.
However, Buckeye Knoll had fewer LEH defects than Windover, a population of comparable antiquity, when compared using data collected with the same photogrammetric methods. It is not clear whether the higher incidence of defects seen in these populations is entirely due to methodological differences in data collection, or whether an environmental factor, such as the climate change documented during the Early Archaic period, affected the health of coastal/riverine foragers such as the Windover and Buckeye Knoll populations.

Buckeye Knoll developmental timing of LEH
In-depth data on Buckeye Knoll LEH defects only. Includes counts and the calculated timing of defects (according to Reid & Dean (2000)).

GHHD with Buckeye for Comparison
Published data (http://global.sbs.ohio-state.edu/western_hemisphere_module.htm) older than 3,000 years on average, included for comparative purposes, with the Buckeye Knoll data recoded (the coding scheme is available at the link above along with the data). The filter variable (select 1) selects for sites with an average site age >3,000 years. LPI = left permanent canine for all sites but Buckeye Knoll, where we included all canines in the analysis.
OVERVIEW: What every practitioner needs to know

Are you sure your patient has Acute Community-Acquired Pneumonia? What should you expect to find?
Symptoms consistent with the diagnosis of acute pneumonia include cough (80-90%), which may or may not be productive of sputum, shortness of breath, and fever. Nonrespiratory symptoms may also be present; these include fatigue (90%), sweats (70%), headache, nausea (40%), and myalgia. With increasing age, both respiratory and nonrespiratory symptoms become less common. Fever is present in 65-90% of patients with pneumonia. Heart rate and blood pressure may be normal, unless the patient is dehydrated or showing severe signs of sepsis. Early in the illness, definite signs of infection on auscultation may be missing in up to 60% of patients. If present, they include evidence of consolidation (i.e., dullness on percussion or bronchial breath sounds) or moist rales. Patients with mycoplasmal or viral pneumonia have few, if any, abnormalities on physical examination. Patients who develop pneumonia either while in hospital (hospital-acquired or nosocomial pneumonia) or within 90 days of hospitalization, hemodialysis, home wound care, or residence in a nursing home (healthcare-associated pneumonia; HCAP) may have few of the listed symptoms and may present with fever or hypothermia, a decline in functional status, or confusion.

How did the patient develop Acute Community-Acquired Pneumonia? What was the primary source from which the infection spread?
Infectious agents gain access to the lower respiratory tract through a number of routes: aspiration of micro-organisms present in oral secretions, inhalation of aerosolized material, and, less often, hematogenous seeding of the lungs. Typically, an intact immune system is able to handle these breaches. The upper airways are designed to trap (nasal hairs and turbinates) and clear (ciliated epithelium and mucus-producing cells) inhaled micro-organisms. The local production of complement and bacterial interference from resident flora serve as important factors in local host defense. Secretory IgA possesses antibacterial and antiviral activity despite being a relatively poor opsonin. Adherence to surface epithelium is a crucial step in colonization and subsequent infection. Epithelial cells produce airway surface liquid, a complex mixture of proteins and peptides mixed with plasma transudate; this mixture has antimicrobial activity. Phagocytic cells, including macrophages and neutrophils, play a major role. The alveolar macrophage is located in the alveolar lining fluid at the interface between air and lung tissue. These cells eliminate certain microorganisms and can mediate the inflammatory response if the number of organisms is overwhelming or if the organisms are particularly virulent. Cell-mediated immunity is necessary for defense against viruses and intracellular organisms, such as Mycobacterium tuberculosis.

Which individuals are at greater risk of developing Acute Community-Acquired Pneumonia?
Acute community-acquired pneumonia can occur at any age but most commonly occurs in patients 50-60 years of age. Risk factors for CAP include alterations in level of consciousness following a stroke or seizure, as a consequence of drug or alcohol intoxication, or during normal sleep. Smoking impairs natural pulmonary defenses, including mucociliary function and macrophage activity.
Iatrogenic manipulations of the upper airways during surgical procedures or during mechanical ventilation predispose to infection. Older adults are at increased risk of developing pneumonia due to associated underlying disorders, such as chronic obstructive airways disease, and an increased risk of hospitalization. In addition, natural pulmonary defenses, such as cough and mucociliary clearance, may be impaired; this, together with the changes in humoral and cell-mediated immunity that occur with ageing, contributes to the increased risk. Patients with defects in host defense, either inherited or acquired, are at increased risk. Impairment of leukocyte function and immunoglobulin production, as well as ciliary dysfunction, are associated with recurrent episodes of pneumonia. Acquired host defense defects are more varied and include malignancies, infection (HIV), and iatrogenic causes, such as immunosuppressive therapies.

Beware: there are other diseases that can mimic Acute Community-Acquired Pneumonia:
- Acute bronchitis and acute exacerbation of chronic bronchitis
- Tricuspid valve bacterial endocarditis
- Collagen vascular disorders
- Hypersensitivity lung disease

What laboratory studies should you order and what should you expect to find?
Results consistent with the diagnosis:
- A raised peripheral white cell count may be present.
- Liver enzymes may be elevated in bacterial sepsis.
- A raised C-reactive protein (CRP) may be present.

What imaging studies will be helpful in making or excluding the diagnosis of Acute Community-Acquired Pneumonia?
Computed tomography (CT) is helpful in evaluating patients with recurrent pneumonia or infections unresponsive to treatment.

What consult service or services would be helpful for making the diagnosis and assisting with treatment?
Most patients will be assessed in the Emergency or Acute Assessment Department. There are a number of severity assessment strategies or scoring systems that can be applied to determine whether the patient can be treated as an outpatient, needs admission, or may require intensive care admission. Assessment of severity is complex, and the use of a scoring system does not negate the need for thorough history taking, risk factor assessment, and examination. A recent meta-analysis looking at the use of severity assessment tools to predict mortality showed no significant difference in overall test performance for predicting mortality from CAP. More recently developed systems have focused on predicting admission to the intensive care unit (ICU) rather than 30-day mortality.

What algorithms can be used to assess severity?
- CURB-65 – confusion, urea, respiratory rate, blood pressure, 65 years of age or older; one point for each feature present: 0-1 low severity (risk of death <3%), 2 moderate severity (risk of death 9%), and 3-5 high severity (risk of death 15-40%). (A minimal worked example of this rule appears at the end of this chapter.)
- CRB-65 – CURB-65 without measured urea; suitable for outpatient settings.
- Pneumonia Severity Index (PSI) – uses patient demographics, co-existence of co-morbid conditions, findings on clinical examination, vital signs, and laboratory results; a 20-point score that classifies patients into one of five risk categories.
- SMART-COP – relies on vital signs, CXR features, and laboratory results (i.e., BP, CXR findings, albumin, respiratory rate, presence of tachycardia, confusion, acidosis, and hypoxemia); the maximum score is 11, and an increasing score is associated with an increasing risk of needing ICU admission.
- CORB – acute confusion, low oxygen saturation, elevated respiratory rate, and systolic or diastolic hypotension; 1 point for each feature, with severe pneumonia defined as a score of at least 2 points.
- Severe Community-Acquired Pneumonia (SCAP) score – major criteria are pH < 7.30 and systolic blood pressure < 90 mm Hg; minor criteria are confusion, urea > 30 mg/dL (>10.71 mmol/L), multilobar or bilateral pneumonia on CXR, PaO2 < 54 mm Hg or PaO2/FiO2 < 250 mm Hg, and age ≥ 80 years. One or more major criteria, or two or more minor criteria, defines severe community-acquired pneumonia.

If you decide the patient has Acute Community-Acquired Pneumonia, what therapies should you initiate immediately?
The first dose of antibiotics should be given in the Emergency or Acute Assessment area. The choice of antimicrobial agent depends on the causative pathogen and the antibiotics to which it is susceptible. However, a wide range of pathogens can cause acute pneumonia, and a rapid and accurate diagnostic test is not available. As a consequence, most patients are treated empirically. If a specific pathogen is identified, then the choice of antimicrobial agent should be directed at that pathogen.

1. Anti-infective agents
If I am not sure what pathogen is causing the infection, what anti-infective should I order?
The most common causative pathogen of CAP is S. pneumoniae. Other pathogens, such as Mycoplasma, Legionella, Chlamydophila, and respiratory viruses, may also contribute to up to 50% of cases, depending on seasonality and local demographics. International guidelines have adopted similar approaches but differ subtly in choice of antimicrobial agent (Table I).

[Table I – empiric therapy by severity; the drug-name column of the original table is not preserved here:
- Mild (outpatient treatment): 1 g orally, 8-hourly for 5 to 7 days; 200 mg orally for the first dose, then 100 mg daily for a further 5 days; 250 mg orally, 12-hourly for 5 to 7 days. Penicillin allergy: use doxycycline or a macrolide (clarithromycin or azithromycin).
- Moderate to severe: benzylpenicillin 1.2 g IV, 6-hourly, until significant improvement, then amoxycillin 1 g orally, 8-hourly; 100 mg orally, 12-hourly for 7 days; 500 mg orally, 12-hourly for 7 days. Penicillin allergy: use a respiratory fluoroquinolone (moxifloxacin or levofloxacin).
- If Gram-negative bacilli are identified in sputum or blood: see severe CAP.
- Severe CAP: 1 g IV daily; 1.2 g IV, 4-hourly; 4 to 6 mg/kg for 1 dose, then determine the dosing interval for a maximum of either 1 or 2 further doses based on renal function; 1 g IV, 8-hourly; 500 mg IV daily. For patients with immediate penicillin hypersensitivity, use a respiratory fluoroquinolone plus azithromycin.
- If Pseudomonas is a consideration: … and a macrolide.
- If community-acquired methicillin-resistant S. aureus is a consideration: add vancomycin or linezolid to the severe CAP regimen.]

When to switch to oral antibiotics and length of treatment
For patients with low to moderate severity CAP, there is no contraindication to oral therapy. For patients initially treated with parenteral antibiotics, the switch to an oral regimen should occur as soon as clinical improvement occurs and the temperature has been normal for 24 hours. The length of treatment is determined by the severity of disease. For patients managed in the community and for most patients admitted with low or moderate severity disease, 5-7 days of appropriate antibiotics are recommended. For those with high severity disease, 7-10 days of treatment should be given. For more uncommon causes of CAP, such as CAP due to S. aureus or Gram-negative bacilli, longer courses may be required.
2. Other key therapeutic modalities
- Consider the requirement for supplemental oxygen.
- Intravenous hydration if the patient is dehydrated or unable to maintain oral intake.
- Analgesia for pleuritic chest pain.
- Control of underlying comorbidities, such as heart failure.
- Bronchodilators to treat airflow limitation or improve mucociliary clearance.
- Respiratory physiotherapy to improve clearance of secretions.
Steroids are not recommended for severe CAP.

What complications could arise as a consequence of Acute Community-Acquired Pneumonia?
Potential complications of acute community-acquired pneumonia include empyema and lung abscess, as well as sterile parapneumonic effusions.

What should you tell the family about the patient's prognosis?
A number of factors affect a given patient's prognosis. Although the severity scoring systems can be used to predict 30-day mortality or the need for ICU admission, the performance of a scoring system differs between patient populations and healthcare settings. To be useful, they should be validated in each locale. Poorer prognosis is seen in elderly patients with multiple comorbidities and extensive disease at presentation.

How do you contract Acute Community-Acquired Pneumonia and how frequent is this disease?
Acute pneumonia is among the top ten most common causes of death in those 65 years of age or older and the single most common cause of infection-related mortality. About 156 million new episodes of pneumonia occurred in children worldwide in 2000. The vast majority occurred in children in developing countries, and childhood pneumonia is the leading single cause of mortality in children 5 years of age or younger. The incidence in children younger than 5 years of age is estimated to be 0.29 episodes per child-year in developing countries and 0.05 episodes per child-year in developed countries. The annual incidence in adults is more difficult to determine because of the different approaches used to measure it. The overall annual rate of pneumonia in the United States is 12/1000 persons, and the incidence of CAP requiring hospitalization in adults is 2.6/1000. Recent studies report an incidence of CAP in the range of 3.7-10.1/1000 inhabitants in a community in Germany, an annual incidence of 1.62/1000 inhabitants in Barcelona, Spain, and an overall incidence of 233/100,000 person-years in the United Kingdom. In North America, pneumonia and influenza continue to be common causes of death, ranked eighth in the United States and seventh in Canada, and there were more than 60,000 deaths due to pneumonia in persons 15 years of age or older in the United States alone. The incidence is higher in children younger than 5 years of age, adults older than 65 years of age, males, and those living in socially deprived areas.

The pathogens that cause acute pneumonia are commonly transmitted by the respiratory route. Zoonotic transmission of respiratory pathogens is uncommon. Several pathogens that can cause acute pulmonary infections are associated with animal hosts; these include Bacillus anthracis, Coxiella burnetii, Francisella tularensis, Yersinia pestis, Hantavirus, dimorphic molds, C. psittaci, and Brucella spp. A thorough history of occupation, exposure to animals, and travel should alert the clinician to consider these organisms.

What pathogens are responsible for this disease?
The most common organisms causing acute community-acquired pneumonia are:
- Respiratory viruses, such as influenza, rhinovirus, respiratory syncytial virus, human metapneumovirus, coronavirus, parainfluenza virus, adenovirus, and bocavirus.
- Streptococcus pneumoniae – the most common cause of acute community-acquired pneumonia; affects all age groups; typically causes lobar pneumonia.
- Haemophilus influenzae – more common in patients with underlying lung disease.
- Mycoplasma pneumoniae – commonly affects older children and young adults, particularly during autumn; cough may be prominent; extra-pulmonary manifestations may be seen, such as rash, and, rarely, neurological symptoms, including ataxia, aseptic meningitis, cranial nerve palsies, and encephalitis.
- Chlamydophila pneumoniae – an uncommon cause of pneumonia; affects all age groups.
- Chlamydia psittaci – occurs mainly in persons with contact with pet birds and in those employed in the poultry industry.
- Legionella spp. – L. pneumophila (serogroups 1-14), L. longbeachae, L. micdadei, and L. bozemanii cause more than 85% of cases of legionellosis, which occurs following exposure to water and soil harboring these organisms; it occurs sporadically, in clusters, or in large outbreaks, and mainly affects the elderly, smokers, the immunocompromised, and those with underlying cardiac, respiratory, and renal disease.
- Staphylococcus aureus – may follow an episode of influenza; may also occur in otherwise healthy young people in association with community-acquired methicillin-resistant strains of S. aureus; strains of S. aureus producing the Panton-Valentine leukocidin virulence factor can cause severe necrotizing pneumonia.
- Mycobacterium tuberculosis – a very uncommon cause of acute pneumonia; usually seen in patients with a chronic cough of longer than 2 weeks' duration.
- Dimorphic molds, such as Histoplasma capsulatum, Blastomyces dermatitidis, and Coccidioides immitis, can cause pulmonary infections but are rare outside of specific geographical regions.

Viral pneumonitis can be caused by a number of viruses; influenza A and B, adenovirus, human metapneumovirus, respiratory syncytial virus (especially in older adults and immunosuppressed patients), and parainfluenza virus are the most common. Infections can be mixed, either with bacteria (predominantly S. pneumoniae) or with dual viral pathogens.

How do these pathogens cause Acute Community-Acquired Pneumonia?
Particulate material and microbes are inhaled with inspired air. These tiny particles can evade the upper airway defense mechanisms and are deposited in the lower airways. Microaspiration of oral secretions can also occur. Once in the normally sterile lower airways, the alveolar lining fluid can bind a variety of organisms, including viruses, bacteria, and fungi, and decrease their virulence or enhance phagocytosis by neutrophils and alveolar macrophages. The phagocytic cells have a role in eliminating the pathogen and initiating the inflammatory response, leading to neutrophil recruitment and dendritic cell activation with production of cytokines and chemokines. Other lung parenchymal cells also help to regulate the immune response. Cell-mediated immunity is central to adaptive immune responses, and it is especially important against viruses and intracellular organisms.

What other clinical manifestations may help me to diagnose and manage Acute Community-Acquired Pneumonia?
The history should elicit symptoms consistent with pneumonia, the clinical setting in which the pneumonia has arisen, risk factors in the patient that may have led to the development of pneumonia, and possible exposures to specific pathogens. Legionellosis may be associated with gastrointestinal symptoms. Ear pain can be seen with M. pneumoniae infection. The patient with pneumonia will typically be febrile, have an elevated respiratory rate, and may have an elevated heart rate. There may be dullness on percussion, with bronchial breath sounds or rales heard on auscultation. A relative bradycardia may be seen in patients with legionellosis, psittacosis, or Q fever. Additional features seen on examination that may point to a specific pathogen include: herpes labialis, seen in up to 40% of patients with pneumococcal pneumonia; bullous myringitis, an infrequent but significant finding in mycoplasma pneumonia; poor dentition, which may suggest aspiration pneumonia; and rash, which may be seen with mycoplasma pneumonia.

What other additional laboratory findings may be ordered?
Other laboratory investigations that may provide an etiology include:
1. Blood cultures – the yield is low, but when positive it is highly likely that the isolate is the pathogen causing the pneumonia.
2. Sputum – Gram stain and culture of expectorated sputum can be performed only if a good quality sputum sample can be obtained prior to starting antimicrobials. The laboratory should apply specific criteria to assess the quality of the sputum and should not culture it if these criteria are not met.
3. Urinary antigens – available for L. pneumophila serogroup 1, S. pneumoniae, and Histoplasma capsulatum. The sensitivity and specificity for detection of L. pneumophila serogroup 1 using a lateral flow assay are 97% and 98%, respectively; for S. pneumoniae the sensitivity is 66-87% when blood or pleural fluid culture is the comparator, and the specificity is 80-100% (a worked example of how such figures translate into predictive values appears after this list). The specificity is much lower in colonised children. Testing for H. capsulatum urinary antigen should only be performed if there is a strong history of exposure.
4. Molecular testing for respiratory pathogens:
- Nasopharyngeal swab/aspirate – detection of viral and bacterial DNA/RNA using single or multiplexed PCR, or nested PCR with FilmArray assays.
- Lower respiratory tract specimens – detection of viral and bacterial DNA/RNA using single or multiplexed PCR assays.
A good understanding of the limitations of the PCR method used by the local microbiology laboratory is necessary when interpreting the results of these tests.
5. Serum transaminases may be elevated in patients with psittacosis, Q fever, or legionellosis.
6. Cold agglutinins are elevated in about 75% of patients with M. pneumoniae.
7. Serological testing – acute and convalescent sera should be collected for diagnosing M. pneumoniae, C. pneumoniae, C. psittaci, and Legionella spp. infections. A single acute sample is of limited value; a convalescent sample taken more than 2 weeks after onset of the illness is required. Seroconversion may take up to 6 weeks, especially if antimicrobial therapy is started promptly.
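As flagged under urinary antigens above, sensitivity and specificity only become clinically meaningful once combined with how likely the pathogen is in the tested population. A small sketch using the Legionella lateral-flow figures quoted above and an assumed, purely illustrative 5% pre-test probability:

```python
# Positive/negative predictive value from sensitivity, specificity and prevalence
# (Bayes' theorem). The 97%/98% figures are the lateral-flow Legionella urinary
# antigen values quoted above; the 5% pre-test probability is a made-up example.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

ppv, npv = predictive_values(sensitivity=0.97, specificity=0.98, prevalence=0.05)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # ~72% and ~99.8% at 5% prevalence
```

Even with a highly specific test, a low pre-test probability leaves an appreciable fraction of positive results false, which is one reason exposure history and clinical setting matter when ordering and interpreting these assays.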
Serum procalcitonin (PCT) levels rapidly increase in patients with invasive bacterial disease. How PCT fits into the innate immune response is poorly understood, as is whether different bacterial and viral respiratory pathogens stimulate PCT synthesis to different degrees. Some studies have shown raised levels in patients with bacterial infections compared with those of viral origin, but there has been no attempt to correlate PCT levels with the microbial etiology of the patient's respiratory tract infection. PCT showed good sensitivity and negative predictive value for bacterial pneumonia in critically ill patients during the 2009 influenza pandemic, but overall it has low sensitivity for diagnosing bacterial pneumonia, and studies have shown that the PCT result may be normal in patients with bacterial pneumonia or mixed viral and bacterial pneumonia.

What newer technologies are available for the detection of respiratory pathogens?
The 2009 H1N1 influenza A pandemic and the outbreaks associated with the newly recognized coronaviruses, SARS and MERS, have resulted in improved diagnostic tests for respiratory pathogens. Molecular tests have an increased yield over routine culture. They enable the screening of upper and lower respiratory tract specimens for a wide variety of viral and atypical bacterial pathogens and increase the microorganism detection rate to about 80% at best. It is more difficult to interpret PCR results for more typical bacterial pathogens, such as S. pneumoniae, because the same microorganisms exist in oropharyngeal flora. Quantification of the bacterial load may assist with differentiating infection from oropharyngeal contamination in sputum; currently there are few studies reporting on this approach.

How can Acute Community-Acquired Pneumonia be prevented?
Vaccination against influenza and, in some high-risk groups, against S. pneumoniae is important for preventing pneumonia.

How do you define hospital-acquired pneumonia (HAP), ventilator-associated pneumonia (VAP), and healthcare-associated pneumonia (HCAP)?
HAP is defined as pneumonia that occurs more than 48 hours after admission to hospital and was not incubating at the time of admission. VAP is defined as a type of HAP that develops more than 48 hours after endotracheal intubation. HCAP requires healthcare contact, as defined by one or more of the following: intravenous therapy, wound care, or intravenous chemotherapy during the prior 30 days; residence in a nursing home or other long-term care facility; hospitalization in an acute care hospital for 2 days or more during the prior 90 days; or attendance at a hospital or hemodialysis outpatient service during the prior 30 days.

How are hospital-acquired pneumonia (HAP), ventilator-associated pneumonia (VAP), and healthcare-associated pneumonia (HCAP) diagnosed?
The diagnosis of HAP/VAP/HCAP is suspected in a patient with a new or progressive radiologic infiltrate in the context of clinical features suggesting infection, including new onset of fever, leukocytosis, purulent sputum, and increasing oxygen requirements. The diagnosis is difficult, with most diagnoses made on clinical grounds alone (the patient has fever and a productive cough). The criterion of a radiologic infiltrate plus at least one clinical feature (i.e., fever, leukocytosis, or purulent tracheal secretions) has a high sensitivity but low specificity for HAP and VAP; combinations of signs and symptoms increase the specificity. Hospitalized patients, including those who are ventilated, are highly likely to have colonized upper airways.
Colonization precedes infection, but the routine monitoring of tracheal aspirate cultures as a means of predicting the likely pathogen is not warranted, as it has been found to be misleading in a significant proportion of cases. Antibiotic treatment of simple colonization is not recommended. The absence of a "gold standard" diagnostic test has made it difficult to interpret studies of diagnostic tests for HAP/VAP/HCAP, and prior antibiotic use or recent changes in antibiotic treatment make cultures difficult to interpret. The etiologic diagnosis requires culture of lower respiratory tract specimens: endotracheal aspirates, bronchoalveolar lavage, or protected specimen brush samples. Blood cultures may be positive in up to 25% of cases of HAP, and positive results may not necessarily be directly attributable to the HAP. Pleural fluid may occasionally be cultured but is seldom useful for determining the etiology. A sterile culture from lower respiratory tract specimens has a high negative predictive value and can be used to exclude HAP/VAP/HCAP as a cause of fever. Also, the absence of multiple-antibiotic-resistant organisms from these cultures is strong evidence that they are not causing infection, and narrowing of the antibiotic spectrum can occur. Quantitative cultures of lower respiratory tract secretions may be useful in sorting out colonization from true infection, but they are not routinely performed by most diagnostic laboratories. Culture of lower respiratory tract specimens for viruses has a low yield; PCR assays for respiratory viruses can be performed on upper and lower respiratory tract specimens. Hospital transmission of respiratory viruses is well described, particularly in immunocompromised patient groups. Legionellosis is an under-recognised cause of both HAP and HCAP; the diagnosis should be considered in patients with exposure to potable water and with risk factors for micro-aspiration.

Should all patients with healthcare-associated pneumonia receive empiric treatment with broad spectrum antimicrobials?
The 2005 American Thoracic Society guideline for the management of adults with hospital-acquired, ventilator-associated, and healthcare-associated pneumonia recommends early, appropriate, broad spectrum antimicrobial therapy prescribed at an appropriate dose. Combination therapy should be used judiciously, and a short course of beta-lactam and aminoglycoside therapy may be used to treat Pseudomonas aeruginosa infections. De-escalation should occur in response to microbiology culture results. Patients with HCAP are a heterogeneous population, and not all patients share the same risks for severe pneumonia. The main driver for the use of empiric broad spectrum antibiotics in this group is to provide adequate cover for multiple-antibiotic-resistant organisms. Local epidemiological data should be used to determine whether this approach is warranted in ambulatory patients, those from residential care facilities, and those using outpatient hemodialysis services. Monotherapy can be used in patients in this group who present with mild to moderate pneumonia.

Should combination therapy be routinely used for confirmed or suspected Gram-negative HAP/VAP/HCAP?
Combination therapy in this situation, usually with a beta-lactam and an aminoglycoside, is recommended in some guidelines. The justification for this approach is to achieve synergy against Pseudomonas aeruginosa. However, synergy has only been demonstrated in vitro and in patients with neutropenia or concurrent bacteremia.
Combination therapy is also proposed as a means of avoiding the development of resistance during treatment. Emergence of resistance is a common phenomenon when P. aeruginosa is treated with monotherapy and when Enterobacter species are treated with third-generation cephalosporins. However, prevention of the development of resistance by combination therapy has been poorly documented. Another justification is to provide broad spectrum empiric treatment likely to include at least one drug active against multiple-antibiotic-resistant organisms; this is only a concern if such resistance is a major issue. If combination therapy is used, its ongoing use should be reviewed early in the course of treatment, and de-escalation to monotherapy should occur if the patient is clinically improving.

What is the duration of antibiotics for HAP/VAP/HCAP?

Significant clinical improvement should be observed over the first 3-5 days of therapy. The standard duration of treatment is 7-8 days for most pathogens and longer (usually 14 days) for non-fermenting Gram-negative bacteria, such as P. aeruginosa, Acinetobacter, and Stenotrophomonas maltophilia. Longer courses of treatment are associated with the emergence of drug resistance.
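As the worked example promised earlier: the short Python sketch below shows what "high sensitivity but low specificity" means for the radiologic-plus-clinical criteria, computing sensitivity, specificity, and predictive values from a 2x2 table. The patient counts are invented purely for illustration and are not taken from this article or from any study.

def test_performance(tp, fp, fn, tn):
    # Standard definitions from a 2x2 table of test result vs. true disease status.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)   # positive predictive value
    npv = tn / (tn + fn)   # negative predictive value
    return sensitivity, specificity, ppv, npv

# Assumed cohort: 100 patients with pneumonia, 100 without.
tp, fn = 90, 10   # criteria positive / negative among patients with pneumonia
fp, tn = 60, 40   # criteria positive / negative among patients without pneumonia

sens, spec, ppv, npv = test_performance(tp, fp, fn, tn)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, PPV {ppv:.2f}, NPV {npv:.2f}")

With these hypothetical numbers the criteria miss few true cases (sensitivity 0.90) but label many uninfected patients as having pneumonia (specificity 0.40), which is exactly why combinations of signs and symptoms are needed to improve specificity.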
Deformation in continuum mechanics is the transformation of a body from a reference configuration to a current configuration. A configuration is a set containing the positions of all particles of the body. Strain is a description of deformation in terms of the relative displacement of particles in the body that excludes rigid-body motions. Different equivalent choices may be made for the expression of a strain field depending on whether it is defined with respect to the initial or the final configuration of the body and on whether the metric tensor or its dual is considered.

In a continuous body, a deformation field results from a stress field induced by applied forces or from changes in the temperature field inside the body. The relation between stresses and induced strains is expressed by constitutive equations, e.g., Hooke's law for linear elastic materials. Deformations which are recovered after the stress field has been removed are called elastic deformations. In this case, the continuum completely recovers its original configuration. On the other hand, irreversible deformations remain even after stresses have been removed. One type of irreversible deformation is plastic deformation, which occurs in material bodies after stresses have attained a certain threshold value known as the elastic limit or yield stress, and which is the result of slip or dislocation mechanisms at the atomic level. Another type of irreversible deformation is viscous deformation, which is the irreversible part of viscoelastic deformation. In the case of elastic deformations, the response function linking strain to the deforming stress is the compliance tensor of the material.

A strain is a measure of deformation representing the displacement between particles in the body relative to a reference length. A general deformation of a body can be expressed in the form x = F(X), where X is the reference position of material points in the body. Such a measure does not distinguish between rigid-body motions (translations and rotations) and changes in shape (and size) of the body, and it has units of length. We could, for example, define strain to be

$$\boldsymbol{\varepsilon} \doteq \frac{\partial}{\partial \mathbf{X}}\left(\mathbf{x} - \mathbf{X}\right) = \boldsymbol{F}' - \boldsymbol{I},$$

where I is the identity tensor and F' = ∂x/∂X is the deformation gradient. Hence strains are dimensionless and are usually expressed as a decimal fraction, a percentage, or in parts-per notation. Strains measure how much a given deformation differs locally from a rigid-body deformation.

A strain is in general a tensor quantity. Physical insight into strains can be gained by observing that a given strain can be decomposed into normal and shear components. The amount of stretch or compression along material line elements or fibers is the normal strain, and the amount of distortion associated with the sliding of plane layers over each other is the shear strain within a deforming body. Strain can be produced by elongation, shortening, volume change, or angular distortion.

The state of strain at a material point of a continuum body is defined as the totality of all the changes in length of material lines or fibers passing through that point (the normal strain), together with the totality of all the changes in the angle between pairs of lines initially perpendicular to each other and radiating from that point (the shear strain). However, it is sufficient to know the normal and shear components of strain on a set of three mutually perpendicular directions.
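A minimal NumPy sketch (illustrative, not from the source text) of the point just made, that a useful strain measure must exclude rigid-body motion: the naive measure F' - I does not vanish for a pure rotation, whereas the Green-Lagrange strain E = 1/2 (F^T F - I), introduced further below, does.

import numpy as np

theta = np.deg2rad(30.0)                      # rigid rotation of 30 degrees about e3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

I = np.eye(3)
naive_strain = R - I                          # nonzero even though nothing has deformed
green_strain = 0.5 * (R.T @ R - I)            # exactly zero for any rigid rotation

print(np.round(naive_strain, 6))
print(np.round(green_strain, 6))              # prints the zero matrix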
If there is an increase in length of the material line, the normal strain is called tensile strain; otherwise, if there is reduction or compression in the length of the material line, it is called compressive strain. Depending on the amount of strain, or local deformation, the analysis of deformation is subdivided into three deformation theories:

- Finite strain theory, also called large strain theory or large deformation theory, deals with deformations in which both rotations and strains are arbitrarily large. In this case, the undeformed and deformed configurations of the continuum are significantly different and a clear distinction has to be made between them. This is commonly the case with elastomers, plastically deforming materials, other fluids, and biological soft tissue.
- Infinitesimal strain theory, also called small strain theory, small deformation theory, small displacement theory, or small displacement-gradient theory, applies where strains and rotations are both small. In this case, the undeformed and deformed configurations of the body can be assumed identical. The infinitesimal strain theory is used in the analysis of deformations of materials exhibiting elastic behavior, such as materials found in mechanical and civil engineering applications, e.g. concrete and steel.
- Large-displacement or large-rotation theory, which assumes small strains but large rotations and displacements.

In each of these theories the strain is then defined differently. The engineering strain is the most common definition applied to materials used in mechanical and structural engineering, which are subjected to very small deformations. On the other hand, for some materials subjected to large deformations, e.g. elastomers and polymers, the engineering definition of strain is not applicable (for example, at typical engineering strains greater than 1%), so other, more complex definitions of strain are required, such as stretch, logarithmic strain, Green strain, and Almansi strain.

The Cauchy strain or engineering strain is expressed as the ratio of total deformation to the initial dimension of the material body to which the forces are applied. The engineering normal strain, engineering extensional strain, or nominal strain e of an axially loaded material line element or fiber is expressed as the change in length ΔL per unit of the original length L of the line element or fiber. The normal strain is positive if the material fibers are stretched and negative if they are compressed. Thus, we have

$$e = \frac{\Delta L}{L} = \frac{l - L}{L},$$

where e is the engineering normal strain, L is the original length of the fiber, and l is the final length of the fiber. Measures of strain are often expressed in parts per million or microstrains.

The true shear strain is defined as the change in the angle (in radians) between two material line elements initially perpendicular to each other in the undeformed or initial configuration. The engineering shear strain is defined as the tangent of that angle, and is equal to the length of deformation at its maximum divided by the perpendicular length in the plane of force application, which sometimes makes it easier to calculate.

The stretch ratio or extension ratio is a measure of the extensional or normal strain of a differential line element, which can be defined with respect to either the undeformed configuration or the deformed configuration. It is defined as the ratio between the final length l and the initial length L of the material line:

$$\lambda = \frac{l}{L}.$$
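As a quick numerical illustration of the engineering strain and stretch ratio just defined, the short Python sketch below evaluates them for a fiber stretched from 100 mm to 103 mm (the lengths are assumed values chosen only for illustration).

L = 100.0   # original length of the fiber, mm (assumed)
l = 103.0   # final length of the fiber, mm (assumed)

e = (l - L) / L        # engineering (nominal) normal strain, dimensionless
stretch = l / L        # stretch ratio lambda

print(f"engineering strain e = {e:.4f} "
      f"({e * 100:.1f} %, or {e * 1e6:.0f} microstrain)")
print(f"stretch ratio lambda = {stretch:.4f}")   # equals 1 + e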
The extension ratio is related to the engineering strain by

$$\lambda = \frac{l}{L} = \frac{L + \Delta L}{L} = 1 + e.$$

This equation implies that the normal strain is zero, so that there is no deformation, when the stretch is equal to unity. The stretch ratio is used in the analysis of materials that exhibit large deformations, such as elastomers, which can sustain stretch ratios of 3 or 4 before they fail. On the other hand, traditional engineering materials, such as concrete or steel, fail at much lower stretch ratios.

The logarithmic strain ε, also called true strain or Hencky strain (although nothing is particularly "true" about it compared to other valid definitions of strain), is built up from an incremental strain (Ludwik)

$$\delta\varepsilon = \frac{\delta l}{l}.$$

The logarithmic strain is obtained by integrating this incremental strain:

$$\varepsilon = \int_{L}^{l} \frac{dl}{l} = \ln\!\left(\frac{l}{L}\right) = \ln(\lambda) = \ln(1 + e),$$

where e is the engineering strain. The logarithmic strain provides the correct measure of the final strain when deformation takes place in a series of increments, taking into account the influence of the strain path.

The Green strain is defined as

$$\varepsilon_G = \frac{1}{2}\left(\frac{l^2 - L^2}{L^2}\right) = \frac{1}{2}\left(\lambda^2 - 1\right).$$

The Euler-Almansi strain is defined as

$$\varepsilon_E = \frac{1}{2}\left(\frac{l^2 - L^2}{l^2}\right) = \frac{1}{2}\left(1 - \frac{1}{\lambda^2}\right).$$

Normal and shear strain

Strains are classified as either normal or shear. A normal strain is perpendicular to the face of an element, and a shear strain is parallel to it. These definitions are consistent with those of normal stress and shear stress.

Consider a two-dimensional, infinitesimal, rectangular material element with dimensions dx × dy, which, after deformation, takes the form of a rhombus. From the geometry of the deformed element (shown in the original article's figure), the length of the side that was initially dx becomes

$$\overline{ab} = \sqrt{\left(dx + \frac{\partial u_x}{\partial x}\,dx\right)^2 + \left(\frac{\partial u_y}{\partial x}\,dx\right)^2}.$$

For very small displacement gradients the squares of the derivatives are negligible and we have

$$\overline{ab} \approx dx + \frac{\partial u_x}{\partial x}\,dx.$$

The normal strain in the x-direction of the rectangular element is defined by

$$\varepsilon_x = \frac{\overline{ab} - dx}{dx} = \frac{\partial u_x}{\partial x}.$$

Similarly, the normal strain in the y- and z-directions becomes

$$\varepsilon_y = \frac{\partial u_y}{\partial y}, \qquad \varepsilon_z = \frac{\partial u_z}{\partial z}.$$

(The original article's infobox for shear strain listed the common symbols γ or ε, the SI unit 1 or radian, and the relation γ = τ/G to the shear stress τ and shear modulus G.)

The engineering shear strain (γxy) is defined as the change in angle between lines AC and AB. Therefore,

$$\gamma_{xy} = \alpha + \beta.$$

From the geometry of the figure, we have

$$\tan\alpha = \frac{\dfrac{\partial u_y}{\partial x}\,dx}{dx + \dfrac{\partial u_x}{\partial x}\,dx}, \qquad \tan\beta = \frac{\dfrac{\partial u_x}{\partial y}\,dy}{dy + \dfrac{\partial u_y}{\partial y}\,dy}.$$

For small displacement gradients we have

$$\frac{\partial u_x}{\partial x} \ll 1, \qquad \frac{\partial u_y}{\partial y} \ll 1,$$

and for small rotations, i.e. α and β ≪ 1, we have tan α ≈ α, tan β ≈ β. Therefore,

$$\alpha \approx \frac{\partial u_y}{\partial x}, \qquad \beta \approx \frac{\partial u_x}{\partial y},
\qquad\text{so}\qquad
\gamma_{xy} = \frac{\partial u_y}{\partial x} + \frac{\partial u_x}{\partial y}.$$

By interchanging x and y and ux and uy, it can be shown that γxy = γyx. Similarly, for the yz- and xz-planes, we have

$$\gamma_{yz} = \frac{\partial u_y}{\partial z} + \frac{\partial u_z}{\partial y}, \qquad \gamma_{zx} = \frac{\partial u_z}{\partial x} + \frac{\partial u_x}{\partial z}.$$

The tensorial shear strain components of the infinitesimal strain tensor can then be expressed using the engineering strain definition, γ, as

$$\varepsilon_{xy} = \frac{\gamma_{xy}}{2}, \qquad \varepsilon_{yz} = \frac{\gamma_{yz}}{2}, \qquad \varepsilon_{zx} = \frac{\gamma_{zx}}{2}.$$

A strain field associated with a displacement is defined, at any point, by the change in length of the tangent vectors representing the speeds of arbitrarily parametrized curves passing through that point. A basic geometric result, due to Fréchet, von Neumann and Jordan, states that, if the lengths of the tangent vectors fulfil the axioms of a norm and the parallelogram law, then the length of a vector is the square root of the value of the quadratic form associated, by the polarization formula, with a positive definite bilinear map called the metric tensor.
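Before moving on to the description of deformation, here is a short numerical check of the normal and shear strain formulas derived above. It is a NumPy sketch with assumed displacement-gradient values, not an example from the source: the normal strains are the diagonal entries of the symmetrized gradient, the engineering shear strain is the sum of the off-diagonal gradients, and the tensorial shear strain is half of it.

import numpy as np

# Assumed displacement gradient for a 2-D element:
# rows are the components (u_x, u_y), columns the derivative directions (d/dx, d/dy).
grad_u = np.array([[0.0010,  0.0004],
                   [0.0002, -0.0005]])

eps = 0.5 * (grad_u + grad_u.T)         # infinitesimal strain tensor

eps_x, eps_y = eps[0, 0], eps[1, 1]     # normal strains = du_x/dx, du_y/dy
gamma_xy = grad_u[0, 1] + grad_u[1, 0]  # engineering shear strain
eps_xy = 0.5 * gamma_xy                 # tensorial shear strain

print("normal strains:", eps_x, eps_y)
print("engineering shear strain gamma_xy =", gamma_xy)
print("tensorial shear strain eps_xy     =", eps_xy, "(equals eps[0,1] =", eps[0, 1], ")")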
Description of deformation

Deformation is the change in the metric properties of a continuous body, meaning that a curve drawn in the initial body placement changes its length when displaced to a curve in the final placement. If none of the curves changes length, it is said that a rigid body displacement occurred.

It is convenient to identify a reference configuration or initial geometric state of the continuum body from which all subsequent configurations are referenced. The reference configuration need not be one that the body ever actually occupies. Often, the configuration at t = 0 is considered the reference configuration, κ0(B). The configuration at the current time t is the current configuration. For deformation analysis, the reference configuration is identified as the undeformed configuration, and the current configuration as the deformed configuration. Additionally, time is not considered when analyzing deformation, so the sequence of configurations between the undeformed and deformed configurations is of no interest.

The components Xi of the position vector X of a particle in the reference configuration, taken with respect to the reference coordinate system, are called the material or reference coordinates. On the other hand, the components xi of the position vector x of a particle in the deformed configuration, taken with respect to the spatial coordinate system of reference, are called the spatial coordinates.

There are two methods for analysing the deformation of a continuum. One description is made in terms of the material or referential coordinates and is called the material description or Lagrangian description. A second description of deformation is made in terms of the spatial coordinates; it is called the spatial description or Eulerian description. There is continuity during deformation of a continuum body in the sense that:
- The material points forming a closed curve at any instant will always form a closed curve at any subsequent time.
- The material points forming a closed surface at any instant will always form a closed surface at any subsequent time, and the matter within the closed surface will always remain within.

A deformation is called an affine deformation if it can be described by an affine transformation. Such a transformation is composed of a linear transformation (such as rotation, shear, extension and compression) and a rigid body translation. Affine deformations are also called homogeneous deformations. Therefore, an affine deformation has the form

$$\mathbf{x}(\mathbf{X}, t) = \boldsymbol{F}(t)\,\mathbf{X} + \mathbf{c}(t),$$

where x is the position of a point in the deformed configuration, X is the position in a reference configuration, t is a time-like parameter, F is the linear transformation, and c is the translation. In matrix form, where the components are with respect to an orthonormal basis,

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} F_{11}(t) & F_{12}(t) & F_{13}(t) \\ F_{21}(t) & F_{22}(t) & F_{23}(t) \\ F_{31}(t) & F_{32}(t) & F_{33}(t) \end{bmatrix}
\begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix} +
\begin{bmatrix} c_1(t) \\ c_2(t) \\ c_3(t) \end{bmatrix}.$$

The above deformation becomes non-affine or inhomogeneous if F = F(X,t) or c = c(X,t).

Rigid body motion

A rigid body motion is a special affine deformation that does not involve any shear, extension or compression. The transformation matrix F is proper orthogonal in order to allow rotations but no reflections. A rigid body motion can be described by

$$\mathbf{x}(\mathbf{X}, t) = \boldsymbol{Q}(t)\,\mathbf{X} + \mathbf{c}(t), \qquad \boldsymbol{Q}\,\boldsymbol{Q}^{T} = \boldsymbol{Q}^{T}\boldsymbol{Q} = \boldsymbol{I}.$$

In matrix form,

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} Q_{11}(t) & Q_{12}(t) & Q_{13}(t) \\ Q_{21}(t) & Q_{22}(t) & Q_{23}(t) \\ Q_{31}(t) & Q_{32}(t) & Q_{33}(t) \end{bmatrix}
\begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix} +
\begin{bmatrix} c_1(t) \\ c_2(t) \\ c_3(t) \end{bmatrix}.$$

A change in the configuration of a continuum body results in a displacement. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration κ0(B) to a current or deformed configuration κt(B). If after a displacement of the continuum there is a relative displacement between particles, a deformation has occurred. On the other hand, if after displacement of the continuum the relative displacement between particles in the current configuration is zero, then there is no deformation and a rigid-body displacement is said to have occurred.
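The distinction just drawn between an affine deformation and a rigid-body motion can be checked numerically. The sketch below (assumed values; it assumes NumPy is available and is not from the source) applies x = F X + c to a few reference points and tests whether F is proper orthogonal, i.e. F^T F = I with det F = +1.

import numpy as np

def deform(F, c, X):
    """Map reference positions X (one point per row) to current positions x = F X + c."""
    return X @ F.T + c

def is_rigid(F, tol=1e-10):
    return (np.allclose(F.T @ F, np.eye(3), atol=tol)
            and np.isclose(np.linalg.det(F), 1.0, atol=tol))

theta = np.deg2rad(20.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # a rotation
S = np.diag([1.2, 0.9, 1.0])                           # a stretch, so R @ S is a genuine deformation
c = np.array([1.0, -2.0, 0.5])                         # translation

X = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

print(deform(R, c, X))                  # rigid-body motion: the triangle's shape is unchanged
print(is_rigid(R), is_rigid(R @ S))     # True, False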
The vector joining the positions of a particle P in the undeformed configuration and the deformed configuration is called the displacement vector, u(X,t) = ui ei in the Lagrangian description, or U(x,t) = UJ EJ in the Eulerian description. A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. It is convenient to do the analysis of deformation or motion of a continuum body in terms of the displacement field. In general, the displacement field is expressed in terms of the material coordinates as

$$\mathbf{u}(\mathbf{X}, t) = \mathbf{b} + \mathbf{x}(\mathbf{X}, t) - \mathbf{X}, \qquad u_i = \alpha_{iJ}\,b_J + x_i - \alpha_{iJ}\,X_J,$$

or in terms of the spatial coordinates as

$$\mathbf{U}(\mathbf{x}, t) = \mathbf{b} + \mathbf{x} - \mathbf{X}(\mathbf{x}, t), \qquad U_J = b_J + \alpha_{Ji}\,x_i - X_J,$$

where αJi are the direction cosines between the material and spatial coordinate systems with unit vectors EJ and ei, respectively. Thus

$$\mathbf{E}_J \cdot \mathbf{e}_i = \alpha_{Ji} = \alpha_{iJ},$$

and the relationship between ui and UJ is then given by

$$u_i = \alpha_{iJ}\,U_J, \qquad U_J = \alpha_{Ji}\,u_i.$$

It is common to superimpose the coordinate systems for the undeformed and deformed configurations, which results in b = 0, and the direction cosines become Kronecker deltas:

$$\mathbf{E}_J \cdot \mathbf{e}_i = \delta_{Ji} = \delta_{iJ}.$$

Thus, we have

$$\mathbf{u}(\mathbf{X}, t) = \mathbf{x}(\mathbf{X}, t) - \mathbf{X}, \qquad u_i = x_i - X_i,$$

or in terms of the spatial coordinates as

$$\mathbf{U}(\mathbf{x}, t) = \mathbf{x} - \mathbf{X}(\mathbf{x}, t), \qquad U_J = x_J - X_J.$$

Displacement gradient tensor

The partial differentiation of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor ∇XU. Thus we have

$$\nabla_{\mathbf{X}}\mathbf{u} = \nabla_{\mathbf{X}}\mathbf{x} - \boldsymbol{I} = \boldsymbol{F} - \boldsymbol{I},$$

where F is the deformation gradient tensor. Similarly, the partial differentiation of the displacement vector with respect to the spatial coordinates yields the spatial displacement gradient tensor ∇xU. Thus we have

$$\nabla_{\mathbf{x}}\mathbf{U} = \boldsymbol{I} - \nabla_{\mathbf{x}}\mathbf{X} = \boldsymbol{I} - \boldsymbol{F}^{-1}.$$

Examples of deformations

Homogeneous (or affine) deformations are useful in elucidating the behavior of materials. Some homogeneous deformations of interest are
- uniform extension
- pure dilation
- simple shear
- pure shear

Plane deformations are also of interest, particularly in the experimental context. A plane deformation, also called plane strain, is one where the deformation is restricted to one of the planes in the reference configuration. If the deformation is restricted to the plane described by the basis vectors e1, e2, the deformation gradient has the form

$$\boldsymbol{F} = F_{11}\,\mathbf{e}_1\otimes\mathbf{e}_1 + F_{12}\,\mathbf{e}_1\otimes\mathbf{e}_2 + F_{21}\,\mathbf{e}_2\otimes\mathbf{e}_1 + F_{22}\,\mathbf{e}_2\otimes\mathbf{e}_2 + \mathbf{e}_3\otimes\mathbf{e}_3.$$

In matrix form,

$$\boldsymbol{F} = \begin{bmatrix} F_{11} & F_{12} & 0 \\ F_{21} & F_{22} & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

From the polar decomposition theorem, the deformation gradient, up to a change of coordinates, can be decomposed into a stretch and a rotation. Since all the deformation is in a plane, we can write

$$\boldsymbol{F} = \boldsymbol{R}\,\boldsymbol{U} =
\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & 1 \end{bmatrix},$$

where θ is the angle of rotation and λ1, λ2 are the principal stretches.

Isochoric plane deformation

If the deformation is isochoric (volume preserving) then det(F) = 1 and we have

$$F_{11}F_{22} - F_{12}F_{21} = 1.$$

A simple shear deformation is defined as an isochoric plane deformation in which there is a set of line elements with a given reference orientation that do not change length and orientation during the deformation. If e1 is the fixed reference orientation in which line elements do not deform during the deformation, then λ1 = 1 and F·e1 = e1. Therefore,

$$F_{11} = 1, \qquad F_{21} = 0.$$

Since the deformation is isochoric,

$$F_{11}F_{22} - F_{12}F_{21} = 1 \quad\Longrightarrow\quad F_{22} = 1.$$

Then, writing γ for the remaining component F12, the deformation gradient in simple shear can be expressed as

$$\boldsymbol{F} = \begin{bmatrix} 1 & \gamma & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},$$

and we can also write the deformation gradient as

$$\boldsymbol{F} = \boldsymbol{I} + \gamma\,\mathbf{e}_1\otimes\mathbf{e}_2.$$
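To tie the last few results together, here is a small Python sketch (assumed value of γ; it assumes NumPy and SciPy are installed and is not part of the source) that builds the simple-shear deformation gradient F = I + γ e1⊗e2, forms the material displacement gradient F - I, checks that the deformation is isochoric (det F = 1), and recovers the rotation and stretch parts via a polar decomposition.

import numpy as np
from scipy.linalg import polar

gamma = 0.4
F = np.eye(3) + gamma * np.outer([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])

grad_u = F - np.eye(3)                 # material displacement gradient
print("det F =", np.linalg.det(F))     # 1.0 -> volume preserving (isochoric)

R, U = polar(F, side='right')          # F = R @ U, R proper orthogonal, U symmetric positive definite
print("rotation part R:\n", np.round(R, 4))
print("stretch part U:\n", np.round(U, 4))
print("check F = R U:", np.allclose(F, R @ U))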
LB701: Insects Live in an Infrared World Ben Franklin Centre for Theoretical Research PO Box 27, Subiaco, WA 6008, Australia. How Animals Smell We know a fair amount about how an animal's sense of smell operates, although there is still much to be discovered. The olfactory system, or sense of smell, is the part of the sensory system used for smelling (olfaction). Most mammals and reptiles have a main olfactory system and an accessory olfactory system. The main olfactory system detects airborne substances . Fig. LB701-F1. The mechanism of smell. From . Basically, smelling occurs when molecules of certain substances (for convenience called "odour molecules") come into contact with the mucous lining of the nostrils (the olfactory epithelium). This surface contains "olfactory receptor cells" which may be activated by odour molecules to fire off electrical signals along nerve cells connected with the brain. The Mucous Lining The mucous lining, the layer of mucus which coats the inside of the nostrils and some adjacent parts (also called the olfactory membrane), is one of the 2 vital parts of the smell system. It is not a minor part of the metabolism; every day, the average human body produces over 1 kilogram of nasal mucus. Mucus is made by mucosal glands that line the body's respiratory tract, which includes the nose, the throat and the lungs . When things are working properly, your body is pretty good at getting rid of it. The mucus in your nose, for example, is moved to the back of the nasal passages and then into the throat by tiny hairs on nasal cells called cilia. And from there, you gulp it down. That's right -- you're swallowing your snot all day, every day. You just don't notice it . And when you consider that an average human takes in about 2 kg of food and about 2 litres of liquid a day, mucus production is a major activity. Of course, the mucus is recycled, it's not used up. For a human, the mucous lining is a very important part. We know that it is vital for smelling -- in my view, it is also very important for protection against invading germs and the such, a part of the immune system. The mammalian smell system has some similarities with a method of chemical analysis called Gas-Liquid Chromatography. In GLC, an inert gas such as nitrogen is fed through a system of columns which have liquid or semi-liquid linings, similar in principle to those in the nasal passages. Fig. LB701-F2. Liquid-lined columns and inert-gas flow in a gas-liquid chromatography setup. From . When the GLC setup has a steady flow of inert gas passing through it, a small sample of the substance to be analyzed (usually a mixture) is injected into the flow with a syringe. The essential feature of GLC, and of other chromatography techniques, is that different molecules pass through the setup at different rates -- movement through the liquid linings "smears out" the different components over time. The timing in arrival of different components at the outlet of the GLC can be detected and recorded with suitable instruments. Generally speaking, for a given GLC setup, the time of arrival of a given component will be the same, so the setup will give the amount of each component with an identified signal. Obviously, the nature of the liquid in the columns and the nature and speed of the carrier gas can be varied to pick up different ranges of components. Fig. LB701-F3. Recorder response pattern from a GLC setup. From . A typical recorder pattern from a GLC is as shown. 
In this setup, some components appeared at the outlet within 5 minutes of injection, others took more than 25 minutes to appear. Generally, volatile (light) substances pass more rapidly. In this example, about 20 different major components and a similar number of minor components were detected in the mixture. How Smelling is analyzed in the Brain While a GLC analyzer has just one detector, the mechanism by which odours are analyzed in the brain is enormously more complex. Signals from the scent receptors in the mucous lining go first to the two Olfactory Bulbs which lie above the nostrils. Fig. LB701-F4. The Olfactory Bulb. From . A lot of processing of these signals takes place in the Olfactory Bulbs. The outputs from this processing are then passed on to other parts of the brain, some for immediate action, others for longer-term analysis and recording (memory). In higher mammals, some immediate reactions to smell are triggered in the more primitive part of the brain, and would be classed as instinctive. The nature of these connections from the olfactory bulbs to other parts of the brain is quite involved. According to , the olfactory bulb connects to numerous areas of the amygdala, thalamus, hypothalamus, hippocampus, brain stem, retina, auditory cortex, and olfactory system. In total it has 27 inputs and 20 outputs. So the decoding of smells is a complex process, perhaps as complex as that of sight. While some reactions may be thought of as "built in", as when a baby antelope shies away from the scent of a lion it has never seen, other reactions will be learned through experience, as good or bad. For some types of animal, such as dogs, smell can be vitally important. Here are some extracts from The Dog's Amazing Nose! . "The Olfactory Bulb is a bulb of neural tissue within the dog's brain. It is located in the fore-brain and is responsible for processing scents detected by cells in the nasal cavity. It is approximately 40 times larger in dogs than in humans, relative to total brain size. A human's brain is dominated by a large visual cortex whilst a dog's brain is dominated by the olfactory cortex. The Olfactory Bulb accounts for one eighth of the dog's brain. The Olfactory Bulb is extremely important to the dog due to its function of processing scent. Scent information travels from the Olfactory Bulb to the limbic system, which is the most primitive part of the brain (dealing with emotions, memory and behaviour). It also travels to the cortex (the cortex is the outer part of the brain that has to do with conscious thought). Because olfactory information goes to both the primitive and complex part of the brain it affects the dog's actions in more ways than we may think. A dog's sense of smell is probably more important to it than any other sense, with the possible exception of touch. The sense of smell and the sense of touch are the predominant senses for a dog and they are in place and fully functioning at birth, unlike hearing and sight, which develop later, and taste, which although present at birth and connected to smell, takes a back seat. A dog has around 220 million scent receptors in his nose -- that's 44 times the number of receptors in our own human nose. The bloodhound exceeds this standard with nearly 300 million scent receptors!" Bloodhounds, the Kings of Smell It will be apparent that for a better sense of smell, you need a bigger area of olfactory membrane containing more receptor cells, and bigger olfactory bulbs to process the increased amount of information. 
This is clear when the dog breed with the best sense of smell, the bloodhound, is compared with other breeds, or with humans. Here are some extracts from . "The back of a dog's nasal cavity contains a membrane called olfactory mucosa. The olfactory mucosa membrane helps trap scents. The bigger the nose, the bigger the membrane. The membrane's size varies among breeds, from 45 cm2 to almost 390 cm2. Once the scent molecules are trapped by the olfactory mucosa, smell- or scent-detecting cells process the scent molecules and send the information to the brain". Fig. LB701-F5. The bloodhound is famous for his developed sense of smell. From . "The bigger the dog's nose, the more smell-detecting cells it contains. The best noses for smell-detecting activity are long, wide noses because they can hold the most scent-detecting cells. The size of the dog doesn't matter as much as the size of the nose. A beagle, for example, has just as many smell-detecting cells as a German shepherd. The top scent-smelling dog is the bloodhound, a breed with a large and wide nose. That breed has 300 million scent-detecting cells, which is why bloodhounds have traditionally been used as hunting companions and to track humans both in search-and-rescue operations and to catch criminals. Besides the long, wide nose that helps the bloodhound pick up scents easily, the long neck allows the breed to follow a scent with the nose to the ground without becoming fatigued in the shoulders. The bloodhound has the most scent-detecting cells. His nose might not be the longest of all the breeds, but it is the most massive; it's long and wide. Combine that with the droopy ears that sometimes act to direct odors to the nose during tracking and trailing and with the neck that allows the bloodhound to remain with his nose to the ground for a long time, and you have a smelling machine. Compare the bloodhound with the German shepherd, who has 225 million scent-detecting cells, and the dachshund, with 125 million. People have only 5 million of them. Even a flat-nosed dog has a better sense of smell than humans, and likely has close to 100 million scent-detecting cells". How Insects detect "Odours" Insect physiology is totally unlike that of a mammal, but insects too can apparently detect substances at considerable distances. Under the topic "Insect Pheromones", cases are described whereby male insects can "pick up the scent" of a female insect as far as 11 kilometres away . Moreover, the amount of scent produced is generally stated to be extremely little. In it notes that "Moths are popularly characterized by two remarkable traits associated with chemical communication in a sexual context. First is the apparent ability of males to detect and respond to female sex pheromones over impressively long distances, including one anecdotal report of 11 km in an emperor moth, even though females typically produce very small quantities of sex pheromone in the order of nanograms or even picograms". (A nanogram is a billionth of a gram, and a picogram is a trillionth of a gram). So, how do insects manage such long-range detection? In an article How do Insects Smell?, Debbie Hadley gives a typical explanation . "Insects don't have noses the way mammals do, but that doesn't mean they don't smell things. Insects are able to detect chemicals in the air using their antennae or other sense organs. An insect's acute sense of smell enables it to find mates, locate food, avoid predators, and even gather in groups. 
Some insects rely on chemical cues to find their way to and from a nest, or to space themselves appropriately in a habitat with limited resources. Insects produce semiochemicals, or odor signals, to interact with one another. Insects actually use scents to communicate with each other. These chemicals send information on how to behave to the insect's nervous system. Plants also emit pheromone cues which dictate insect behaviors. In order to navigate such a scent-filled environment, insects require a fairly sophisticated system of odor detection". So some insects have an extremely powerful detection system, much more powerful than anything known in the animal world. How is this possible? Let's look at the standard explanation, and see what bits are reasonable, and what bits defy commonsense. First, insect detection systems are usually tied up with possession of elaborate antennas. There seems no doubt that the properties of their antennas, which may be extremely elaborate, govern insects' abilities to pick up signals from afar. In it mentions that "males of many species have beautiful and conspicuous feathery antennae". Fig. LB701-F6. Some types of insect antenna. From . Some scans of insect antennas have been made at very high magnification, using electron microscopes. These antennas may have intricate detail, with blends of long and short "hairs" of various thicknesses. Below is shown some of the detailed structure of a mosquito antenna. Fig. LB701-F7. Electron-microscope scans of mosquito antennas. From . How Antennas work While antennas are certainly at the base of insect sensing systems, these clearly are totally unrelated to the smell detection systems of mammals. To obtain good smell detection, mammals have developed two main parts, a mucous membrane containing huge numbers of smell receptors, and olfactory lobes to process signals from the smell receptors and pass on processed outputs to other parts of the brain. Neither of these parts is present in any way in insects. Insect signal detection very obviously must operate with quite different mechanisms. We are familiar with the term "antenna" in radio and telecommunication systems, where antennas are used both to send out broadcast or beamed electromagnetic signals, and to receive such signals at a point of use. It is a feature of antennas that they have physical components of similar size to the wavelengths of the electromagnetic waves they handle. That is why, for example, home television receiving antennas have become smaller as television stations move to shorter-wavelength signals. And so also with insect antennas. Their complex and varying parts are the right size to pick up infrared electromagnetic radiation, radiation of longer wavelength and lower energy than the red light which human eyes can detect. While Dogs live largely in a world of smell, and Humans in a world of visible light, Insects live in an infrared world. Once the concept of Insect Infrared Sensing (IIS) is grasped, so much of what has been puzzling in the past becomes clear. On the old Pheromone Model, a tiny production of a chemical compound could be "smelled" up to 11 kilometres away. And smelled by a creature without the sensitive mucous membrane and brainpower of a mammal. But in the Infrared Sensing Model, information is being passed by electromagnetic waves, like light but a of a slightly longer wavelength. The infrared waves travel at the speed of light, and can be detected at great distances. 
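A back-of-the-envelope check (standard physics, not taken from the article) of the claim just made: an infrared signal crossing the 11 kilometres quoted for the emperor moth arrives in a few tens of microseconds, essentially instantaneously compared with any drifting plume of molecules.

c = 299_792_458.0        # speed of light in vacuum, m/s
distance_m = 11_000.0    # the 11 km detection range quoted above

travel_time_s = distance_m / c
print(f"light/IR travel time over 11 km: {travel_time_s * 1e6:.1f} microseconds")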
A human being can detect visible light from the Great Andromeda Galaxy with the naked eye, and that light has travelled for some 2.3 million light-years, so the idea of detecting infrared light from 11 kilometres away is easy to accept. What we call "infrared" actually occupies a much wider band of the electromagnetic spectrum than does visible light -- some 40 times as wide. However, the "far infrared" portion is low-energy radiation which we perceive as heat; it is given off by all matter at "room temperature", and would be of lesser use for passing signals. Fig. LB701-F8. The visible and infrared light spectrum. From . It's worth commenting on the sizes of the infrared wavelengths and radiating structures involved. If you look back at the mosquito-antenna photos, at the bottom of each picture is a small white bar. In the first photo, this bar is marked 100 μm, in the second and third the bar is 10 μm, in the fourth 1 μm. Here "μm" means micron or micrometre (one-millionth of a metre), so the longest "hairs" are about 200 microns long, the short horn about 5 microns long. Turning now to the visible-infrared spectrum shown just above, on the wavelength scale at the bottom, the near-infrared panel is marked "1 um", the long-wave infrared "10 um", and part way into the far infrared, "100 um". Here "um" also means micron. Since the wavelengths received or emitted by antennas are similar to the sizes of the physical structures involved, this tells us that mosquito antennas are sensitive to wavelengths from the short-wave infrared (around 5 microns) to the mid-far infrared (around 100 microns). It also suggests that, because the far-infrared bands contain the normal thermal emissions which we perceive as heat, those emissions will tend to drown out the longer waves that the mosquito could otherwise distinguish during the heat of the day. And just as we can only see the stars in the sky when daylight is withdrawn, so some infrared emissions will only be useful to insects during the night. This explains why moths, mostly working at night, have elaborate feathery antennas, while butterflies, active during the day, do not. Fig. LB701-F9. Antennas of moths, butterflies, and other insects. From . The concept that insect communication is by infrared, rather than pheromones, is not unknown, although very seldom acknowledged. Philip Callahan was making this case as far back as 1965; the following extract from an article about why moths kill themselves in candle flames is interesting. "The idea that antennal sensilla of insects are dielectric waveguides or resonators to electromagnetic energy presumes the emission of such energies from insect pheromones and host plant scents. Many organic molecules chemiluminesce in the far infrared and particularly in the 7--14 μm and 15--26 μm windows. Luminescence from the insect pheromone (sex scent) was predicted by P. S. Callahan in 1965. The prediction was based on the form, arrangement, and dielectric properties of the moth antenna sensilla (spines) -- in short, on morphology and antenna design alone. The male cabbage looper moth is attracted to the acetate molecule given off by the female. The exact same coded far infrared lines (17 μm region) are emitted by a candle flame. The male moth is highly attracted to and dies attempting to mate with the candle flame". There is another far-reaching implication of the Insect Infrared Sensor Model.
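The claim above that far-infrared wavelengths coincide with ordinary thermal emission can be checked with Wien's displacement law. The short Python sketch below (standard physics, not from the article) gives the peak wavelength of thermal radiation at everyday temperatures, which falls near 10 microns, right in the band discussed.

b = 2.897771955e-3            # Wien's displacement constant, metre-kelvins

for T in (300.0, 310.0):      # roughly room temperature and body temperature, in kelvin
    peak_um = b / T * 1e6     # peak emission wavelength, converted to microns
    print(f"T = {T:5.0f} K  ->  peak thermal emission near {peak_um:4.1f} um")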
Antennas typically work both in receiving and transmitting; the functioning of a given antenna depends on how it is connected to power. Instead of producing so-called pheromones, a female insect produces an infrared wave pattern which is picked up by the male. No complex chemical-production process in the female is needed. Practical applications of the Insect Infrared Sensing Model Insect Infrared Sensing (IIS) technology will have enormous practical and commercial applications. It operates without any significant use of brainpower in the insect. If an infrared signal matching the insect's antenna structure is received, it's like a switch -- the insect has no choice but to react. This is unlike when a bait or lure is offered to an animal. The animal may react, but does so after processing the stimulus in the light of earlier encounters, and may decide to ignore the bait. The insect has no choice. For more on practical and commercial use of the Insect Infrared Sensing mechanism, see DS902: The Lurator Device for controlling Insects using Infrared. * * * * * * * * * * * * * * * * * * * * References and Links
- Olfactory system. https://en.wikipedia.org/wiki/Olfactory_system
- 4.4 Tasting, Smelling, and Touching. http://open.lib.umn.edu/intropsyc/chapter/4-4-tasting-smelling-and-touching/
- Gas-Liquid Chromatography. http://www.4college.co.uk/a/Cd/Glc.php
- Where Does All My Snot Come From?. https://www.livescience.com/54745-why-do-i-have-so-much-snot.html
- The Sense of Smell. http://www.humanphysiology.academy/Smell/Smell.html
- The Dog's Amazing Nose!. http://www.balancebehaviour.org/blah-1/
- Does the Length of a Dog's Nose Help It Smell Better?. http://dogcare.dailypuppy.com/length-dogs-nose-smell-better-5927.html
- Debbie Hadley. How Do Insects Smell?. https://www.thoughtco.com/how-insects-smell-1968161
- Philip S. Callahan. Moth and candle: the candle flame as a sexual mimic of the coded infrared wavelengths from a moth sex scent (pheromone). http://www.opticsinfobase.org/abstract.cfm?id=21558
- Y T Qiu. Scanning electron micrographs of antennal sensilla of a female mosquito. Chem. Senses, 2006:31:845-863.
- Antennae. http://etc.usf.edu/clipart/30700/30774/antennae_30774.htm
- Pheromone production, male abundance, body size, and the evolution of elaborate antennae in moths. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3297191/
- UG101: Near Infrared and the Electromagnetic Spectrum. http://dew.globalsystemsscience.org/key-messages/near-infrared-and-the-electromagnetic-spectrum
- Antenna: insects. http://kids.britannica.com/students/article/insect/275066/media
- David Noel. DS902: The Lurator Device for controlling Insects using Infrared. http://aoi.com.au/devices/Lurator/index.htm
LB701 Commenced writing 2017 Sep 5. First version 1.0 on Web 2017 Sep 15.
The Latest News and Ramblings A place to share various news, stories and video about wine, travel, winemaking There are a lot of wine terms that get thrown around in the wine world that can be quite overwhelming. One of those terms is malolactic fermentation. At the risk of taking you back to high school chemistry and biology class, I want to take a moment to explain malolactic and why it is important in winemaking. First off, there are typically two fermentations that a wine can undergo. Primary fermentation is the yeast turning sugar to alcohol (personally this is my favorite) and secondary fermentation which is malolactic fermentation. Malolactic fermentation is performed by lactic acid bacteria that can use malic acid as a food source. During the fermentation the bacteria turn malic acid, a sharp acid, into lactic acid (a softer acid). Malic acid is the same acid found in foods like granny smith apples which is why they are so tart and sharp. Conversely, lactic acid is often found in milk in the form of lactate which is typically mild. So why do we undergo malolactic fermentation in wine? There are several reasons, which include softening of the acidity, microbial stability, and flavor development. Most red wines undergo malolactic fermentation to soften out the acidity in the wine. Young red wine tends to be sharp and acidic. By converting the malic acid in the wine to lactic acid, we can soften the palate of the wine. Although we can sterile filter wines, there was a time when bottling wines that had not gone through malolactic fermentation would be at risk of fermenting in the bottle. Lactic acid bacteria are everywhere around us and would get into the wine during bottling (those of you who have ever made homemade sauerkraut know this well). As you can imagine, this is not a great situation as the fermentation releases a small amount of carbon dioxide which can push the cork, or the wine becomes hazy and tastes bad. Today many winemakers choose to bottle unfiltered which will require the need for the wine to have gone through malolactic fermentation. Finally, there is the desire to enhance the flavor of the wine. Some strains of malolactic bacteria have the potential to produce more of a compound called diacetyl. This is the same compound that makes your movie popcorn taste buttery. Those big buttery chardonnays will all have gone through malolactic fermentation to impart this buttery character in the wine. In fact, there are some well-known chardonnays that add extra malic acid to the wine to increase the amount of food for the bacteria, thus increasing the level of diacetyl and buttery flavor. Quite often these chardonnays also have a hefty dose of new oak on them as well, which is often the toasty vanilla and butterscotch flavors. These aromas work together to create an oaky buttery style that is quite sought after by some. As for me, all my red wines go through complete malolactic fermentation. However, I do not allow my white wines such as Pinot Gris, Chardonnay, Dry Riesling, Rose, and Sauvignon Blanc to go through that process. I love the beautiful high acidity we have in Oregon, and I believe the minerality and texture of these wines are really showcased by the higher acidity. So, the next time you smell a big buttery chardonnay or hear someone talking about the wine going through malolactic, you will know all about it. Often you will hear debates about whether winemaking is science or art. 
This is a question that opens a lot of discussion and is one that makes me reflect back on my career. They say your passion is what you would spend the rest of your life doing for free. I feel so lucky to have found my passion at an early age. In so many ways I feel that winemaking chose me. I did not start out to become a winemaker. Like many seventeen-year-olds coming out of high school, I had no idea what I wanted to do. I took a summer job my brother set up for me in a men’s clothing store where I spent my first two weeks in the tailor’s shop pressing clothes. I was miserable, and after a couple of weeks, my oldest brother, who had just taken a job as vineyard manager at Paraiso Springs Vineyards, checked to see how things were going and I said, “this is not my calling”. Fortunately for me, he said he had a job for me cleaning a mobile grape press. I took the job and went to work Monday morning at 6am. For the next two weeks, for 10 hours a day, I hand scrubbed a mobile grape press with Scotch-brite pads and TSP. My first experience in the wine industry was cleaning and sanitation (the most important lesson a winemaker can learn). Once I finished the press, I was asked to see if they needed help in the shop. Over the next couple of weeks, I was changing oil on tractors, welding farm equipment, and helping prepare a fleet of thirteen grape harvesters for the coming harvest. I learned how to work on hydraulics, troubleshoot electrical systems, fabricate, and fix equipment in the field in the middle of the night. There, I learned the value of problem solving, and being handy and I still use these skills every day. Later that summer, the owner called me in and asked if I would be interested in doing grape maturity sampling. Since it involved some basic chemistry with a pH probe and titration I was in heaven, as the only classes I had enjoyed in high school were chemistry and agriculture. Every day I would sample different vineyards then spend the end of the day presenting my results of Brix, pH, and Ta to the owner. He would also ask me to explain what pests or diseases I saw when I was walking the vineyard and from doing that, I learned the basic problems encountered in vineyards, how grapes mature and how decisions are made at harvest. At the end of the summer, Paraiso Springs held a release party, and I was invited to attend. I met their winemaker and was able to see how he interacted with the customers, holding court, and telling stories. Everyone loved meeting the winemaker. I learned a lot that first summer and asked the owner how you become a winemaker. He said go to UC Davis, they have a degree. Both my parents went to Davis and my maternal grandparents went to Davis, so needless to say I was enamored with the idea of a legacy. I worked for Paraiso Springs every summer and winter break to help pay for college and to get experience. When I finished my degree in Fermentation Science, I decided to stay an extra year to complete the Master Brewer’s program. I took a job as an Assistant Brewmaster and found myself bored with brewing the same batches over and over and yearned for the ever-changing nature of the wine industry. After a year, I moved to Napa Valley to start from the bottom at Robert Mondavi Winery and worked my way up to Director of Winemaking at age 26. Working for Robert Mondavi opened my eyes to the world of wine and the vast experiences it represents. 
Whether it was the architecture and design of Cliff May found in the iconic arch and tower, the sculptures by Beniamino Bufano, the annual summer concert series, or the host of culinary events every year, we were always immersed in the culture of wine, food, and the arts at Robert Mondavi. We honed our craft while continuously pushing the envelope on research in both the winery and the vineyard. Selling wine was a big part of a winemaker’s job at Robert Mondavi and I learned quickly how to present my wines to large groups of people and to hold court by telling great stories at a winemaker dinner. While in charge of our Joint Ventures in Italy and Australia I was lucky enough to experience the best of the old-world tradition and New World techniques that is very reflective of my winemaking style today. As I look back on my time at Robert Mondavi Winery, I feel truly lucky to have had the opportunity to experience so much. After 25+ years as a winemaker I have come to believe winemaking is not just art, or science. Wine is history, travel, theater, engineering, biology, chemistry, physics, food, literature, education, spirituality, and so much more. This quote by Robert Mondavi sums it up best: "Wine to me is passion. It's family and friends. It's warmth of heart and generosity of spirit. Wine is art. It's culture. It is the essence of civilization and the art of living." I have had a few great questions about what makes a wine age. Ageability really comes down to three things in wine…acidity, tannin and alcohol. Aging is just very slow oxidation. Without getting too crazy with the chemistry, I will try to break each one down. Acidity and more importantly the pH of the wine has a huge effect in determining the reactivity form of many of the compounds in wine that act as antioxidants such as color and tannin. Wines with higher acid and lower pH tend to age better than wines with low acid and high pH. Acid also plays a key role in the balance of a wine and preserving the fruit and aromatics. Tannins are a class of compounds that come from the skins and seeds of the grape as well as from oak barrels. Tannins act as natural antioxidants. The higher the tannin the more antioxidants available to slow down the aging process. Wines like Cabernet Sauvignon, Petite Verdot, Tannat and Petite Sirah all tend to have very high tannin levels. Whereas Pinot Noir, Gamay, and Barbera tend to have low natural tannin levels but often will have higher acidity to assist with aging. A good indication of the tannin level of a particular wine is the level of astringency you sense on your palate. This is the sensation of drying you feel on your gums after you swallow a sip of wine. Alcohol can also play a role in ageability, especially when it comes to dessert wines and fortified wines such as port. Alcohol levels of 17-21% act as a preservative and allow these wines to age incredibly well. The oldest wine I have ever tasted was a Madeira from the 1800’s. Madeira is a fortified wine with extremely high acidity thus giving it both high acid and high alcohol. It was still young! Aging wine is romantic. Many consumers fall in love with the idea of dusting off that bottle you have been saving for a really special experience only to have the wine be past its prime. I can’t tell you the number of wines I have saved for that special night only to have them be over aged and undrinkable. At the end of the day, you have to ask yourself if you really like aged wine. 
As a wine ages the fruit will typically go from fresh fruit characters to dried fruit characters. The acidity will soften over time and the tannins will soften. The overall wine will hopefully become more complex and integrated but eventually there will come a time when the wine peaks and starts to diminish in quality. Just because you age a wine does not make it better. Most wines these days are made to be drank upon release, even some very expensive bottles may not hold up to aging. If you want to age your wine be sure to cellar in a dark place with a consistent temperature. Ideally 55-60F is a good aging temperature, but the important thing is to not let the temperature fluctuate a lot. This expands and contracts the headspace inside the bottle allowing more oxygen to enter the bottle. If you do not have a wine fridge built for aging, a closet in the center of the house works well. Be aware of your climate and how the temperature varies. Just remember, life is short and enjoying a wine a little young is never as bad as not enjoying it at all! “Nice legs…. why thank you!” While we all know the story of this lamp, wine legs have a legacy of their own… Let’s admit it, we have all had that wine expert friend or family member tell us “Wow this wine has nice legs.” Ever wonder why people look at the legs of a wine and what it tells them? The legs running down the inside of your glass are actually alcohol. The more legs the more alcohol (try putting some vodka in a wine glass, talk about legs!!). So, what does this tell us, and should we really evaluate a wine based on its legs? Well for the answer we must go back about 800 hundred years to when English wine merchants were tasked with buying wine to bring back to the castles of England. In those days it was exceedingly difficult to get grapes ripe without some sort of terrible weather, pest or disease event forcing an early harvest. However, when those great vintages came along that allowed the grapes to ripen a bit longer, they developed more flavor and sugar. These vintages were sought after for being of higher quality because they had more hang time which in turn usually meant higher sugar. Since more sugar means more alcohol in the finished wine, merchants would use the legs of the wine to evaluate the wine’s “ripeness” with hopes of purchasing the best wine. Keep in mind the alcohol of wine back then was much lower than we typically see today. A wine with 10-11% alcohol would have been high back then! Today of course you can just look at the label to see the alcohol and of course higher alcohol does not necessarily mean higher quality. So, the next time someone holds up a glass to look at the legs, you can explain to them that they may want to just look at the label if they want to know the alcohol. Yesterday I was catching up with a friend who is recovering from COVID-19. He had a pretty mild case but like a lot of people who have had COVID, he lost his sense of smell and taste. Five weeks later it still has not come back 100%. It made me realize just how important my sense of taste and smell is. As a winemaker, my livelihood depends on my palate and more specifically, my ability to sense the aromas, flavors, textures and taste of a wine. Customers often say, “You must have an amazing palate!” like it is some sort of gift I was born with. The truth is we are all born with the same biological abilities to smell and taste. 
What separates those who are deemed to have “great palates” is the individual’s ability to connect their brain with their olfactory system and reference a previous smell or taste. Remember, none of us were born with a memory bank of smells and tastes, nor were we born with the language to describe those senses. This concept of what wine tasting really is became clear to me one day while watching “Inside the Actors Studio” with James Lipton. He was interviewing Dennis Hopper about his many great films and brought up “Easy Rider.” He asked Hopper what being on the set was like and about the many rumors of drug use during filming. Hopper laughed and explained that while he was never high while acting in the film, he could never have played those scenes if he had never experienced being high in real life. He said the essence of acting is relying on your “sense memory.” That really resonated with me as the same idea behind wine tasting: we are calling on our sense memory to recall flavors experienced in our past to describe a wine. Becoming a great wine taster takes two things in my opinion: experience and clarity. Experience comes from our everyday life. Since the day we were born we have been taking in the world around us, and since almost all of the flavors in wine echo those in the natural world, our vocabulary of wine terms is drawn largely from things found in food and nature. I had the pleasure of working with a great chef at one of my first jobs with the Robert Mondavi winery. Her name is Denise, and she taught me a lot about the importance of fresh ingredients in cooking. She would often invite me along to purchase the items needed for an upcoming luncheon we were putting together at the winery. As we strolled the aisles at the market, Denise would pull out a pocketknife and start cutting into fruits and vegetables. “Close your eyes,” she said, “here, smell this tangerine… now this orange. See the difference?” I had never really taken the time to smell the difference and let it register in my brain. It was a turning point in my career. From then on, I spent more time taking in the smells of the natural world and storing them for later. We all come from different areas of the world and different cultures. Based on your experiences in the world, you have a specific set of personal sense memories that you can use to describe wine. I will never forget the first time I was invited to sit with the winemakers at my first job. We would taste 6-8 wines blind, then discuss. This particular tasting was Chardonnay from our cellar that was being blended for bottling. As I let my pen flow with whatever came to mind, I started listing smells, flavors, and tastes. When it came time to discuss, the head winemaker called on me to tell everyone what I saw in the first wine. Looking at my notes, I was quite nervous: “lemon curd, pear, yeast, and mother’s makeup.” Everyone laughed. When asked what “mother’s makeup” was, I explained that the wine smelled exactly like the base makeup my mother used to apply. I would sit on the end of her bed before school while she got ready for work… the wine reminded me of that experience; it was my sense memory. Another important part of being a good taster is emotional clarity. When we are experiencing emotions such as frustration, nervousness, worry, or anger, I believe our ability to sense aromas and taste wines is greatly diminished.
Most days I have a routine where I come in, have my coffee, answer emails, and then go downstairs to taste through monthly QC, competitive sets, or blend tastings. I have found that if I am still thinking about an email, if something is bothering me, or if my head is generally not clear, my tasting notes are simple and nondescriptive. Over the years, when I have found myself in these situations, I put on some music; the sense memories start to connect, and I am once again able to connect the dots between what I am sensing and the language to describe it. I find Pink Floyd’s “Dark Side of the Moon” is particularly good for Pinot Noir tasting days! Quite often we see first-time tasters in the tasting room, and they are worried about the etiquette of wine. Unfortunately, wine has had its fair share of snobbery over the years, which makes many people uncomfortable when tasting wine. Tasters are often so worried about how to swirl the glass or what they are supposed to be tasting that they never really get to experience all the great things about wine. We preach non-pretentious education in the tasting room and really try to help people learn and become comfortable with the terminology and etiquette, all the while having fun. I urge you to do the same when sharing wine with friends and family. We can educate without being snobs! Remember, at the end of the day wine is supposed to be fun. We all experience wine differently, and there are really only two terms you need to know: Yum and Yuck!
<urn:uuid:48252fbe-c6d6-4fce-92d8-7e49c6df2d9d>
CC-MAIN-2022-33
https://www.paulobrienwines.com/News/Blog
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573172.64/warc/CC-MAIN-20220818063910-20220818093910-00298.warc.gz
en
0.978673
4,256
2.640625
3
Civil War Naval History 1 U.S.S. Tyler, Lieutenant Gwin, and U.S.S. Lexington, Lieutenant Shirk, engaged Confederate forces preparing to strongly fortify Shiloh (Pittsburg Landing), Tennessee. Under cover of the gunboats' cannon, a landing party of sailors and Army sharpshooters was put ashore from armed boats to determine Confederate strength in the area. Flag Officer Foote commended Gwin for his successful "amphibious" attack where several sailors met their death along with their Army comrades. At the same time he added: "But I must give a general order that no commander will land men to make an attack on shore. Our gunboats are to be used as forts, and as they have no more men than are necessary to man the guns, and as the Army must do the shore work, and as the enemy want nothing better than to entice our men on shore and overpower them with superior numbers, the commanders must not operate on shore, but confine themselves to their vessels." Flag Officer Foote again requested funds to keep the captured Eastport. He telegraphed: "I have applied to the Secretary of the Navy to have the rebel gunboat, Eastport, lately captured in the Tennessee River, fitted up as a gunboat, with her machinery in and lumber. She can be fitted out for about $20,000, and in three weeks. We want such a fast and powerful boat. Do telegraph about her, as we now have carpenters and cargo ahead on her and she is just what we want. I should run about in her and save time and do good service, Our other ironclad boats are too slow. The Eastport was a steamer on the river, and she, being a good boat, would please the West. No reply yet from the Secretary and time is precious." Had the Confederates been able to complete this fine ship, over 100 feet longer than the armored gunboats, before the rise of the rivers enabled the Federal forces to move with such devastating effect, she could well have disrupted the whole series of Union victories and postponed the collapse of Confederate defenses. U.S.S. Mount Vernon, Commander Glisson, captured blockade running British schooner British Queen off Wilmington with cargo including salt and coffee. 3 Flag Officer Du Pont, commanding joint amphibious expedition to Fernandina, Florida, reported to Secretary of the Navy Welles that he was "in full possession of Cumberland Island and Sound, of Fernandina and Amelia Island, and the river and town of St. Mary's." Confederate defenders were in the process of withdrawing heavy guns inland from the area and offered only token resistance to Du Pont's force. Fort Clinch on Amelia Island, occupied by an armed boat crew from U.S.S. Ottawa, had been seized by Confederates at the beginning of the war and was the first fort to be retaken by the Union. Commander Drayton on board Ottawa took a moving train under fire near Fernandina, while launches under Commander C. R. P. Rodgers captured steamer Darlington with a cargo of military stores. Du Pont had only the highest praise for his association with Brigadier General Wright, commanding the brigade of troops on the expedition: "Our plans of action have been matured by mutual consultation, and have been carried into execution by mutual help." The Fernandina operation placed the entire Georgia coast actually in the possession or under the control of the Union Navy. Du Pont wrote Senator Grimes three days late? that: "The victory was bloodless, but most complete in results." 
Du Pont also noted that: ''The most curious feature of the operations was the chase of a train of cars by a gunboat for one mile and a half-two soldiers were killed, the passengers rushed out in the woods The expedition was a prime example of sea-land mobility and of what General Robert E. Lee meant when he said: "Against ordinary numbers we are pretty strong, but against the hosts our enemies seem able to bring everywhere, there is no calculating." 4 Union forces covered by Flag Officer Foote's gunboat flotilla, now driving down the Mississippi, occupied strongly fortified Columbus, Kentucky, which the Confederates had been compelled to evacuate. Foote reported that the reconnaissance by U.S.S. Cincinnati and Louisville two days earlier had hastened the evacuation, the rebels leaving quite a number of guns and carriages, ammunition, and large quantity of shot and shell, a considerable number of anchors, and the remnant of chain lately stretched across the river, with a large number of torpedoes.'' The powerful fort, thought by many to be impregnable, had fallen without a struggle. Brigadier General Cullum wrote: "Columbus, the Gibraltar of the West, is ours and Kentucky is free, thanks to the brilliant strategy of the campaign, by which the enemy's center was pierced at Forts Henry and Donelson, his wings isolated from each other and turned, compelling thus the evacuation of his strongholds at Bowling Green first and now Columbus." Confederate Secretary of the Navy Mallory summarized his Navy's needs to President Davis: fifty light-draft and powerful steam propellers, plated with 5- inch hard iron, armed and equipped for service in our own waters, four iron or steel-clad single deck, ten gun frigates of about 2,000 tons, and ten clipper propellers with superior marine engines, both classes of ships designed for deep- sea cruising, 3,000 tons of first-class boiler-plate iron, and 1,000 tons of rod, bolt, and bar iron are means which this Department could immediately employ. We could use with equal advantage 3,000 instructed seamen, and 4,000 ordinary seamen and landsmen, and 2,000 first rate mechanics.'' Commander Daniel B. Ridgely, U.S.S. Santiago de Cuba, reported the capture of sloop O.K. off Cedar Keys, Florida, in February. Proceeding to St. Mark's, Florida, O.K. foundered in heavy seas. 5 Flag Officer Foote observed that the gunboats could not immediately attack the Confederate defenses at Island No. 10, down the river from Columbus. "The gunboats have been so much cutup in the late engagements at Forts Henry and Donelson in the pilot houses, hulls, and disabled machinery, that I could not induce the pilots to go in them again in a fight until they are repaired. I regret this, as we ought to move in the quickest possible time, but I have declined doing it, being utterly unprepared, although General Halleck says go, and not wait for repairs; but that can not be done without creating a stampede amongst the pilots and most of the newly made officers, to say nothing of the disasters which must follow if the rebels fight as they have done of late." Two days later he added other information: "The Benton is underway and barely stems the strong current of the Ohio, which is 5 knots per hour in this rise of water, but hope, by putting her between two ironclad steamers to-morrow, she will stem the current and work comparatively well . . . I hope on Wednesday [12 March] to take down seven ironclad gunboats and ten mortar boats to attack Island No. 10 and New Madrid. 
As the current in the Mississippi is in some places 7 knots per hour, the ironclad boats can hardly return here, therefore we must go well prepared, which detains us longer than even you would imagine necessary from your navy-yard and smooth-water standpoint . . . We are doing our best, but our difficulties and trials are legion." Flag Officer Farragut issued a general order to the fleet in which he stressed gunnery and damage control training. ''I expect every vessel's crew to be well exercised at their guns . . . They must he equally well trained for stopping shot holes and extinguishing fire. Hot and cold shot will no doubt be freely dealt us, and there must be stout hearts and quick hands to extinguish the one and stop the holes of the other." U.S.S. Water Witch, Lieutenant Hughes, captured schooner William Mallory off St. Andrew's Bay, Florida. 6 Lieutenant Worden reported U.S.S. Monitor had passed over the bar in New York harbor with U.S.S. Currituck and Sachem in company. "In order to reach Hampton Roads as speedily as possible,'' Worden wrote Secretary of the Navy Welles, ''whilst the fine weather lasts, I have been taken in tow by the tug [Seth Low]." Commander Semmes, C.S.S. Sumter, wrote J. M. Mason, Confederate Commissioner in London, it is quite manifest that there is a combination of all the neutral nations against us in this war and that in consequence we shall be able to accomplish little or nothing outside of our own waters. The fact is, we have got to fight this war out by ourselves, unaided, and that, too, in our own terms . . . The foreign intervention so much hoped for by the Confederacy was in large measure forestalled by the impressive series of Union naval successes and the effectiveness of the blockade. U.S.S. Pursuit, Acting Lieutenant David Cate, captured schooner Anna Belle off Apalachicola, Florida. 8 Ironclad C.S.S. Virginia, Captain Buchanan, destroyed wooden blockading ships U.S.S. Cumberland and U.S.S. Congress in Hampton Roads. Virginia, without trials or under way-training, headed directly for the Union squadron. She opened the engagement when less than a mile distant from Cumberland and the firing became general from blockaders and shore batteries. Virginia rammed Cumberland below the waterline and she sank rapidly, "gallantly fighting her guns," Buchanan reported in tribute to a brave foe, "as long as they were above water. Buchanan next turned Virginia's fury on Congress, hard aground, and set her ablaze with hot shot and incendiary shell. The day was Virginia's but it was not without loss. Part of her ram was wrenched off and left imbedded in the side of stricken Cumberland, and Buchanan received a wound in the thigh which necessitated his turning over command to Lieutenant Catesby ap R. Jones. Secretary of the Navy Mallory wrote to President Davis of the action: "The conduct of the Officers and men of the squadron . . . reflects unfading honor upon themselves and upon the Navy. The report will be read with deep interest, and its details will not fail to rouse the ardor and nerve the arms of our gallant seamen. 
It will be remembered that the Virginia was a novelty in naval architecture, wholly unlike any ship that ever floated; that her heaviest guns were equal novelties in ordnance; that her motive power and obedience to her helm were untried, and her officers and crew strangers, comparatively, to the ship and to each other; and yet, under all these disadvantages, the dashing courage and consummate professional ability of Flag Officer Buchanan and his associates achieved the most remarkable victory which naval annals record.'' U.S.S. Monitor, Lieutenant Worden, arrived in Hampton Roads at night. The stage was set for the dramatic battle with C.S.S. Virginia the following day. ' Upon the untried endurances of the new Monitor and her timely arrival,'' observed Captain Dahlgren, ''did depend the tide of events. . . " Flag Officer Foote's doctor reported on the busy commander's injury received at Fort Donelson where, as always, he was in the forefront: ''Very little, if any, improvement has taken place in consequence of neglect of the main [requirements] of a cure, viz, absolute rest and horizontal position of the whole extremity." U.S.S. Bohio, Acting Master W. D. Gregory, captured schooner Henry Travers off Southwest Pass, mouth of the Mississippi River. 9 Engagement lasting four hours took Place between U.S.S. Monitor, Lieutenant Worden, and C.S.S. Virginia, Lieutenant Jones, mostly at close range in Hampton Roads. Although neither side could claim clear victory, this historic first combat between ironclads ushered in a new era of war at sea. The blockade continued intact, but Virginia remained as a powerful defender of the Norfolk area and a barrier to the use of the rivers for the movement of Union forces. Severe damage inflicted on wooden-hulled U.S.S. Minnesota by Virginia during an interlude in the fight with Monitor underscored the plight of a wooden ship confronted by an ironclad. The broad impact of the Monitor-Virginia battle on naval thinking was summarized by Captain Levin M. Powell of U.S.S. Potomac writing later from Vera Cruz: ''The news of the fight between the Monitor and the Merrimack has created the most profound sensation amongst the professional men in the allied fleet here. They recognize the fact, as much by silence as words, that the face of naval warfare looks the other way now and the superb frigates and ships of the line. . . supposed capable a month ago, to destroy anything afloat in half an hour . . . are very much diminished in their proportions, and the confidence once reposed in them fully shaken in the presence of these astounding facts." And as Captain Dahlgren phrased it: ''Now comes the reign of iron and cased sloops are to take the place of wooden ships." Naval force under Commander Godon, consisting of U.S.S. Mohican, Pocahontas, and Potomska, took possession of St. Simon's and Jekyl Islands and landed at Brunswick, Georgia. All locations were found to be abandoned in keeping with the general Confederate withdrawal from the seacoast and coastal islands. U.S.S. Pinola, Lieutenant Crosby, arrived at Ship Island, Mississippi, with prize schooner Cora, captured in the Gulf of Mexico. Landing party from U.S.S. Anacostia and Yankee of the Potomac Flotilla, Lieutenant Wyman, destroyed abandoned Confederate batteries at Cockpit Point and Evansport, Virginia, and found C.S.S. Page blown up. 10 Amidst the Herculean labors of lightening and dragging heavy ships through the mud of the "19 ft. 
bar" that turned out to be 15 feet, and organizing the squadron, Flag Officer Farragut reported: I am up to my eyes in business. The Brooklyn is on the bar, and I am getting her off. I have just had Bell up at the head of the passes. My blockading shall be done inside as much as possible. I keep the gunboats up there all the time . . . Success is the only thing listened to in his war, and I know that I must sink or swim by that rule. Two of my best friends have done me a great injury by telling the Department that the Colorado can be gotten over the bar into the river, and so I was compelled to try it, and take precious time to do it. If I had been left to myself, I would have been in before this." Tug U.S.S. Whitehall, Acting Master William J. Baulsir, was accidentally destroyed by fire off Fort Monroe. 11 Landing party from U.S.S. Wabash, Commander C. R. P. Rodgers, occupied St. Augustine, Florida, which had been evacuated by Confederate troops in the face of the naval threat. Two Confederate gunboats under construction at the head of Pensacola Bay were burned by Confederate military authorities to prevent their falling into Northern hands in the event of the anticipated move against Pensacola by Union naval forces. 12 Landing party under Lieutenant Thomas H. Stevens of U.S.S. Ottawa occupied Jacksonville, Florida, without opposition. U.S.S. Gem of the Sea, Lieutenant Baxter, captured British blockade runner Fair Play off Georgetown, South Carolina. Gunboats U.S.S. Tyler, Lieutenant Gwin, and U.S.S. Lexington, Lieutenant Shirk, engaged a Confederate battery at Chickasaw, Alabama, while reconnoitering the Tennessee River. 13 Major General John P. McCown, CSA, ordered the evacuation of Confederate troops from New Madrid, Missouri, under cover of Flag Officer Hollins' gunboat squadron consisting of C.S.S. Livingston, Polk, and Pontchartrain. Flag Officer Foote advised Major General Halleck of the problems presented the partly armored ironclads by an attack downstream, much different difficulties than those encountered going up rivers in Tennessee: ''Your instructions to attack Island No. 10 are received, and I shall move for that purpose tomorrow morning. I have made the following telegram to the Navy Department, which you will perceive will lead me to be cautious, and not bring the boats within short range of the enemy's batteries. Generally, in all our attacks down the river, I will bear in mind the effect on this place and the other rivers, which a serious disaster to the gunboats would involve. General Strong is telegraphing Paducah for transports, as there are none at Cairo. The ironclad boats can not be held when anchored by stern in this current on account of the recess between the fantails forming the stern yawing them about, and as the sterns of the boats are not plated, and have but two 32-pounders astern, you will see our difficulty of fighting downstream effectually. Neither is there power enough in any of them to back upstream. We must, therefore, tie up to shore the best way we can and help the mortar boats. I have long since expressed to General Meigs my apprehensions about these boats' defects. Don't have my gunboats for rivers built with wheels amidships. The driftwood would choke the wheel, even if it had a powerful engine. I felt it my duty to state these difficulties, which could not be obviated, when I came here, as the vessels were modeled and partly built.'' Commander D. D. 
Porter reported the arrival of the mortar flotilla at Ship Island, and five days later took them over the bar and into the Mississippi in preparation for the prolonged bombardment of Forts Jackson and St. Philip. 14 Joint amphibious attack under Commander Rowan and Brigadier General Burnside captured Confederate batteries on the Neuse River and occupied New Bern, North Carolina, described by Rowan as "an immense depot of army fixtures and manufactures, of shot and shell." Commander Rowan, with 13 war vessels and transports carrying 12,000 troops, departed his anchorage at Hatteras Inlet on 12 March, arriving in sight of New Bern that evening. Landing the troops, including Marines, the following day under the protecting guns of his vessels, Rowan continued close support of the Army advance throughout the day. The American flag was raised over Forts Dixie, Ellis, Thompson, and Lane on 14 March; the formidable obstructions in the river, including torpedoes, were passed by the gunboats, and troops were transported across the Trent River to occupy the city. In addition to convoy, close gunfire support, and transport operations, the Navy captured two steamers, stores, munitions, and cotton, and supplied a howitzer battery ashore under Lieutenant Roderick S. McCook, USN. Wherever water reached, combined operations struck heavy blows that were costly to the Confederacy. Flag Officer Foote departed Cairo with seven gunboats (U.S.S. Louisville was soon forced to return for repairs) and ten mortar boats to undertake the bombardment of Island No. 10, which stood astride the sweep of Union forces down the Mississippi. Foote wired Major General Halleck: " . . . I consider it unsafe to move without troops to occupy No. 10 if we [naval forces] capture it . . . should we pass No. 10 after its capture, the rebels on the Tennessee side would return and man their batteries and thus shut up the river in our rear." 15 Flag Officer Foote's flotilla moved from Hickman, Kentucky, down river to a position above Island No. 10. Foote reported, "The rain and dense fog prevented our getting the vessels in position [to commence the bombardment]." 16 Union gunboats and mortar boats under Flag Officer Foote commenced bombardment of strongly fortified and strategically located Island No. 10 in the Mississippi River. After the loss of Forts Henry and Donelson, and as General Grant continued to wisely use the mobile force afloat at his disposal, the Confederates fell back on Island No. 10, concentrated artillery and troops, and prepared for an all-out defense of this bastion which dominated the river. Meanwhile, Lieutenant Gwin reported the operations of the wooden gunboats on the Tennessee River into Mississippi and Alabama, where they kept constantly active: "I reported to General Grant at Fort Foote on the 7th instant and remained at Danville Bridge, 25 miles above, awaiting the fleet of transports until Monday morning, by direction of General Grant, when, General Smith arriving with a large portion of his command, forty transports, I convoyed them to Savannah, arriving there without molestation on the 11th. The same evening, with General Smith and staff on board, made a reconnaissance of the river as high as Pittsburg. The rebels had not renewed their attempts to fortify at that point, owing to the vigilant watch that had been kept on them in my absence by Lieutenant Commanding Shirk." U.S.S. Owasco, Lieutenant John Guest, captured schooners Eugenia and President in the Gulf of Mexico with cargoes of cotton.
17 First elements of the Army of the Potomac under General McClellan departed Alexandria, Virginia, for movement by water to Fort Monroe and the Navy-supported Peninsular Campaign aimed at capturing Richmond. His strategy was based on the mobility, flexibility, and massed gunfire support afforded by the Union Navy's control of the Chesapeake; indeed, he was to be saved from annihilation by heavy naval guns. U.S.S. Benton, with Flag Officer Foote on board, was lashed between U.S.S. Cincinnati and St. Louis to attack Island No. 10 and Confederate batteries on the Tennessee shore at a range of 2,000 yards. "The upper fort," Foote reported, "was badly cut up by the Benton and the other boats with her. We dismounted one of their guns . . ." In the attack, Confederate gunners scored hits on Benton and damaged the engine of Cincinnati. A rifled gun burst on board St. Louis and killed or wounded a number of officers and men. C.S.S. Nashville, Lieutenant Pegram, ran the blockade out of Beaufort, North Carolina, through the gunfire of U.S.S. Cambridge, Commander W. A. Parker, and U.S.S. Gemsbok, Lieutenant Cavendy. News of the escape of Nashville caused concern to run high in Washington. Assistant Secretary of the Navy Fox wrote Flag Officer L. M. Goldsborough: "It is a terrible blow to our naval prestige . . . you can have no idea of the feeling here. It is a Bull Run of the Navy." 18 U.S.S. Florida, James Adger, Sumpter, Flambeau, and Onward captured British blockade runner Emily St. Pierre off Charleston. The master and steward, left on board, overpowered prize master Josiah Stone off Cape Hatteras, recaptured the vessel, and sailed to Liverpool, England. 19 Flag Officer Foote's forces attacking Island No. 10 continued to meet with strong resistance from Confederate batteries. "This place, Island No. 10," Foote observed, "is harder to conquer than Columbus, as the island shores are lined with forts, each fort commanding the one above it. We are gradually approaching . . . The mortar shells have done fine execution." Flag Officer Farragut described the noose of seapower: "I sent over to Biloxi yesterday, and robbed the post-office of a few papers. They speak volumes of discontent. It is no use - the cord is pulling tighter, and I hope I shall be able to tie it. God alone decides the contest; but we must put our shoulders to the wheel." 20 Confederate President Davis wrote regarding the defense of the James River approach to Richmond: "The position of Drewry's Bluff, seven or eight miles below Richmond, was chosen to obstruct the river against such vessels as the Monitor. The work is being rapidly completed. Either Fort Powhatan or Kennon's Marsh, if found to be the proper positions, will be fortified and obstructed as at Drewry's Bluff, to prevent the ascent of the river by ironclad vessels. Blockading the channel where sufficiently narrow by strong lines of obstructions, filling it with submersive batteries [torpedoes], and flanking the obstructions by well protected batteries of the heaviest guns, seem to offer the best and speediest chances of protection with the means at our disposal against ironclad floating batteries." The Confederate Navy contributed in large part to these successful defenses that for three years resisted penetration. Naval crews proved especially effective in setting up and manning the big guns, many of which had come from the captured Navy Yard at Norfolk.
21 Major General Halleck wrote Flag Officer Foote, commenting on the Navy's operations against the Confederate batteries guarding Island No. 10: "While I am certain that you have done everything that could be done successfully to reduce these works, I am very glad that you have not unnecessarily exposed your gunboats. If they had been disabled, it would have been a most serious loss to us in the future operations of the campaign . . . Nothing is lost by a little delay there." Foote's gunboat and mortar boat flotilla continued to bombard the works with telling effect. 22 C.S.S. Florida, Acting Master John Low, sailing as British steamer Oreto, cleared Liverpool, England, for Nassau. The first ship built in England for the Confederacy, Florida had her four 7-inch rifled guns sent separately to Nassau in steamer Bahama. Commander Bulloch, CSN, wrote Lieutenant John N. Maffitt, CSN: "Another ship will be ready in about two months . . . Two small ships can do but little in the way of materially turning the tide of war, but we can do something to illustrate the spirit and energy of our people." General Lovell wrote Secretary of War Benjamin that he had six steamers of the River Defense Fleet to protect New Orleans. Lovell added: "The people of New Orleans thought it strange that all the vessels of the Navy should be sent up the river and were disposed to find fault with sending in addition fourteen steamers, leaving this city without a single vessel for protection against the enemy." Confederate officials in Richmond were convinced that the greatest threat to New Orleans would come from upriver rather than from Flag Officer Farragut's force below Forts Jackson and St. Philip. A boat crew from U.S.S. Penguin, Acting Lieutenant T. A. Budd, and U.S.S. Henry Andrew, Acting Master Mather, was attacked while reconnoitering Mosquito Inlet, Florida. Budd, Mather, and three others were killed. 24 Lieutenant Gwin, U.S.S. Tyler, reported the typically ceaseless activity of the gunboats: "Since my last report, dated March 21, I have been actively employed cruising up and down the river. The Lexington arrived this morning. The Tyler, accompanied by the Lexington, proceeded up the river to a point 2 miles below Eastport, Mississippi, where we discovered the rebels were planting a new battery at an elevation above water of 60 (degrees), consisting of two guns, one apparently in position. We threw several shell into it, but failed to elicit a reply. The battery just below Eastport, consisting of two guns, then opened upon us. Their shot fell short. I stood up just outside of their range and threw three or four 20 [second] shell at that battery, none of which exploded, owing to the very defective fuze (army). The rebels did not respond. I have made no regular attack on their lately constructed batteries, as they are of no importance to us, our base of operations being so much below them. I have deemed it my duty, however, to annoy them, where I could with little or no risk to our gunboats . . . The Lexington, Lieutenant Commanding Shirk, will cruise down the river from this point. The Tyler will cruise above." U.S.S. Pensacola, towing a chartered schooner into which she had discharged guns and stores at Ship Island, arrived at the mouth of the Mississippi. She grounded and failed on four attempts to cross the bar even though water conditions were favorable and small steamships were towing her through the mud, on one occasion parting a hawser that killed two men and injured others. 25 C.S.S.
Pamlico, Lieutenant William G. Dozier, and C.S.S. Oregon, Acting Master Abraham L. Myers, engaged U.S.S. New London, Lieutenant Read, at Pass Christian, Mississippi. The rifled gun on board Pamlico jammed during the nearly two hour engagement, and the Confederate vessels broke off the action, neither side having been damaged in the test of the strength of Flag Officer Farragut's gathering forces. Transports with General Butler and troops arrived at Ship Island which, until Pensacola was retaken, became the principal base for operations west of Key West. Flag Officer Farragut wrote: "I am now packed and ready for my departure to the mouth of the Mississippi River . . I spent last evening very pleasantly with General Butler. He does not appear to have any very difficult plan of operations, but simply to follow in my wake and hold what I can take. God grant that may be all that we attempt . . victory. If I die in the attempt, it will only be what every officer has to expect. He who dies in doing his duty to his country, and at peace with his God, has played out the drama of life to the best advantage." Confederate Secretary of the Navy Mallory ordered Flag Officer Tattnall to relieve the injured Flag Officer Buchanan and "take command of the naval defenses on the waters of Virginia and hoist your flag on board the Virginia." Reports of Confederate ironclads on the river disturbed Union commanders far and wide. Major General Halleck wired Flag Officer Foote: ''It is stated by men just arrived from New Orleans that the rebels are constructing one or more ironclad river boats to send against your flotilla. Moreover, it is said that they are to be cased with railroad iron like the Merrimack. If this is so I think a single boat might destroy your entire flotilla, pass our batteries and sweep the Western rivers. Could any of your gunboats be clad in the same way so as to resist the apprehended danger? If not, how long would it require to build a new one for that purpose? I have telegraphed to the Secretary of War for authority to have any suitable boat altered or prepared; or if there be none suitable, to build a new one. As no time is to be lost, if any one of the gunboats now in service will bear this change it should be taken in preference to building a new one. I shall await your answer. Could not the Essex be so altered?" Flag Officer Foote sent Lieutenant Joseph P. Sanford, his ordnance officer, to confer with the General on the subject and replied: ''There is no vessel now in the flotilla that can be armored as you suggest. This [Benton] is the only one which could bear the additional weight of iron required and she already is so deep and wanting in steam power that it would make her utterly useless with the additional weight of iron. I suggest that a strong boat be fitted up in St. Louis and armored in fact, two vessels-in the shortest possible manner, with a view of protecting the river at Cairo, or Columbus would do better, if it was fortified with heavy guns sweeping the river below. These boats will require at least a month to be fitted up. As to the place, etc., Lieutenant Sanford will consult with you. Commander Porter of the Essex, is also in St. Louis, who is fitting out the Essex, and who will remain there for the present. He will attend to the new boats and get them ready in the shortest possible time.'' Gunboat U.S.S. Cairo, Lieutenant Bryant, seized guns and equipment abandoned by Confederate troops evacuating Fort Zollicoffer, six miles below Nashville. Gunboat U.S.S. 
Cayuga, Lieutenant Harrison, captured schooner Jessie J. Cox, en route from Mobile to Havana with cargo of cotton and turpentine. 26 Flag Officer Foote, off Island No. 10, dispatched a warning to Commander Alexander M. Pennock, his fleet captain at Cairo: "You will inform the commanders of the gunboats Cairo, Tyler, and Lexington not to be caught up the river with too little water to return to Cairo. They, of course, before leaving, will consult the generals with whom they are cooperating. As it is reported on the authority of different persons from New Orleans that the rebels have thirteen gunboats finished and ready to move up the Mississippi, besides the four or five below New Madrid, and the Manassas or ram, at Memphis, the boats now up the rivers and at Columbus or Hickman, should be ready to protect Cairo or Columbus in case disaster overtakes us in our flotilla." Union commanders in the west and elsewhere recognized how much the margin of Union superiority and the power to thrust deep into the Confederacy depended upon the gunboats, and care was exercised not to lose the effectiveness of this mobile force. Meanwhile, greatly concerned about threats of Confederate naval ironclads, Secretary of War Stanton wired the President of the Board of Trade at Pittsburg: "This Department desires the immediate aid of your association in the following particulars: 1st. That you would appoint three of its active members most familiar with steamboat and engine building who would act in concert with this Department and under its direction, and from patriotic motives devote some time and attention for thirty days in purchasing and preparing such means of defense on the Western waters against ironclad boats as the engineers of this Department may devise . . . My object is to bring the energetic, patriotic spirit and enlightened, practical judgment of your city to aid the Government in a matter of great moment, where hours must count and dollars not be squandered." Two armed boats from U.S.S. Delaware, Lieutenant Stephen P. Quackenbush, captured schooners Albemarle and Lion at the head of Panzego Creek, North Carolina. 27 Secretary of War Stanton instructed Engineer Charles Ellet, Jr.: "You will please proceed immediately to Pittsburg, Cincinnati, and New Albany and take measures to provide steam rams for defense against ironclad vessels on the Western waters." The next day he wired Ellet at Pittsburg: "General [James K.] Moorhead has gone to Pittsburg to aid you and put you in communication with the committee there. The rebels have a ram at Memphis. Lose no time." Later Stanton described the Ellet rams to General Halleck: "They are the most powerful steamboats, with upper cabins removed, and bows filled in with heavy timber. It is not proposed to wait for putting on iron. This is the mode in which the Merrimack will be met. Can you not have something of the kind speedily prepared at St. Louis also?" Armed boat expedition from U.S.S. Restless, Acting Lieutenant Conroy, captured schooner Julia Worden off South Carolina, with cargo of rice for Charleston, and burned sloop Mart Louisa and schooner George Washington. Flag Officer Du Pont reported to Secretary of the Navy Welles that Confederate batteries on Skiddaway and Green Islands, Georgia, had been withdrawn and placed nearer Savannah, giving Union forces complete control of Wassaw and Ossabaw Sounds and the mouths of the Vernon and Wilmington Rivers, important approaches to the city. 28 Commander Henry H. Bell reported a reconnaissance in U.S.S.
Kennebec of the Mississippi River and Forts Jackson and St. Philip. He noted that the "two guns from St. Philip reached as far down the river as any from Jackson" and called attention to the obstruction, "consisting of a raft of logs and eight hulks moored abreast," across the river below St. Philip. Scouting missions of this nature enabled Flag Officer Farragut to make the careful and precise plans which ultimately led to the successful passage of the forts and the capture of New Orleans. Lieutenant Stevens reported his return to Jacksonville with a launch and cutter from U.S.S. Wabash and steamers U.S.S. Darlington and Ellen after raising yacht America, which had been found sunk by the Confederates earlier in the month far up St. John's River, Florida. Stevens reported that it was "generally believed she was bought by the rebels for the purpose of carrying Slidell and Mason to England." 29 U.S.S. R. R. Cuyler, Lieutenant F. Winslow, captured blockade running schooner Grace E. Baker off the coast of Cuba. A boat under command of Acting Master's Mate Henry Eason from U.S.S. Restless captured schooner Lydia and Mary with a large cargo of rice for Charleston, and destroyed an unnamed schooner in the Santee River, South Carolina. 30 Flag Officer Foote ordered Commander Henry Walke, U.S.S. Carondelet: "You will avail yourself of the first fog or rainy night and drift your steamer down past the batteries, on the Tennessee shore, and Island No. 10 . . . for the purpose of covering General Pope's army while he crosses that point to the opposite, or to the Tennessee side of the river, that he may move his army up to Island No. 10 and attack the rebels in the rear while we attack them in front." Five days later Walke made his heroic dash past Island No. 10 to join the Army at New Madrid.
<urn:uuid:b7fb22e8-3498-4842-94be-23c9498acd4c>
CC-MAIN-2022-33
https://historycentral.com/navy/cwnavalhistory/March1862.html
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571758.42/warc/CC-MAIN-20220812200804-20220812230804-00698.warc.gz
en
0.963875
8,046
2.96875
3
Information and communication technologies are essential in the contemporary teaching-learning process. Integrating social media into education is one way to support teaching and learning activities, both formally and informally. In Romania there is little empirical research on the professional use of social media in the educational environment. The purpose of this paper is to investigate the dimensions of social media use in an educational context as a new form of learning community and their implications for professional development. The research method is based on a questionnaire survey. The instrument was created and administered online using SurveyMonkey. The questionnaire was developed through qualitative research with two focus groups on the most important aspects of social media as a new learning community. A total of 103 teachers participated in the research (48 teachers from the university level and 55 teachers who work at the pre-university level). The results show that the use of ICT is in a process of transition from Web 1.0 to Web 2.0. Teachers exchange information, use social media platforms at different levels, and contribute significantly with original products in the virtual environment. Keywords: teacher's learning, online communities, social media. Lifelong learning and the systematic improvement of professional training are imperative in the teaching profession. The teacher is an important agent through which younger generations can be shaped in order to integrate effectively into a dynamic, continuously changing society. For Romanian education to become a high-quality educational system, it is necessary for its principal actors, teachers, to hold a set of social attitudes that enable them to engage in high-quality educational approaches (Gavreliuc, Gavreliuc, 2012). Research in this field shows that there is often a discrepancy between teachers' beliefs and attitudes and their teaching practices. Among the factors that influence this situation are contextual constraints (e.g., curricular requirements, social pressure), competing beliefs, insufficient professional training, etc. (Li, 2016). Some of the transversal skills most highly valued in the labor market are digital ones. We consider that teachers should be the first to learn and use these skills in order to be able to form individuals prepared for contemporary society. These “techno-pedagogical competences” (Guichon, Hauck, 2011, p. 189) enable teachers to deliver effectively while utilizing technical skills. The scientific literature highlights the contemporary situation in which teachers use technology and social media for personal purposes and less for educational ones (Madge et al., 2009). The continuous training of teachers in Romania is one of the directions to be strengthened in the educational reform. It aims at the personal and socio-professional development of teachers, preparing them for the challenges of contemporary society and incorporating the principles of lifelong education. Teachers are obliged to obtain a minimum of 90 transferable professional credits every five years. These are acquired mostly through methodical activities, attendance at scientific events (conferences, seminars, etc.) at the school, county, and national levels, and participation in training courses, which constitute genuine learning communities. The main reasons why teachers learn are to improve students' achievement, to enrich teaching competences, and to accumulate new knowledge (Dârjan, 2010).
A large part of these learning situations refers implicitly or explicitly to teachers' digital skills. Some allow the development and practice of these skills, but few ongoing training offers allow formal and/or non-formal learning experiences to be blended with informal ones. This connection between the forms of education gives the adult access to relevant learning experiences, with an impact on teachers' beliefs and teaching strategies and on the sharing of ideas, reasoning, and practices with other professionals. For an optimal learning experience, the adult needs collaboration in a secure context, in which he or she can be actively involved, creative, experimenting, and receiving feedback, in a partnership between trainer and trainees. The need for teacher development and improvement has generated the creation of professional groups. The scientific literature defines them as “teacher teams” (Knapp, 2010), “teacher communities” (Little, 2003), or “teacher networks” (Lieberman, 2000), highlighting the role of social learning activities within these groups. We agree with the definition of social learning in teacher communities as “undertaking (a series of) learning activities by teachers in collaboration with colleagues, resulting in a change in cognition and/or behavior at the individual and/or group level” (Doppenberg, Bakx, Den Brok, 2012: 548-549). This definition highlights the constructive nature of these social learning activities, which represent the search for solutions to current problems and challenges, as well as the collaborative construction of new concepts (Vrieling, Van den Beemt, De Laat, 2016). “Teams” are considered to be groups of coworkers sharing a common goal. Although often presented as equivalent terms for describing informal social groups, the concepts of "community" and "network" are significantly different: "community" emphasizes creating an identity around a shared concern, while "network" highlights the set of connections which the individual creates (Wenger, Trayner, de Laat, 2011). The structure of teacher groups who learn can take different forms depending on their developmental needs, taking the form of either a network or a community (Vrieling, Van den Beemt, De Laat, 2016). Learning through social interaction may improve practices (Van Maele, Moolenaar, Daly, 2015), add new value in the knowledge area (Schechter, Sykes, Rosenfeld, 2008), and give teachers a sense of collective identity (Vrieling, Van den Beemt, De Laat, 2016). Based on social capital theory, we argue that teachers form networks or take part in learning communities to access new and shared relational resources (attitudes, knowledge, ideas, practices, etc.). Learning communities and networks have taken new forms because of advances in technology. They extend beyond physical space and grow in the virtual environment (Lord, Lomicka, 2008), making learning experiences more interactive and collaborative (Maor, 2003). Social media is a revolutionary concept, defined in the broadest sense as “any online service through which users can create and share a variety of content” (Bolton et al., 2013, p. 248). A more precise perspective defines the concept as “a group of Internet-based applications that build on the ideological and technological foundations of Web 2.0, and that allow the creation and exchange of user-generated content” (Kaplan, Haenlein, 2012).
Although some authors believe that the terms Web 2.0 and social media are interchangeable (Dabbagh, Reo, 2011), in reality Web 2.0 is a platform that enables the evolution of social media, “the ideological and technological foundation” of social media (Kaplan, Haenlein, 2010). Under the name of social media, the following are brought together: blogs and micro-blogging (Twitter), social networking sites (Facebook, LinkedIn, ResearchGate, Google+, forums, etc.), communities that share information in different formats (audio, video), and collaborative projects (Wikipedia, Google Docs, Edmodo, Prezi, etc.).
Social media and teaching
The use of social media in an educational context has strong effects on collaborative learning, helping students construct knowledge and perform better in complex tasks (Zhang et al., 2015). In past decades, the use of social media was studied especially in relation to student learning approaches. Connections were revealed between social media use and the development of positive attitudes towards learning (Kirschner, Karpinski, 2010), support for student self-regulated learning (Dabbagh, Kitsantas, 2012), enrichment of cooperation and interaction between students and teachers (Qureshi, Raza, Whitty, 2015), development of communication competencies (Wright et al., 2013), and development of interpersonal intelligence (Erez et al., 2013). Compared with the so-called Generation Y (those born between 1981 and 1999), or digital natives (Bolton et al., 2013; Popescul, Georgescu, 2016), previous generations are somewhat reserved about using social media in social and professional activities. Research shows that teachers use social media differently depending on the country of origin and the learning environment in which they teach. Universities are more receptive to the use of social media in communicating with students (Ranieri, Manca, Fini, 2012, p. 757) compared with staff who teach at the undergraduate level (Grosseck, Malița, 2015). Developed countries frequently use social media in the professional field (Perrotta, 2013) compared to those still developing (Popescul, Georgescu, 2016). In developing countries, an increased use of electronic communication devices (phones, tablets, and laptops) with Internet access has been noticed, but they are used for personal rather than professional purposes (Whyte, 2014). Although there are positive intentions regarding the use of social media in teaching, infrastructure and know-how are limited (Popa, Bazgan, Pălăşan, 2015). Studies concerning the use of social media in education have focused on: the practice of a differentiated pedagogy (Hew, 2011, p. 663), educational relations (Selwyn, 2009), and its use as a tool for e-learning (Qureshi, Raza, Whitty, 2014). Some recent research topics include: teachers' motivation to join an online learning community (Facebook groups) and its impact on their real professional life (Ranieri, Manca, Fini, 2012, p. 756), the interdisciplinary use of blogs in teacher education (Caldwell, Heaton, 2016), the role of online groups in supporting socialization among teachers (Edwards, Darwent, Irons, 2016, p. 420), and the role social media plays in problem solving (Buus, 2012; Barber, King, Buchanan, 2015).
Social Media and Teachers' Learning Communities
Being in a professional community means sharing a passion or developing one's expertise by interacting with others. Online communities of practice represent “socio-technological learning environments” that facilitate knowledge construction (Ozturk, Ozcinar, 2013).
Although the way teachers learn has been a constant concern of educational research (Caldwell, Heaton, 2016), the influence and attractiveness that professional learning communities exert on teachers are less investigated. Previous studies have examined the reasons why teachers take part in virtual communities (Trust, 2012), the factors that influence the use of information and communication technology (Mumtaz, 2000), the use of social media in teacher training (Munoz, Pellegrini-Lafont, Cramer, 2014), and communication in social media environments (Li, Greenhow, 2015). All of the reviewed research points to an insufficiently explored field, an invitation for further investigation.
Objective, Hypotheses, Methods, Instruments
This study investigates the use of social media for pedagogical purposes. Another aim is to investigate the benefits and disadvantages of virtual communities. Considering these objectives, the research addresses the following research questions: a) What are the main factors that make teachers consider being part of professional social media communities? b) In what ways do social media communities help them develop professional knowledge? c) What are the advantages that determine teachers to use them in their teaching practice? d) What are the limitations that prevent teachers from using them in daily pedagogical work? The research method is based on a questionnaire survey. The instrument was created and administered online using SurveyMonkey. The questionnaire was developed through qualitative research with two focus groups on the most important aspects of social media as a new learning community. One of the groups consisted of 8 teachers who teach in the university environment, and the other consisted of 9 teachers who work in primary and secondary schools. From the results of the focus groups, the 4 investigated themes emerged, comprising 20 items, with 5 for each theme. To these were added 5 items aimed at investigating socio-demographic factors (gender, age, teaching environment, experience in teaching, and discipline) and 5 items which reflected the types of social media use. In total the questionnaire reached 30 items, rated on a Likert scale from 0 ("Not at all") to 5 ("In a high degree"). A total of 103 teachers who know and use social media participated in the research (48 teachers from the university level and 55 teachers who work at the pre-university level). Of the total participants, 79 (76.7%) were women and 24 (23.3%) were men, reflecting women's preference for this profession. The age of the participants varies between 26 and 56 years: 80 (77.7%) participants are between 26-36 years, 13 (12.6%) are between 37-46 years, and 10 (9.6%) are between 47-56 years. The sample is relatively balanced in terms of teaching experience: 20 (25%) of respondents stated that they have less than 10 years of experience, 40 (39%) stated that they have less than 20 years of experience, and 37 (36%) said that they have over 20 years of teaching experience. The most used types of social media are: networking platforms (100%), collaboration platforms (70%), image and video sharing platforms (56%), and blogging (26%). The most significant networks for teachers are: Facebook (97%), Google+ (89%), forums (63%), instant messaging (54%), ResearchGate (40%), and LinkedIn (35%). The frequency of use of these platforms was one of the points of interest of the study.
The results show that more than three quarters of all participants (78%) use at least one of the forms listed above daily. A weekly frequency of use was stated by 19% of subjects, and only 3% of them stated that they access these platforms on a monthly basis. Regarding the use of social media in teaching, the results show an interesting situation. Although all participants are consumers of social media, they are reserved about using these platforms in the teaching process. The vast majority (83%) use social media to communicate quickly with students and their parents, and possibly to create a space for socializing with them and for an easy democratization of teacher-student relations. The flow of communication is largely unidirectional (from teacher to student). Teachers forward information, communicate, send tasks, and control the activity of the groups they form, creating a formal situation with a low degree of participation and commitment from pupils or students. Very few (27%) use social media in the teaching process itself, distributing materials to pupils and students, and use during instruction time is extremely low. Regarding evaluation, none of the participants use social media, although 40% of them transmit the results of knowledge tests through these platforms. Apart from how these tools are handled, the focus of the research is on the goals that determine adherence to the social media learning communities to which teachers belong. The questionnaire has four sections, aimed at: the factors that influence presence in these online communities, the main aspects of professional development that social media communities facilitate, and the advantages and limitations of using this medium. The determinants of teachers' presence in social media are: personal promotion and ensuring visibility (49%), concern for enhancing learning by engaging students (17%), modeling practice (16%), searching for and promoting active collaboration (15%), and shared values and vision (3%). Concern for professional development leads a large proportion of teachers (38%) to seek in the online environment quick ways to connect to news in their field and access to innovative ideas and new perspectives for analyzing and interpreting problems; timely knowledge of professional events (22%); problem solving in a real context (19%); the transfer of ideas into practice, fostering reflexivity (17%); and the motivating role of belonging to a competitive professional group (4%). The main advantages highlighted are: promotion of the person or institution (40%); sharing information and promoting scientific events and training opportunities (39%); obtaining support, feedback, advice, and collaboration opportunities (14%); creation of teaching resources (5%); and a sense of belonging. The disadvantages of using social media for professional purposes, as perceived by teachers, are ranked as follows: lack of privacy (39%), the lack of or limited access of many teachers to digital technologies and media (37%), lack of or limited know-how for creating online material (13%), the issue of accountability (6%), and the superficiality of many of those who post, reflected in inappropriate publishing and uploading (5%). The results show that the use of ICT is in a process of transition from Web 1.0 to Web 2.0. Teachers exchange information, use social media platforms at different levels, and contribute significantly with original products in the virtual environment.
A large part of the teachers who participated in this study are consumers rather than creators of social media content. Teachers demonstrate positive attitudes towards this environment and would like to use it more intensively, especially in teaching. The main reasons for restraint in using social media as a real learning-community environment are: lack of confidence in the veracity of information, fear of self-disclosure, limited ability to protect their identity, and insufficient information literacy.

Implications. Discussions. Limitations

The study is meant to be an invitation to the competent forums to create the leverage needed to encourage the founding of online learning communities for teachers. Low costs, accessibility, the independence they provide, and many other advantages make them attractive opportunities for a real democratization of communication, support and professional development for all teachers, regardless of where they teach. The research has some limitations: the small number of participating subjects and their lack of representativeness. The results therefore cannot be generalized, but they call for research on the phenomenon on a larger scale, on the Romanian population.

References

- Barber, W., King, S., & Buchanan, S. (2015). Problem Based Learning and Authentic Assessment in Digital Pedagogy: Embracing the Role of Collaborative Communities. Electronic Journal of e-Learning, 13(2), 59-67. Retrieved from: https://eric.ed.gov/?id=EJ1060176
- Bolton, R. N., Parasuraman, A., Hoefnagels, A., Migchels, N., Kabadayi, S., Gruber, T., Loureiro, Y. K., & Solnet, D. (2013). Understanding Generation Y and their use of social media: a review and research agenda. Journal of Service Management, 24(3), 245-267. DOI: 10.1108/09564231311326987
- Buus, L. (2012). Scaffolding Teachers Integrate Social Media into a Problem-Based Learning Approach? Electronic Journal of e-Learning, 10(1), 13-22. Retrieved from: https://eric.ed.gov/?id=EJ969432
- Caldwell, H., & Heaton, R. (2016). The interdisciplinary use of blogs and online communities in teacher education. The International Journal of Information and Learning Technology, 33(3), 2056-4880. Retrieved from: http://www.emeraldinsight.com/doi/abs/10.1108/IJILT-01-2016-0006
- Dabbagh, N., & Kitsantas, A. (2012). Personal Learning Environments, social media, and self-regulated learning: A natural formula for connecting formal and informal learning. The Internet and Higher Education, 15(1), 3-8. Retrieved from: http://www.sciencedirect.com/science/article/pii/S1096751611000467
- Dabbagh, N., & Reo, R. (2011). Back to the future: Tracing the roots and learning affordances of social software. In Lee, M. J. W., & McLoughlin, C. (Eds.), Web 2.0-based e-learning: Applying social informatics for tertiary teaching (pp. 1-20). Hershey, PA: IGI Global.
- Dârjan, I. (2010). Management comportamental în clasa de elevi. Timișoara: Editura Universității de Vest.
- Doppenberg, J., Bakx, A., & Den Brok, P. (2012). Collaborative teacher learning in different primary school settings. Teachers and Teaching: Theory and Practice, 18(5), 547-566.
- Edwards, M., Darwent, D., & Irons, C. (2016). That blasted facebook page: supporting trainee-teachers' professional learning through social media. ACM SIGCAS Computers and Society, 45(3), 420-426.
- Erez, M., Lisak, A., Harush, R., Glikson, E., Nouri, R., & Shokef, E. (2013). Going global: Developing management students' cultural intelligence and global identity in culturally diverse virtual teams.
Academy of Management Learning & Education, 12(3), 330-355. Retrieved from: http://amle.aom.org/content/12/3/330.short
- Gavreliuc, D., & Gavreliuc, A. (2012). Şcoala şi schimbare social. Timișoara: Editura Universității de Vest.
- Grosseck, G., & Malița, L. (2015). Ghid de bune practice e-learning. Timișoara: Editura Universității de Vest.
- Guichon, N., & Hauck, M. (2011). Teacher education research in CALL and CMC: more in demand than ever. ReCALL, 23(3), 187-199. doi:10.1017/S0958344011000139
- Hew, K. F. (2011). Students' and teachers' use of Facebook. Computers in Human Behavior, 27(2), 662-676. doi:10.1016/j.chb.2010.11.020
- Kaplan, A. M., & Haenlein, M. (2010). Users of the world, unite! The challenges and opportunities of Social Media. Business Horizons, 53(1), 59-68. Retrieved from: http://www.sciencedirect.com/science/article/pii/S0007681309001232
- Kaplan, A. M., & Haenlein, M. (2012). Social media: back to the roots and back to the future. Journal of Systems and Information Technology, 14(2), 101-104. Retrieved from: http://www.emeraldinsight.com/doi/abs/10.1108/13287261211232126?journalCode=jsit
- Kirschner, P. A., & Karpinski, A. C. (2010). Facebook and academic performance. Computers in Human Behavior, 26, 1237-1245. Retrieved from: http://www.sciencedirect.com/science/article/pii/S0747563210000646
- Knapp, R. (2010). Collective (team) learning process models: A conceptual review. Human Resource Development Review, 9(3), 285-299. Retrieved from: http://journals.sagepub.com/doi/abs/
- Li, J. (2016). Social Media and Foreign Language Teacher Education: Beliefs and Practices. In Lin, C. H., Zhang, D., & Zheng, B. (Eds.), Preparing Foreign Language Teachers for Next-Generation Education (p. 261). Hershey, PA: IGI Global.
- Li, J., & Greenhow, C. (2015). Scholars and social media: tweeting in the conference backchannel for professional learning. Educational Media International, 52(1), 1-14. Retrieved from: http://www.tandfonline.com/doi/abs/
- Lieberman, A. (2000). Networks as learning communities: Shaping the future of teacher development. Journal of Teacher Education, 51(3), 221-227. Retrieved from: http://journals.sagepub.com/doi/abs/10.1177/0022487100051003010
- Little, J. W. (2003). Inside teacher community: Representations of classroom practice. Teachers College Record, 105(6), 913-945. Retrieved from: https://eric.ed.gov/?id=EJ677635
- Lord, G., & Lomicka, L. (2008). Blended learning in teacher education: An investigation of classroom community across media. Contemporary Issues in Technology and Teacher Education, 8(2), 158-174. Retrieved from: http://www.citejournal.org/volume-8/issue-2-08/general/blended-learning-inteacher-education-an-investigation-of-classroom-community-across-media/
- Madge, C., Meek, J., Wellens, J., & Hooley, T. (2009). Facebook, social integration and informal learning at university. Learning, Media and Technology, 34(2), 141-155. Retrieved from: http://www.tandfonline.com/doi/abs/
- Maor, D. (2003). The teacher's role in developing interaction and reflection in an online learning community. Educational Media International, 40(1-2), 127-138. Retrieved from: http://www.tandfonline.com/doi/abs/10.1080/0952398032000092170
- Mumtaz, S. (2000). Factors affecting teachers' use of information and communications technology: a review of the literature. Journal of Information Technology for Teacher Education, 9(3), 319-34. Retrieved from: http://www.tandfonline.com/doi/abs/
- Munoz, L. R., Pellegrini-Lafont, C., & Cramer, E. (2014).
Using Social Media in Teacher Preparation Programs: Twitter as a Means to Create Social Presence. Penn GSE Perspectives on Urban Education, 11(2), 57-69. Retrieved from: https://eric.ed.gov/?id=EJ1044072
- Ozturk, H., & Ozcinar, H. (2013). Learning in multiple communities from the perspective of knowledge capital. The International Review of Research in Open and Distributed Learning, 14(1), 204-22. Retrieved from: http://www.irrodl.org/index.php/irrodl/article/view/1277/2438?
- Perrotta, C. (2013). Do school-level factors influence the educational benefits of digital technology? A critical analysis of teachers' perceptions. British Journal of Educational Technology, 44(2), 314-327. Retrieved from: http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8535.2012.01304.x/full
- Popa, D., Bazgan, M., & Pălășan, T. (2015). Teachers' Perception About Using Digital Tools in Educational Process. EDULEARN15 Proceedings, 7994-7998. Retrieved from: https://library.iated.org/view/POPA2015TEA
- Popescul, D., & Georgescu, M. (2016). Generation Y Students in Social Media: What Do We Know about Them? BRAIN. Broad Research in Artificial Intelligence and Neuroscience, 6(3-4), 74-81. Retrieved from: http://www.brain.edusoft.ro/index.php/brain/article/view/520
- Qureshi, I. A., Raza, H., & Whitty, M. (2014). Facebook as e-learning tool for higher education institutes. Knowledge Management & E-Learning, 6(4), 440-448. Retrieved from: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.700.2376&rep=rep1&type=pdf
- Qureshi, I. A., Raza, H., & Whitty, M. (2015). Facebook as e-learning tool for higher education institutes. Knowledge Management & E-Learning: An International Journal (KM&EL), 6(4), 440-448. Retrieved from: http://www.kmel-journal.org/ojs/index.php/online-publication/index
- Ranieri, M., Manca, S., & Fini, A. (2012). Why (and how) do teachers engage in social networks? An exploratory study of professional use of Facebook and its implications for lifelong learning. British Journal of Educational Technology, 43(5), 754-769.
- Schechter, C., Sykes, I., & Rosenfeld, J. (2008). Learning from success as leverage for school learning: Lessons from a national programme in Israel. International Journal of Leadership in Education: Theory and Practice, 11, 301-318. Retrieved from: http://www.tandfonline.com/doi/abs/10.1080/13603120701576274
- Selwyn, N. (2009). Faceworking: Exploring students' education-related use of Facebook. Learning, Media and Technology, 34(2), 157-174. Retrieved from: http://www.tandfonline.com/doi/abs/10.1080/17439880902923622
- Trust, T. (2012). Professional learning networks designed for teacher learning. Journal of Digital Learning in Teacher Education, 28(4), 133-138. Retrieved from: http://www.tandfonline.com/doi/abs/10.1080/21532974.2012.10784693
- Van Maele, D., Moolenaar, N. M., & Daly, A. J. (2015). All for one and one for all: A social network perspective on the effects of social influence on teacher trust. Leadership and School Quality, 171-196. Retrieved from: https://www.researchgate.net/profile/Dimitri_Van_Maele/publication/268512573_All_for_One_and_One_for_All_A_Social_Network_Perspective_on_the_Effects_of_Social_Influence_on_Teacher_Trust/links/5476e69b0cf29afed61432a0.pdf
- Vrieling, E., Van den Beemt, A., & de Laat, M. (2016). What's in a name: Dimensions of social learning in teacher groups. Teachers and Teaching, 22(3), 273-292. Retrieved from: http://www.tandfonline.com/doi/abs/10.1080/13540602.2015.1058588
- Wenger, E., Trayner, B., & de Laat, M. (2011).
Telling stories about the value of communities and networks: A toolkit. Heerlen: Open University of the Netherlands. Retrieved from: https://www.researchgate.net/profile/Maarten_Laat/publication/220040553_Promoting_and_Assessing_Value_Creation_in_Communities_and_Networks_A_Conceptual_Framework/links/0046353536fa177004000000.pdf
- Whyte, S. (2014). Bridging the gaps: Using social media to develop techno-pedagogical competences in pre-service language teacher education. Recherche et pratiques pédagogiques en langues de spécialité. Cahiers de l'Apliut, 33(2), 143-169. Retrieved from: https://apliut.revues.org/4432
- Wright, K. B., Rosenberg, J., Egbert, N., Ploeger, N. A., Bernard, D. R., & King, S. (2013). Communication competence, social support, and depression among college students: a model of facebook and face-to-face support network influence. Journal of Health Communication, 18(1), 41-57. Retrieved from: http://www.tandfonline.com/doi/abs/
- Zhang, X., Wang, W., de Pablos, P. O., Tang, J., & Yan, X. (2015). Mapping development of social media research through different disciplines: Collaborative learning in management and computer science. Computers in Human Behavior, 51, 1142-1153. Retrieved from: http://www.sciencedirect.com/science/article/pii/S0747563215001387

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Keywords: Educational strategies, educational policy, organization of education, management of education, teacher, teacher training

Cite this article as: Popa, D., & Voinea, M. (2019). Social Media – New Form Of Learning Community. In E. Soare, & C. Langa (Eds.), Education Facing Contemporary World Issues, vol 23. European Proceedings of Social and Behavioural Sciences (pp. 1842-1850). Future Academy. https://doi.org/10.15405/epsbs.2017.05.02.226
From W. C. Röntgen's Third Communication, March 1897:

'The experiments on the permeability (for X-rays) of plates of constant thickness cut from the same crystal in different orientations, which were mentioned in my first Communication, have been continued. Plates were cut from calcite, quarz, turmaline, beryl, aragonite, apatite and barytes. Again no influence of the orientation on the transparency could be found.'

'Ever since I began working on X-rays, I have repeatedly sought to obtain diffraction with these rays; several times, using narrow slits, I observed phenomena which looked very much like diffraction. But in each case a change of experimental conditions, undertaken for testing the correctness of the explanation, failed to confirm it, and in many cases I was able directly to show that the phenomena had arisen in an entirely different way than by diffraction. I have not succeeded to register a single experiment from which I could gain the conviction of the existence of diffraction of X-rays with a certainty which satisfies me.'

2.1. Physics at the Time of Röntgen's Discovery of X-rays

The first half of the nineteenth century was a period of tumultuous development of the exact sciences. The great mathematicians - Cauchy, Euler, Gauss, Hamilton, to name only a few - not only perfected the methods of analysis, but they also laid the foundations for a mathematical, quantitative, understanding of celestial and other Mechanics, of Hydrodynamics, Elasticity, Magnetism, and Optics. Following Lavoisier's introduction of the balance for checking reactions, Chemistry became a quantitative science. A series of brilliant experiments between 1820 and 1831 disclosed the relation of magnetism to galvanic electricity, and Faraday developed his notion of an electromagnetic field which was amplified and given mathematical expression by Maxwell in the 1860's. By 1848 the concept of Energy was clearly defined and the equivalence of energy and heat demonstrated. Clausius and Maxwell formulated the basic laws of Thermodynamics. The Kinetic Theory of Matter, long but vaguely foreshadowed in the works of Lucretius and of Boscovich, reached the first quantitative stage in the Theories of Gases of Maxwell and of Boltzmann. The discovery of the polarization of light (Malus, 1808) had proved that light was a transverse wave motion, and although hardly anything was known about the production of light, nearly all seemed to be known about its propagation. As a consequence, much improved telescopes, microscopes and other ingenious optical devices were being constructed and helped to open up vast new regions of the skies and of the animal and plant world. The application of the laws of physics to chemistry, engineering, and physiology made great strides and rational, quantitative and ever more precise relations replaced the former vague empiricism.

Considering the enormous advances in the mathematical description of nature, some scientists thought that science had reached such a stage of perfection that little more fundamental work remained to be done; working out new problems along the given lines was all that could be expected of future scientists. Instead, in the last one or two decades of the century a hidden new world of physical entities and facts was discovered which stood quite apart from the classical system of physics. It turned out eventually to be the foreshore of the twentieth century physics.
This discovery began in 1854 when, among other physicists, Julius Plücker in Bonn studied the spectra produced by the electric discharge in rarified gases. These brilliantly coloured and variable discharges in evacuated glass tubes, usually manufactured by the Bonn glassblower Geisler, were being very gradually classified and analysed in a descriptive way by their dark spaces, luminous band structure etc. A full understanding of the processes producing these effects came only in the 1930's when atomic theory was well advanced. In 1859 Plücker observed that in highly evacuated tubes a bright luminescence occurred on the glass wall opposite to the cathode and that this was influenced in a peculiar way by the approach of a magnet. Johann Wilhelm Hittorf found in 1869 that with increasing evacuation of the discharge tube the dark space adjoining a disc-shaped negative pole (cathode) gains in length until it finally suppresses all the luminosity in the gas and reaches out to the glass wall opposite the cathode which then shines up in a bright green light called fluorescence. Hittorf in Münster, Crookes in London and other physicists investigating this form of discharge showed that the bright spot on the glass is produced by something that leaves the cathode surface at right angles and travels in straight lines, so that the shadow of an opaque metal cross is formed in the fluorescent spot. For this reason the name of cathode rays was given to the invisible something. If these rays fall on pieces of calcite or fluorite these minerals glow in beautiful colours, which differ according to the mineral species. Here then was a novel mode of producing light which attracted many investigators. Meanwhile two important developments took place regarding cathode rays: while Plücker had already indicated that the 'rays' were, perhaps, streams of electrically charged particles emitted by the cathode and deflected by a magnet, this view was shaken by experiments undertaken by Heinrich Hertz which showed no deflection of the rays by the electric field when they passed between the plates of a condenser. (Only much later the reason for this negative result was recognized in the electrical leakage between the condenser plates caused by too poor a vacuum.) The second development came from Ph. Lenard, then a student of H. Hertz, who succeeded in letting the cathode rays pass out of the tube through a very thin aluminium foil or 'window'. The rays would traverse a few inches of air (the higher the voltage on the tube, the longer the path), while their intensity, as indicated by the brightness of a fluorescent screen, diminished exponentially as the traversed layer of air grew. The Lenard window permitted a much easier observation of fluorescence of minerals and other compounds, for no longer had a special tube to be constructed and evacuated for each observation. It should be noted that the atomistic nature of the electric charge, which in our 'Electronics Age' is a familiar fact, was still unknown in the early 1890's. True, already in 1834 Faraday had shown that in the conduction of current through salt solutions, the charges were transported in a certain unit or a small multiple of this, and never in fractional or irregular quantities. 
But these electric charge units were carried by ponderable masses, say by the atoms of the silver deposited on the cathode of an electrolytic trough, and the appearance of a unit charge could be caused equally well by the carrying capacity of the atom as by some inherent property of charge itself. In fact, the - apparent - absence of any deflection of cathode rays by electric fields, together with their power to penetrate through metal foils which are impervious to gas, gave support to the view of Hertz and many other German physicists that cathode rays were a special form of electromagnetic field, perhaps longitudinal waves, rather than a stream of corpuscles. This view persisted until 1895 and 1896 when Jean Perrin in France and J. J. Thomson in Cambridge achieved electrostatic deflection of cathode rays, and the latter, soon afterwards, using a Faraday cage collected and measured the charge transported in the cathode ray. By deflection experiments, he also determined the ratio of the charge to the mass of the cathode ray particles, e/m, and found that, assuming the charge to be the same as that occurring in electrolysis, the mass of the particle would be only about 1/1800 of the smallest known atomic mass, that of the hydrogen atom. In 1891 finally, on the proposal of Johnstone Stoney, the name of electron was universally accepted for this unit of charge. Its absolute value was determined in 1910 by Robert Millikan in Chicago as 4.77 × 10⁻¹⁰ electrostatic units, and this value, one of the most fundamental ones in Nature, was revised in 1935 by E. Backlin as a consequence of Laue's discovery. The accepted value is today 4.803 × 10⁻¹⁰ electrostatic units or 1.601 × 10⁻¹⁹ Coulomb.

2.2. Röntgen's Discovery

Let us go back to the summer of 1895 and to the beautiful old Bavarian university town and former seat of an independent bishop, Würzburg. Here, six years earlier, Wilhelm Conrad Röntgen had been appointed Professor of Physics. In the course of the summer of 1895 Röntgen had assembled equipment, such as a fairly large induction coil and suitable discharge tubes, for taking up work on the hotly contested subject of cathode rays. From Lenard's work it was known that these rays are absorbed in air, gases, and thin metal foils roughly according to the total mass of the matter traversed, and that the absorption decreases if a higher voltage is put across the discharge tube. It was also known that the intensity of the fluorescence excited in different crystals varies with the voltage used, fluorite being a good crystal for 'soft' cathode rays - those obtained with low voltage - and barium platino cyanide fluorescing strongly under bombardment by 'hard' cathode rays. Röntgen never divulged what measurements he intended to make, nor what type of discharge tube he was using when he made his great discovery. The fact that the tube was fully enclosed in a light-tight cardboard box shows that he intended to observe a very faint luminescence. But the question of whether he was interested in the law of absorption of cathode rays or in the excitation of fluorescence in different media remains unanswered. The fact is that he noticed that a barium platino cyanide screen lying on the table at a considerable distance from the tube showed a flash of fluorescence every time a discharge of the induction coil went through the tube. This flash could not be due to cathode rays because these would have been fully absorbed either by the glass wall of the tube, or by the Lenard window and the air.
Röntgen, in a breathless period of work between 8 November and the end of the year, convinced himself of the reality of his observations which at first he found hard to believe. He soon concluded that the fluorescence was caused by something, the unknown X, that travelled in a straight path from the spot where the cathode ray in the tube hit the glass wall; that the unknown agent was absorbed by metals and that these cast a shadow in the fluorescent area of the screen. He therefore spoke of X-rays; he showed that these rays were exponentially absorbed in matter with an exponent roughly proportional to the mass traversed, but very much smaller than the one found by Lenard for the corresponding cathode rays; he found the photographic action of X-rays and took the first pictures of a set of brass weights enclosed in a wooden box, and, soon after, the first photo of the bones in the living hand; he remarked that the output of X-rays can be increased by letting the cathode rays impinge on a heavy metal 'anticathode' (which may also be the anode of the tube) instead of on the glass wall and thereby started the development of the technical X-ray tube; he found that X-rays render air conductive and discharge an electrometer; he performed ingenious but entirely negative experiments for elucidating the nature of X-rays, in which he searched in vain for reflection or refraction or diffraction, the characteristic features of wave phenomena. Röntgen was well aware of the fact that he had found something fundamentally new and that he had to make doubly sure of his facts. He hated nothing more than premature or incorrect publications. According to his habit he did the work single-handed and spoke not even to his assistants about it. Finally, in December 1895 he wrote his famous First Communication for the local Würzburg Scientific Society. In its 10 pages he set out the facts in a precise narrative, but he omitted - as also in all of his previous and his later work - all personal or historical indications, as transitory elements which he considered to detract from the finality of scientific publication. The paper was quickly set and Röntgen sent out proofs or reprints as New Year's Greetings to a number of his scientific friends. After three months (March 1896) the First Communication was followed by a second one of seven printed pages. In it, Röntgen reported careful experiments on the discharge of charged insulated metals and dielectrics, by irradiation when in air, gases or vacuum; he finds that an anode of platinum emits more X-rays than one of aluminium and recommends for efficient production of X-rays the use of an aluminium cathode in form of a concave mirror and a platinum anode at its focus, inclined at 45° to the axis of the cathode. Finally he states that the target need not be simultaneously the anode of the tube. A year later (March 1897) a third and final Communication appeared, slightly longer than the first two taken together and containing further observations and measurements. From it, the Motto on page 5 of this book is taken. Together these 31 pages of the three Communications testify to the classical conciseness of Röntgen's publications. * * * The response which this discovery prompted was unheard of at a time when, in general, Science was still a matter for the select few. 
In seeing on the fluorescent screen the bones of a living hand divested of the flesh around them, medical and lay public alike were overcome by an uncanny memento mori feeling which was vented in many serious and satirical contributions to the contemporary newspapers. The first medical applications were promptly made, and the demand for 'Röntgen Tubes' quickly initiated an industry that has been expanding ever since. Röntgen, a fundamentally shy and retiring character, was ordered by the young Emperor William II to demonstrate his discovery in the Berlin palace - an invitation Röntgen could not well refuse, as he did many other demands. The writer remembers the unveiling of the four seated figures on the buttresses of the remodelled Potsdamer Brücke in Berlin which on orders of the Emperor were placed there as representative of German Science and Industry: Carl Friedrich Gauss, Hermann von Helmholtz, Werner Siemens and Wilhelm Conrad Röntgen. This must have been in 1898 or '99 and there was much discussion in the family circle whether it was appropriate to put such a novel and poorly understood discovery on an equal footing with the well-established achievements of the three other figures. - The reader will find an entertaining account of the post-discovery period (and many interesting details besides) in O. Glasser's book Wilhelm Conrad Röntgen and the History of X-rays. 2.3. Progress in the Knowledge of X-rays up to 1912 In spite of the universal enthusiasm for X-rays and the great number of physicists and medical men working in the field, only very few fundamental facts were discovered in the next fifteen years. True, a constant technical development of the X-ray tubes and of high-tension generators took place in response to the increasing demands of the medical profession, especially when the therapeutic use of very hard X-rays began to be recognized at the end of this period. The commercial availability of fairly powerful X-ray equipment greatly facilitated Friedrich and Knipping's later experiments in 1912. But of experiments disclosing something of the nature of X-rays only four need be mentioned: a. Polarization of X-rays (Barkla 1905). That X-rays are scattered, i.e. thrown out of their original direction, when passing through a body, was already noticed by Röntgen in his second communication. Barkla used this property for an experiment similar to that by which Malus had detected the polarization of light. Malus (1808) had found that the rays of the setting sun, reflected on the windows of the Palais du Luxembourg acquired a new property by this reflection; for if they were once more reflected under a certain angle by a glass plate which could be rotated around the direction of the ray coming from the windows the intensity of the twice reflected ray would vary with the angle of rotation of the glass plate, being smallest when the twice reflected ray travels at right angles to its previous two directions, and strongest if it travels in their plane. This was a proof that light is a transverse wave motion, not, like sound, a longitudinal one, which has axial symmetry. Barkla repeated this experiment with X-rays, with the only difference that, there not being any specular reflection of X-rays, he had to substitute for the reflections the much weaker scattering under an angle of approximately 90°. He found the dependence he was looking for and concluded that if X-rays were a wave motion, they were, like light, transverse waves. 
This was fully confirmed by later experiments of the same type by Herweg (1909) and H. Haga (1907).

b. Barkla's discovery of 'characteristic' X-rays (1909). X-rays could at that time only be characterized by their 'hardness', i.e. penetrating power. In general, the higher the voltage applied to the X-ray tube, the harder is the X-radiation emitted, that is, the smaller is its absorption coefficient in a given material, say aluminium or carbon. The absorption coefficient is, however, not a constant, because, since the soft components of the radiation leaving the tube are absorbed in the first layers of the absorber, the remaining radiation consists of harder X-rays. Thus the variability of the absorption coefficient with penetration depth is an indication of the inhomogeneous composition of the X-radiation. Barkla, studying tubes with anticathodes of different metals, found that under certain conditions of running the tube the emergent X-rays contained one strong homogeneous component, i.e. one whose absorption coefficient was constant. He found that the absorption coefficient decreased with increasing atomic weight of the anticathode material, and that this relation was shown graphically by two monotonic curves, one for the lighter elements and one for the heavier ones. He called these two types of radiation, characteristic for the elements from which the X-rays came, the K- and the L-Series. This discovery formed the first, if still vague, link between X-rays and matter beyond the effects determined by the mere presence of mass.

c. Photoelectric Effect. The photoelectric effect consists in the emission of electrons when light or X-rays fall on the atoms in a gas or a solid. Its first observation goes back to Heinrich Hertz, 1887, who noticed that the maximum length of the spark of an induction coil was increased by illuminating the gap with ultraviolet light. In the following year W. Hallwachs showed that ultraviolet light dissipates the charge of a negatively charged insulated plate, but not that of a positively charged plate. This happens in air as well as in vacuum and in the latter case it was proved by magnetic deflection that the dissipation of the charge takes place by the emission of electrons. In 1902 Philipp Lenard found the remarkable fact that the intensity of the light falling on the metal plate influences the rate of emission of electrons, but not their velocity. Three years later Albert Einstein recognized the importance of this result as fundamental, and in one of his famous four papers of the year 1905 he applied Planck's concept of quantized energy to the phenomenon by equating the sum of kinetic and potential energy of the emitted electron to the energy quantum hν provided by a monochromatic radiation of frequency ν:

(1/2)mv² + p = hν    (p = potential energy)

At the time this was a very bold application and generalization of the concept of quantized energy which Planck had been proposing for deriving the laws of black body radiation, and whose physical significance was by no means assured. Einstein's equation was at first not at all well corroborated by the experimental results with ultraviolet light, because the unknown work term p in the equation is of the same order of magnitude as the two other terms.
This is not so if the much larger energy hν of an X-ray is used, and the fully convincing proof of Einstein's relation had therefore to wait until the wavelength and frequency of X-rays could be determined with accuracy by diffraction on crystals, and the equation could then in turn be used for a precision method of measuring the value of Planck's constant h. Prior to this, in 1907, Willy Wien made a tentative determination of the X-ray wave-length (provided X-rays were a wave motion) by reversing the sequence of the photoelectric effect: he considered the energy of the electron impinging on the target as given by the voltage applied to the tube and, neglecting the small work term p, calculated the frequency and wave-length of the radiation released. Assuming a voltage of 20000 volt this leads to λ = 0.5 Å. W. H. Bragg interpreted the ionization of gases by X-rays (the amount of which served as a measure for X-ray intensity) as primarily a photoelectric effect on a gas molecule, with further ionizations produced by the swift ejected electrons. The fact that in this process a large amount of energy has to be transferred from the X-ray to the gas in a single act led him to consider this as a collision process and further to the concept that X-rays are a particle stream of neutral particles, or doublets of ± charge.

d. Diffraction by a Slit. Röntgen himself reports in his First Communication inconclusive attempts at producing diffraction effects by letting the X-rays pass through a fine slit. These attempts were repeated by the Dutch physicists Haga and Wind (1903). They claimed to have recorded faint diffraction fringes, but their results were challenged as possibly due to a photographic effect caused by the developing. In 1908 and 1909 B. Walter and R. Pohl in Hamburg repeated essentially the same experiment taking utmost care in the adjusting. The slit was a tapering one produced by placing the finely polished and gilded straight edges of two metal plates in contact at one end and separated by a thin flake of mica at the other. The X-rays fell normally on the slit and the photographic plate was placed behind the slit parallel to its plane. If diffraction took place, one would expect the narrowest part of the slit to produce the widest separation of fringes. On the other hand, they would be the least intense because of the narrowness of the slit. Since for complete absorption of the X-rays the plates forming the slit must have a thickness of the order of 1-2 mm and the slit width in the effective part is of the order of 1/50 mm, the slit is in reality a deep chasm through which the X-rays have to pass. Walter and Pohl's plates showed the otherwise wedge-shaped image of the slit to fan out at its narrow end into a brush-like fuzzy fringe system. Fortunately, in 1910, one of Röntgen's assistants, P. P. Koch, was engaged in constructing the first automatic microphotometer by using a pair of the recently improved photoelectric cells for the continuous registration of the blackening of a photographic plate. As soon as the instrument had been completed and tested, Koch traced several sections through the original plates of Walter and Pohl, and these showed variations which could be caused by diffraction. So, once more, the probability rose that X-rays were a wave phenomenon. The order of magnitude of the wave-length could have been obtained roughly from the fringe separation and the width of the slit on any of the cross sections taken.
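The two rough wavelength figures mentioned above can be checked with a short back-of-the-envelope calculation. This is an illustrative sketch added here, using modern values of the constants and neglecting the work term p exactly as Wien did; it is not part of the original text. Reversing the photoelectric relation for a tube voltage V gives

\[
\lambda = \frac{hc}{eV} \approx \frac{12.4\ \text{keV·Å}}{20\ \text{keV}} \approx 0.6\ \text{Å},
\]

the same order of magnitude as Wien's 0.5 Å. Likewise, in the idealized single-slit (Fraunhofer) picture, a fringe spacing Δx recorded on a plate at distance D behind a slit of width a corresponds to a wavelength of roughly

\[
\lambda \approx \frac{a\,\Delta x}{D},
\]

which is the kind of estimate that could have been read off Koch's photometer curves; the deep-chasm geometry of the actual slit, described above, satisfies this ideal case only crudely.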
But since the intensity profiles departed considerably from those obtained by diffracting light waves on a slit, Sommerfeld, the master-mathematician of diffraction problems, developed the theory of diffraction of light waves by a deep slit before discussing the Walter-Pohl-Koch curves. Both papers, Koch's and Sommerfeld's, were published together in 1912. Sommerfeld's conclusion was that the fuzziness of the fringes was caused by a considerable spectral range of the X-rays, and that the centre of this range lay at a wave-length of about 4 × 10⁻⁹ cm. This possible but by no means unique explanation was known among the physicists in Munich several months before it appeared in Annalen der Physik in May 1912. The wave-length checked approximately with W. Wien's estimate.

e. Waves or Corpuscles? At the end of 1911 X-rays still remained one of the enigmas of physics. There was, on the one hand, the very strong argument in favour of their corpuscular nature presented by the photoelectric effect. The explanation of this concentrated and instantaneous transfer of relatively large amounts of energy from a radiation field into kinetic energy of an electron was utterly impossible according to classical physics. On the other hand some phenomena fitted well with a field or wave concept of X-rays. As early as 1896 a plausible explanation of X-ray generation had been given independently by three physicists: in Manchester by Stokes, in Paris by Liénard, and in Königsberg by Wiechert. They assumed the cathode rays to consist of a stream of charged particles, each surrounded by its electromagnetic field. On impact with the target (or 'anticathode') these particles are suddenly stopped and the field vanishes or changes to the static field surrounding a particle at rest. This sudden change of field spreads outward from the anticathode with the velocity of light and it constitutes the single X-ray pulse. In many ways X-rays seem then analogous to the acoustical report of shot hitting an armour plate. In order to work out this theory so as to check it on experiments, assumptions about the impact process in the target had to be made, the simplest being a constant deceleration over a few atomic distances in the target. The theory accounted readily for the non-periodicity or spectral inhomogeneity of X-rays as shown by their non-uniform absorption; also for the polarization as shown in the double scattering experiments. It was, however, desirable to obtain more information regarding the actual stopping process, and for this purpose measurements were made in 1909 by G. W. C. Kaye on the angular distribution of the intensity of X-rays generated in thin foils, where it seemed likely that only few decelerating impacts occurred. Sommerfeld, who was one of the protagonists of the impact or 'Bremsstrahl' theory, calculated the angular distribution and found as a general result that the higher the applied voltage and therefore the velocity of the electrons, the more the emission of the field was confined to the surface of a cone surrounding the direction of the velocity, the opening of which decreased with increasing voltage. This was well confirmed by the measurements for X-rays as well as for γ-rays, provided the conditions were such that no characteristic radiation was excited in the target. One has thus to distinguish between the general X-rays generated as 'Bremsstrahlen' or 'pulses' or 'white X-rays' i.e.
through the decelerating impact, and those much more homogeneous ones with respect to absorption which are determined by the emitting material ('characteristic X-rays'). The problem arose whether polarization and directional emission could also be found for characteristic radiation. In order to study this experimentally, Sommerfeld, towards the end of 1911, appointed an assistant, Walter Friedrich, who had just finished his Doctor's Thesis in the adjoining Institute of Experimental Physics of which Röntgen was the head. The subject of his thesis had been the investigation of the directional distribution of the X-rays obtained from a platinum target; he was thus fully acquainted with the technique to be used in extending the investigation to a target, and a mode of operating the tube, which yielded strong characteristic rays, instead of Bremsstrahlung. First published for the International Union of Crystallography 1962 by N.V.A. Oosthoek's Uitgeversmaatschappij, Utrecht, The Netherlands Digitised 1999 for the IUCr XVIII Congress, Glasgow, Scotland © 1962, 1999 International Union of Crystallography
What Is Corporate Social Responsibility?

Corporate Social Responsibility has evidently been growing with liberalization, privatization and globalization. As William Ford Jr., Chairman of Ford Motor Co., said, "A good company delivers excellent products and services, and a great company does all that and strives to make the world a better place." CSR therefore means booking profits in a manner that is socially, environmentally and ethically acceptable, thereby leading to an overall positive impact on society.

CSR Practices to Be Followed by the FMCG Sector

CSR in India

The concept of Corporate Social Responsibility (CSR) is not new in India. It emerged in the 'Vedic period', before history was recorded in India. In that period, kings had an obligation towards society and merchants displayed their own business responsibility by building places of worship, education, inns and wells. Although the core function of business was to create wealth for society and was based on an economic structure, the business community and their rulers believed in the philosophy of "Sarva loka hitam", which means "the well-being of all stakeholders". Some instances can also be gathered from mythology, such as the encounter of Kubera (the Hindu lord of wealth) and Ganesha (son of Lord Shiva and Parvati), which teaches that if you have been provided with excess wealth, you owe an additional responsibility for the upliftment of society, be it social, environmental or ethical; otherwise a day will come when the imperialism and ecological disturbances created in society become a black hole that swallows you, and all your efforts over the years turn kaput. Chapple and Moon (2005), analyzing CSR activities in Asia, found that in India 72% of companies claim to have a CSR strategy, a proportion three times higher than in other developing countries of the continent.

CSR in the FMCG Sector

The Associated Chambers of Commerce and Industry of India (ASSOCHAM) recently released a report saying that Indian companies engaged in the FMCG and chemical sectors were the most active in CSR. Out of 175 Indian companies studied, the 52 companies in the FMCG sector had taken the greatest number of CSR initiatives, followed by the chemical sector and then the IT sector. Community welfare is the top CSR priority area for most Indian companies. The second most sought-after CSR initiative was providing education and enlightening rural youth in the country. Environment-based CSR initiatives placed third, with big corporates placing importance on carbon auditing and working towards reducing their impact. Finally, the corporate sector is involved in health care by providing methods to eradicate diseases and educating rural people about hygiene and disease prevention.

CSR forms an important tool in branding, especially for FMCGs. The Indian FMCG sector is the fourth largest sector in the economy and is set to grow to US$ 33.4 billion by 2015. It is characterized by many MNCs operating out of India as well as good distribution networks. The FMCG sector is also the sector that contributes most to a growing waste problem within the country, and this is something the sector must address. The food-processing industry alone is set to grow by billions of dollars and this will create its own waste streams. The potential for CSR in this sector is vast, and hopefully companies, galvanized by their growth, will continue to invest in CSR as well.
As Carter Roberts said, "It doesn't matter what industry you're in, the supply chain will include products from all around the world. Whether we're talking about fabric made in China, soybeans grown in the Amazon, palm oil harvested in Indonesia, biofuels created in Africa—companies will have to know how their products and the raw materials they use in their operations are affecting places, people, biodiversity, and the environment."

Because the FMCG sector consumes a major share of natural resources, be it agricultural raw material or energy, it bears a corresponding responsibility towards the environment. Also, as these companies are principally producers of food and cleaning products, they should take a step forward to eradicate hunger, malnutrition and unhygienic practices from society. A few of the practices that Fast Moving Consumer Goods companies should bring into operation, and that could be fruitful for the overall development of society, are categorized below.

Upliftment of the Lower Section of Society

The major areas to concentrate on for the upliftment of society are eradicating poverty and hunger and promoting school education in rural areas. There are various programs, such as Dabur's SUNDESH, that focus on ensuring the overall socio-economic development of the rural and urban poor on a sustainable basis, through different participatory and need-based initiatives. It aims to reach out to the weaker and more vulnerable sections of society, such as women and children and the illiterate and unemployed. TCCI (Tata Council for Community Initiatives), working in collaboration with the United Nations Development Programme in India, has crafted the Tata Index for Sustainable Human Development, a pioneering effort aimed at directing, managing and enhancing community work. ITC works with the concept of the Triple Bottom Line, aiming at social, economic and environmental development; the company's key focus area is raising agricultural productivity and helping the rural economy become more socially inclusive. These organizations also work with NGOs to provide shelter and food to the downtrodden sections of society.

Apart from providing livelihood and recreational facilities to the backward sections of society, the sector is strongly focused on the education of children, as they are the future of the country. There is a well-known Chinese proverb, "If you are planning for a year, sow rice; if you are planning for a decade, plant trees; if you are planning for a lifetime, educate people", and so companies are investing in free educational facilities for children. One example is SHIKSHA, P&G India's flagship Corporate Social Responsibility program and an integral part of P&G's global corporate cause, Live, Learn and Thrive, which currently reaches out to over 50 million children annually. The program funds NGO efforts to address the underlying causes of poor access to education, such as poverty, health issues, and access to immunization. In cases where schools don't exist, the program also funds their construction. Now in its 8th year, Shiksha has enabled over 385,000 lesser-privileged children to access good quality education by supporting the sustainable and tangible assets of schools. Shiksha has built and/or supported over 200 schools through interventions such as reactivating defunct government schools, building new schools or enhancing education infrastructure at existing schools.
We also have Support My School, a partnership among NDTV, Coca-Cola and UN-Habitat. The partnership took the shape of the campaign "Support My School" in January 2011. The campaign was designed to channel the strengths of the partners and come up with a model of healthy, active schools across the country. Today, through Support My School, over 100 schools spread over 10 states can lay claim to better access to sanitation, water, playing facilities, libraries, computer centres and a more welcoming learning environment. Over the last year, Support My School has evolved from a campaign into a platform. The Pearson Foundation and Tata Teleservices added a new dimension to the campaign, and several like-minded organizations, foundations, citizens and citizen groups have extended support that has led to schools being revitalized.

It is often said that there are two Indias: Bharat, which exists in the villages, and India, which thrives in the urban areas. If our country is to make real progress and leave its mark on the global stage, these two Indias must converge; prosperity will have to come to our villages, towns and cities. For the overall development of society we should therefore concentrate on rural areas, as they are a major factor in the progress of the nation. The explosion in rural consumption and the growing competition for scarce resources demand that we embrace a new collaborative model of development for agricultural practices. The key drivers of this model would be access to urban India, technology adoption, financial inclusion, education and health, and skill building.

Coming first to access to urban services, it has led to significantly higher levels of knowledge and new sources of livelihood in villages located in 19 R-Urban (Rural-Urban) clusters, such as the National Capital Region, which has emerged as a single geographical entity from Meerut in UP to Faridabad in Haryana. The idea is that creating another 50 R-Urban hubs, where every village is within one hour of travel time of an urban centre, would be transformational. This would ensure that more than two-thirds of the rural population has easy access to urban India. These urban hubs will support rural areas and become the big markets of tomorrow.

Technology has the potential to dismantle social and cultural barriers and to ensure not only quality of services but also equality of access for all. As Bill Gates rightly said, "Innovations that are guided by smallholder farmers, adapted to local circumstances, and sustainable for the economy and environment will be necessary to ensure food security in the future"; ITC has accordingly taken the e-Choupal initiative. The e-Choupal project was launched in the year 2000, has been successfully executed in over 40,000 villages and has gained appreciation worldwide. The basic problems encountered during the implementation of the project were inadequate infrastructure, telecom connectivity and bandwidth; addressing them also led to improvement of infrastructure in the country, creating jobs and strengthening the economy. As India's 'Kissan' company, ITC has undertaken to involve farmers in the management of the whole e-Choupal initiative. The project is considered a success as it acts as a win-win opportunity for the farmers and the company alike.
In the coming few years this project will be extended to 15 more states, and ITC also plans to channel other services related to the health and education sectors through the same e-Choupal infrastructure. Many more companies should come up with such ideas, which would help in skill building and be a boost for their business and for the nation's economy as well.

Empowering Women

India today is at the cusp of a paradigm change in its growth and its position in the world. We need to think big and scale up rapidly in each and every area, be it education, infrastructure, industry, financial services or equality of the genders. Even the Nobel laureate and economist Joseph Stiglitz has remarked that "Giving money to fathers does not mean it will go to their children. Money given to mothers has a greater chance of going to their children", suggesting the importance of empowering women in the country. For around two centuries, social reformers and missionaries in India have endeavoured to bring women out of the confines in which centuries of tradition had kept them. Educational attainment and economic participation are the key constituents in ensuring the empowerment of women. And since the majority of India's population lives in rural areas, we should focus on raising their standards by educating women in rural India, making them self-sufficient and wise enough to take their own decisions. As women are often the so-called Home Ministers of the family, education would also help them learn about a proper diet and cleanliness for their family, about how contraceptive precautions can control population growth, and much more. If they are educated, they can contribute to the health and education of the next generation; basically, educating a woman is educating the nation.

To quote an example, HUL has started Project Shakti in rural areas. Hindustan Unilever's Shakti Entrepreneurial Programme helps women in rural India set up small businesses as direct-to-consumer retailers. The scheme equips women with business skills and a way out of poverty, as well as creating a crucial new distribution channel for Unilever products in the large and fast-growing global market of low-spending consumers. Through this channel, rural consumers also become more aware of beneficial products in the market; Lifebuoy, for example, is a cheap soap bar that helps maintain the cleanliness that keeps diseases away. Such programmes initiated by the FMCG sector can create a win-win situation for both the companies and rural India, as the rural market is still largely untapped and offers huge scope.

Involvement of Ethical Practices and Satisfying the Customers

We know that Fast Moving Consumer Goods comprise mainly edible items, cleaning products and the like, so their consumption directly affects people's physical and mental health. As these goods are consumed on a daily basis, they should be available at reasonable prices. Since many companies have been found to exploit their customers by providing less weight for the same price, standardized packaging should be used; customer satisfaction should come before profit maximization. Also, warnings and precautions to be taken when consuming these goods should be clearly specified; in the case of cosmetics, for instance, the expiry date and the composition really need to be known before use.
Quality should also be a major concern for companies. Marico produces Parachute oil. Once, they found that their oil contained a small amount of a chemical that was not even harmful to anyone, yet they still had all the affected lots lifted from the market so that nobody would consume them, and stopped the production line in order to eliminate the problem, even though it caused them a huge loss; their main concern was quality for customers. On the other side, we have the example of Coca-Cola, which does not even purify the water before making aerated drinks, as that would cut into its profits; since these drinks are directly consumed by children and others, this could affect their health, and it should therefore improve its practices. One more example: ITC should not sell cigarettes, because even though it prints the warnings, it is making profits at the cost of somebody else's life. Both of the incidents quoted above run against rule utilitarianism and the ethics of care. Some companies, on the other hand, realize the importance of society and help it to the best of their ability. One example is Britannia: as the market was competitive, instead of using unethical practices to boost profits it followed a Blue Ocean Strategy by launching a health range in the biscuit segment. In the context of CSR initiatives, it has focused on the health care sector by providing healthy food products at minimal prices and with the best quality. Companies should also not mislead their customers, as in the case of Easy Off Bang, and should not make false promises that raise unnecessary hopes, as Complan does by saying it increases your height and mental skills. To conclude, they should be more ethical in their conduct.

Responsibility Towards the Environment

As the saying goes, "The frog does not drink up the pond in which he lives"; we should be more responsible towards the environment, for only after the last tree has been cut down, the last river has been poisoned and the last fish caught will we find that money cannot be eaten. In order to protect the environment, all these companies should take initiatives to reduce CO2 emissions, manage waste, reduce electricity and water consumption and avoid destroying forests. A few of the practices in operation at some FMCG companies follow. ITC is working towards managing water, soil and forest resources to maintain balance and ecological security. One initiative of Dabur is its medicinal plant project. "Dehi me, dadami te" (As you give me, I give you in return): this quote from an ancient text sums up Dabur's commitment to nature. With a strong foundation in the Himalayan kingdom of Nepal, Dabur has taken many strong, but quiet, initiatives. Nepal has been a major source of the herbal plants which are extensively used in Tibetan, Chinese, Nepalese and Indian medicines. However, due to indiscriminate use, over-exploitation, poor collection methods, early harvesting, and a lack of post-harvest technology, these natural reserves are depleting speedily. Dabur Nepal has started a project on medicinal plants in Nepal to provide farmers with modern technology for the cultivation of the required medicinal herbs of the Himalayas. A state-of-the-art greenhouse facility has been set up at Banepa, with the capability to produce 5-6 million saplings of medicinal plants per annum.
Besides helping to preserve natural resources, this initiative has also gone a long way towards generating employment and income for local people and improving the socio-economic conditions of the populace in the Himalayan kingdom. The Body Shop, for its part, is removing low-efficiency lighting from its stores and replacing it with LED lighting, which uses much less electricity and lasts longer. In some stores it has piloted energy-management systems that automatically control equipment such as heating and air conditioning; in others it has installed Automatic Meter Readers (AMRs) to track energy usage, and the data from the AMRs shows employees exactly how they can reduce consumption through direct action. Coca-Cola Hellenic announced the inauguration of advanced, energy-efficient power-generating capacity at its plant in Ukraine that will reduce CO2 emissions by more than 40% and increase energy efficiency by more than 32% compared with traditional power generation. Since 2004 the company has globally implemented its "Water Stewardship" strategy, which promotes three actions: reducing the water used to produce its beverages, recycling the water used in beverage manufacturing processes, and replenishing water in local communities and nature. Similar initiatives should be taken in our country to lower carbon emissions and reduce the consumption of water and electricity. The coating added to the exterior walls of Coca-Cola Japan's head office in 2005 contains an environmentally considerate photocatalytic substance; when it comes into contact with sunlight or rain, the photocatalyst cleans the air, enhances energy efficiency and serves as an anti-foulant. The company also planted Hedera canariensis, a type of ivy, on the roof of the head office building in pursuit of its goal of contributing to a reduction in CO2. India, too, now has LEED-certified buildings that help minimise the use of electricity and water.

Being Concerned for Your Employees and Improving Their Lives

Since a company's employees are the ones responsible for increasing its sales and profit, they should be provided with healthy working conditions and extra benefits. Tata runs a variety of programmes, most of them community development programmes. It is a leading provider of maternal and child health services, offering free reproductive services for women, 98% immunisation coverage in Jamshedpur, family planning and more. It also promotes sport as a way of life, having established a football academy and an archery academy and encouraging sport among its employees. Verghese Kurien, the "Father of the White Revolution" and the force behind AMUL, strongly believed that by placing technology and professional management in the hands of farmers, the living standards of millions of the rural poor could be improved. He believed that the greatest asset of this country is its people, and he dedicated his life to harnessing the power of the people in a manner that promoted their larger interests. Around 65 years later, the number of village cooperative societies has grown to a staggering 16,100, with 3.0 million milk producers pouring millions of tonnes of milk into GCMMF containers twice daily, and the living standards of these people in rural India have improved accordingly. Similarly, Nestlé has raised the living standards of people in Moga, Punjab, through dairy development.
Managing Waste and Using Resources Optimally

It is said that "the Earth was not gifted to you by your parents; it was loaned to you by your children", so we have to make it a better, happier and healthier place before returning it to them. We should therefore preserve nature and use resources optimally. An unlikely partnership between profitable FMCG companies such as Hindustan Unilever and Dabur and penniless rag pickers is now offering a hint of a fix for India's plastic junk pile-up of 12,000 tonnes a day. In an early pilot, HUL is trying to create market value for discarded sachets and lighter plastic packaging so that rag pickers have an incentive to collect them from the streets. It has also partnered with a company in Chennai to turn such flexible plastic waste into fuel oil at a viable cost, and HUL's factory in Pondicherry has been using this fuel to power its boilers. In other words, the company is making the best out of waste: managing it and producing energy from it at the same time. Another way for FMCG companies to reduce waste is to improve their packaging practices and use environmentally friendly materials. Heinz is bottling its famous ketchup in more earth-friendly packaging: it uses plant-based bottles developed by Coke, aptly named "PlantBottles", for all of its 20 oz. ketchup bottles. These bottles consist of 30 percent plant material and are made with Brazilian sugarcane ethanol, which reduces reliance on unsustainable resources compared with traditional PET bottles. The switch to more eco-friendly bottles is a vital step in reducing the company's greenhouse-gas emissions, solid waste, water consumption and energy usage. When Coke first introduced the PlantBottle in 2009, an initial life-cycle analysis by Imperial College London showed a 12 to 19 percent reduction in carbon impact, and Coca-Cola has said that last year PlantBottles eliminated the equivalent of 30,000 metric tons of CO2. Other FMCG companies should come forward and adopt such practices. Coca-Cola Central Japan Products Co., Ltd.'s Tokai Kita Plant has reduced the volume of solid waste it generates by approximately 90% by fermenting coffee grounds, used tea leaves and the sludge from processed wastewater and converting them into energy resources. Many tea and coffee companies in India could adopt the same strategy to manage waste and create biofuels from it.

RENEWING LIVES

Various FMCG companies in the country are also taking specific measures to renew people's lives by giving them a ray of hope and a reason to live. A few initiatives in this direction follow. Himalaya Herbal Healthcare signed an agreement with the Department of Prison Rehabilitation, Government of Karnataka, to create employment opportunities for prisoners with the objective of rehabilitating them. Under the agreement, prisoners cultivate medicinal herbs for Himalaya, which helps in skill building and employment generation. The programme targets prisoners charged with minor offences who have shown good behaviour and a desire to rebuild their lives. Tata runs an organised aid programme for natural disasters, including long-term rehabilitation and reconstruction work; it did commendable work during the Gujarat earthquake and the Orissa floods.
P&G's Protecting Futures programme works with partner organisations to provide puberty education, sanitary protection and sanitary facilities to help vulnerable girls stay in school. Since 2006, Protecting Futures has worked with nine partners in 20 countries, reaching more than 720,000 girls in the developing world. Almost one billion people in the developing world do not have access to clean drinking water, and as a result thousands of children die every day. The P&G Children's Safe Drinking Water Program (CSDW) reaches these people through P&G packets, a water-purifying technology developed by P&G and the U.S. Centers for Disease Control and Prevention (CDC). One small P&G packet quickly turns 10 litres of dirty, potentially deadly water into clean, drinkable water, and the packets can be used anywhere in the world, including areas affected by natural disasters. CSDW and its partners provide clean drinking water in schools, outreach to mothers in health clinics and clean drinking water for malnourished children, and they also help people living with AIDS to live positively.

Defending Animal Rights

Animals are entitled to the possession of their own lives, and their most basic interests, such as the interest in not suffering, should be given the same consideration as the similar interests of human beings. Animals should therefore not be mistreated or used for the testing of cosmetics, products and other ingredients. As the human population grows and our demand for natural resources increases, more and more habitats are devastated. Today we may be losing 30,000 species a year, a rate much faster than at any time since the last great extinction 65 million years ago that wiped out most of the dinosaurs. If we continue on this course, we will destroy even ourselves. The Body Shop is against animal testing and complies with the very strict requirements of the Humane Cosmetics Standard, set by the British Union for the Abolition of Vivisection (BUAV) and regarded as the highest standard for animal welfare in the cosmetics industry. Compliance is audited regularly, and the company also audits itself: every two years it checks its policies and compliance to ensure it adheres to the latest animal-welfare guidelines. The Body Shop has conducted many campaigns for animals.

Miracle Treatment: partnering with WSPA, the Miracle Treatment campaign offered the chance to perform a miracle by joining together to improve the welfare of endangered Borneo orangutans; it gathered more than 20,000 pledges.

Join the Humane Chain: this campaign called on Australians to take action to help end the live sheep export trade, aiming for 40,000 signatures, one for every sheep that dies en route each year; 63,000 customers joined the humane chain.

Coca-Cola, for its part, ran a holiday ad campaign to protect polar bears by donating up to $3 million to the World Wildlife Fund. Indian FMCG companies should take similar steps to save endangered species (as in the Save the Tiger campaign) and to stand up for animal rights.

Defending Human Rights

Our species is one, and each of the individuals who compose it is entitled to equal moral consideration. True civilisation is where every man gives to every other man every right he claims for himself. Initiatives should therefore be taken to create an equal society for every person, and The Body Shop has led the way here too.

I am My Homelands: supports the human right of Indigenous people to stay on their homelands.
All Together Now: aimed for 40,000 customers to leave a fingerprint and commit to constructive conversations about racism with friends, family and colleagues; tens of thousands of fingerprints were collected.

Your Beauty and Worth Cannot Be Measured: highlighted the dangers of restrictive dieting and excessive exercise, encouraging staff and customers to believe that beautiful, healthy people come in all shapes and sizes; 26,000 signatures were delivered to the Minister for Health & Ageing requesting that eating disorders become a major health priority.

Make Your Mark: the Dalai Lama helped launch this campaign, run in partnership with Amnesty International to mark the 50th anniversary of the Universal Declaration of Human Rights; three million people signed the petition.

Stop Sex Trafficking of Children and Young People: in September 2009, The Body Shop started a journey together with ECPAT International to stop the sex trafficking of children and young people. From every corner of The Body Shop around the world, the company engaged its customers, friends and family, raised awareness and funds, secured petition signatures and marched.

YES, YES, YES to Safe Sex: helped the fight against HIV by promoting safe sex and selling lip butter to fund the global HIV-awareness work of the Staying Alive Foundation. FMCG companies could be genuinely helpful in spreading this message by promoting the use of condoms.

These are a few of the campaigns conducted by The Body Shop. Indian companies should take similar steps to make the nation and its people more secure, since they too have a responsibility towards society. When recruiting or compensating employees, companies should not discriminate on the basis of caste, colour, creed, religion, gender, race or HIV/AIDS status; the principle of distributive justice should be followed in order to maintain a healthy environment. ITC, for example, believes that all its employees must live with social and economic dignity and freedom, regardless of nationality, gender, race, economic status or religion. In the management of its businesses and operations, therefore, ITC ensures that it upholds the spirit of human rights as enshrined in existing international standards such as the Universal Declaration and the Fundamental Human Rights Conventions of the ILO.

The practices mentioned above are a few of those that could be implemented to do good for all the stakeholders involved. This is good for the health of your business, the health of the nation and the health of the Earth, and more such strategies should be adopted to make the world a better and happier place. It has always been difficult for corporations to lead an examined life, but a company has a responsibility not to wait for the government or the consumer to tell it what to do: as soon as it finds out it is doing something wrong, it should stop doing it. Companies should realise their duties and start implementing policies for the benefit of humanity.
Slavery and the Slave Trade

This guide deals primarily with aspects of the transatlantic slave trade and records in the National Records of Scotland (NRS). It also mentions some other Scottish archives relating to Scotland's involvement in the trade and its abolition. Some researchers are interested in information about enslaved individuals or former enslaved people, while others are interested in conditions or events on particular plantations, slave voyages, or the abolition movement. Research is also carried out in Scottish archives into other forms and aspects of slavery, for example the concepts of free and unfree status of women and serfs in medieval Scotland; transportation to the colonies of rebels during the religious wars and of criminals; bonded labour in the early modern period; and the enslavement of Scots by North African corsairs in the seventeenth century. It is possible to carry out research on some of these subjects in the NRS, which holds the records of Scottish courts and churches, and some estate papers relating to plantations owned by enslavers. Other aspects of the trade are better researched elsewhere, for example in The National Archives, London, or in other archives and libraries. The following sections deal with aspects of the slave trade and suggest relevant sources of information.

Enslavement in Africa and slave trade voyages

There is little evidence in the NRS of the enslavement and movement of the enslaved to African ports prior to shipping. Log books of ship voyages normally remain the property of ship owners and very few have found their way to Scottish archives. The NRS holds one letter describing a voyage on a slave trader from Bleney Harper (in Barbados) to William Gordon & Company, Glasgow, May 1731 (NRS reference CS228/A/3/19). A greater proportion of evidence on the enslavement and movement of enslaved persons can be found in The National Archives (in London) in the records of the African trading companies, Customs Outport, Board of Trade and the Admiralty. For more details see the research guides on the slave trade on The National Archives website (see below under United Kingdom government sources). Where evidence of slave trade voyages exists in Scotland it is generally through court cases. For example, four cases involving owners of ships engaged in the slave trade, which were heard in the High Court of Admiralty in Scotland, are: Daniel v Graham, 1721 (NRS reference AC9/718), Clark v Inglis, 1727 (NRS reference AC9/1022), Horseburgh v Bogle, 1727 (NRS reference AC9/1042) and Alexander v Colhoun & Company, 1762 (NRS reference CS228/A/3/19). The records of the Horseburgh v Bogle case are important as they give very detailed information about the way in which the slave trade was carried out in the early eighteenth century. There are more than 70 items, including financial records, witness statements and other legal papers, providing evidence of the export of 'guinea goods' from Britain to Africa, the role of the ship's surgeon as supercargo in purchasing slaves for transportation, and his contract with the Scottish merchants who backed the venture.

Enslavement markets and auctions

Following the union of parliaments in 1707, Scotland gained formal access to the transatlantic slave trade. Scottish merchants became increasingly involved in the trade and Scottish planters (especially sugar and tobacco) began to settle in the colonies, generating much of their wealth through enslaved labour.
Evidence of the acquisition of enslaved individuals from slave traders and other enslavers can be found among the estate and plantation records and the business records of merchants and individuals involved in enslavement (see below).

Enslaved individuals on plantations

The main source of information in the NRS for events and conditions on plantations is the estate papers of landowners in Scotland who owned plantations in the colonies. Letters, inventories and, occasionally, estate plans in these collections are an excellent source for researching the lives of enslaved persons on plantations in the colonies, their living conditions and the general attitude towards slavery and the slave trade. See below under estate and plantation records and also pictorial evidence.

Researching specific enslaved persons or former enslaved persons in Scotland

It is usually time-consuming to find information about any individuals in Scotland who lived prior to the mid-19th century, but there may be opportunities for researching enslaved or formerly enslaved individuals in Scotland. Church attendance for enslaved individuals was not allowed in most colonies on the grounds that baptism might have prompted enslaved individuals to claim their right to freedom as Christians. Once in Scotland, however, many enslaved people were allowed to be baptised, and evidence of this should be in the old parish registers of baptisms. At the point of baptism, enslaved or formerly enslaved individuals often took the surnames of their enslavers, which should be borne in mind when searching baptismal registers. Freed enslaved people were also allowed to marry, and you may find an entry for their marriage in the old parish registers of marriages. In the correspondence (social letters) and household records of families who held enslaved people you might find letters or diaries referring to household enslaved individuals, or accounts for things purchased for them. These collections sometimes also contain copies of wills, which might reveal whether any enslaved people lived in the household and whether they were themselves bequeathed or were the recipients of bequests. Lists of enslaved individuals are occasionally found in estate collections; these vary in the amount of detail they give, but they usually include the name of the enslaved person, their age, any other family members and sometimes their origin and medical condition. Some formerly enslaved individuals were employed as apprentices with tradesmen; to find out more about the different types of trade records, read our guide to crafts and trades. In the late eighteenth century there was a tax on some categories of servants in Scotland, and surviving tax rolls for these are held by the NRS, arranged by burghs and counties and then by household, with the names of the servants and sometimes their jobs (NRS references E326/5 and E326/6). For more details read our guide to taxation records. After their release (or successful escape), some formerly enslaved people joined the Army. Muster rolls list new recruits and might mention any formerly enslaved persons who joined. Searching them can be an arduous and time-consuming task, so you should ideally know the regiment the individual served in and their complete name. For more information on muster rolls, see our guide on military records. Until the abolition of slavery, the release of enslaved people was formalised through a 'manumission' (a legal document granting the enslaved person his or her freedom).
Manumissions are contained within the papers of the Colonial Office and Foreign Office, held at The National Archives (TNA) - see below under United Kingdom government sources.

Records of prominent former enslaved people

Not much is known about how formerly enslaved persons integrated into Scottish society, or how they felt about and used their freedom, because there are very few first-hand accounts in Scottish archives left by formerly enslaved people. However, some individuals were well known in Scotland in their time, such as George Dale, who was transported against his will from Africa aged about eleven and ended up in Scotland after an unusual career as a plantation cook and crewman on a fighting ship. In 1789, during the time of the French Revolution, The Society for the Purpose of Effecting the Abolition of the African Slave Trade gathered evidence like George Dale's life story for the anti-slavery abolitionist cause (NRS reference GD50/235). You can read a transcript of this document in the feature on George Dale in the Learning section of this website. Another well-known formerly enslaved person was Scipio Kennedy. He had been brought to Scotland by Captain Andrew Douglas in 1702 from the West Indies, where he had been transported as a young boy from the African west coast. In 1705, Scipio joined the family of the Captain's daughter, who married John Kennedy of Culzean in Ayrshire, and it was here that Scipio got his surname. He stayed with this family for an initial 20 years, during which time he was baptised and probably also received some education. Through his baptism, Scipio was free according to Scots law, so that when he decided after 20 years to continue in service with his former owner for another 19 years, this was formalised by an indenture (NRS reference GD25/9/Box 72/9). Little is known about his later life, though he appears once in the kirk session minutes of Kirkoswald on 27 May 1728 (NRS reference CH2/562/1), accused of fornication with Margaret Gray, whom he later married. We know from references in the old parish registers that they had at least eight children and continued to live in Ayrshire until Scipio's death in 1774. Between 1756 and 1778, three cases reached the Court of Session in Edinburgh in which fugitives from slavery attempted to obtain their freedom. A central argument in each case was that the enslaved person, having been bought in the colonies, had subsequently been baptised by sympathetic church ministers in Scotland. The three cases were Montgomery v Sheddan (1756), Spens v Dalrymple (1769) and Knight v Wedderburn (1774-77). The last case was the only one decided by the Court: James Montgomery (formerly 'Shanker', the property of Robert Sheddan of Morrishill in Ayrshire) died in the Edinburgh Tolbooth before the case could be decided, and David Spens (previously 'Black Tom', belonging to Dr David Dalrymple in Methill in Fife) sued Dalrymple for wrongful arrest, but Dalrymple died during the suit. Joseph Knight sought the freedom to leave the employment of John Wedderburn of Bandean, who argued that Knight, even though he was not recognised as an enslaved individual, was still bound to provide perpetual service in the same manner as an indentured servant or an apprenticed artisan (see Court of Session cases below).

The abolition movement

Many individual Scots were involved in the movement to abolish slavery or helped fugitives from slavery in Scotland in their quest for freedom.
The Church of Scotland and other churches were also involved in petitioning parliament to abolish the slave trade in the late eighteenth and early nineteenth centuries, and individual church ministers baptised enslaved individuals in order to aid their attempts to gain freedom. The Court of Session cases challenging the status of slavery in Scotland reveal that local people helped fugitives from slavery – see under Court of Session cases. The NRS and SCAN online catalogues and the National Register of Archives can be used to some extent to search for material about the abolition movement and leading abolitionist figures, such as William Dickson of Moffat and William Wilberforce. See under 'Searching the NRS, SCAN and NRAS online catalogues' below. Researchers into the abolition movement in Scotland should refer to Iain Whyte, Scotland and the Abolition of Black Slavery, 1756-1838 (Edinburgh University Press, 2006).

Court of Session cases

The Court of Session, Scotland's supreme civil court, heard some cases concerning the commercial and property-owning aspects of the slave trade. Three cases concerning the status of enslaved people in Scotland also survive among the unextracted processes of the court in the NRS, as follows:

Montgomery v Sheddan, 1756 – Among the petitions, declarations and other submissions by Sheddan and Montgomery in the Court of Session (NRS reference CS234/S/3/12) there survives the bill of sale from Joseph Hawkins, Fredricksburg, to Robert Sheddan of 'One Negroe boy named Jamie' (9 March 1750). To read more, see the feature on the Montgomery slavery case in the Learning section of this website.

Spens v Dalrymple, 1769 – The papers in the unextracted processes are NRS reference CS236/D/4/3 box 104 and NRS reference CS236/S/3/13. For more information, see the feature on the Spens slavery case in the Learning section of this website.

Knight v Wedderburn, 1774-7 – The unextracted processes for this case (NRS reference CS235/K/2/2) include an extract of process by the Sheriff Depute of Perth against Sir John Wedderburn (1774) and memorials by Wedderburn and Knight. For more information, see the feature on the Knight slavery case in the Learning section of this website.

Estate and plantation records

Scottish families who settled in the colonies maintained contact with their relatives in Scotland, and extensive series of correspondence survive in some Scottish estate collections. In these letters, the work and life of enslaved people on the plantations is often touched on, and we also learn how enslaved individuals rebelled against their captivity, either by absconding from their enslavers or through organised rebellion. Although most enslaved people were made to work on their enslavers' plantations, enslaved individuals were often employed in their enslavers' households as servants, and would occasionally be mentioned in letters or diaries. It was mostly these enslaved individuals whom enslavers would take with them if they returned to Scotland. Accounts reveal expenditure made for enslaved persons, such as clothing, food and vaccines, but also things like shackles and collars. Estate collections sometimes include household inventories drawn up at the death of the estate owner, which might mention enslaved people. Estate plans might show how enslaved individuals were accommodated.
Some examples of plantation records in the NRS are Cameron and Company, Berbice, 1816-1824 (NRS reference CS96/972), William Fraser, Berbice, 1830-1831 (NRS reference CS96/1947), Robert Cunnyngham, St Christopher's, 1729-1735 (NRS reference CS96/3102) and the Earls of Airlie, Jamaica, 1812-1873 (NRS reference GD16/27/291). Our online catalogue can be searched by planter's name, plantation name or by keywords such as 'slavery', 'slaves', 'negro', 'negroes', 'plantation', or a combination of keywords.

Business records of merchants and enslavers

Business records (such as correspondence, accounts and ledgers) give an insight into how the slave trade was operated. Letters between slave traders can reveal how slave markets and auctions were identified and how enslaved people were transported to the colonies and sold there. Merchants' correspondence relating to the slave trade often concerns the triangular trade with the colonies but may also include references to the abolition of the slave trade insofar as it affected their business. Letters to and from purchasers tell us about the characteristics customers typically looked for in enslaved individuals. Accounts will usually give the sum of money paid or received and may also mention the purchasers' names and the physical condition of the enslaved person. Although enslaved people's names are occasionally included as an 'identifier', normally only their first name is given. Examples of business records in the NRS referring to the slave trade are Buchanan & Simpson, Glasgow, 1754-1773 (NRS reference CS96/502-509) and Cameron and Company, Berbice, 1816-1824 (NRS reference CS96/972-983). The CS96 records normally relate to Court of Session cases, whose references may be found in the same catalogue entry. To find relevant business records, you would ideally know the name of the company or individual dealing in enslavement, as the entries in our online catalogue are arranged by record creator. However, the above examples were identified by using relevant search terms such as 'slave', 'slaves' and 'slave trade'.

Wills and testaments

There is evidence from wills and testaments that enslaved people in the colonies were regarded as 'moveable property', meaning they could be bequeathed after the owner's death. Copies of original testaments of plantation owners may survive in estate papers or among family papers. If the testament was registered by a court whose jurisdiction covered the plantation itself, the registers might survive in the relevant national archives of that country. Scots who owned land in both the colonies and Scotland could have their testaments registered in the Commissary Court of Edinburgh and (later) the Sheriff Court of Edinburgh. The registers for both of these have been digitised and are searchable online via the ScotlandsPeople website. See below under 'Websites and bibliography'.

Registers of Deeds

Contracts, indentures, factories and other legal papers concerning the sale of enslaved people can give details about the transaction, the parties involved, the price paid and other conditions under which the sale was to be finalised. Some of these are among collections of estate and plantation records or family papers (e.g. an indenture between John Davies, Antigua, and James Matthew Hodges, Antigua, regarding the sale of an enslaved person, 1833 (NRS reference GD209/21), and an indenture between Eliza Mines, Jamaica, and Cunningham Buchanan, Jamaica, regarding the sale of two enslaved women, 1809 (NRS reference CS228/B/15/52)).
It is possible that many others might appear in the various registers of deeds in the NRS, which can be very time-consuming to search. Many registers are not indexed, and those which are indexed are indexed only by personal name. For more details see our research guide on searching registers of deeds.

Pictorial evidence

The NRS frequently receives enquiries for images of enslavement, the slave trade, the abolition movement, aspects of plantation life and related topics. Almost all of the information in the NRS relating to these topics is in written form. The best source of pictorial illustrations and images in Scotland is Glasgow City Libraries and Archives. A good starting point is the 2002 exhibition 'Slavery and Glasgow', which is available online at the Scottish Archive Network (SCAN) website. Two published maps of the Gold Coast have come to the NRS via private record collections: (1) a map of Africa according to Mr D'Anville, with additions and improvements and a particular chart of the Gold Coast showing European forts and factories, 1772, published by Robert Sayer, London (NRS reference RHP2069), and (2) a map of Africa, improved and enlarged from D'Anville's map, including an inset map of the Gold Coast and a vignette of African figures, 1794, published by Laurie & Whittle, London (NRS reference RHP9779). Some access restrictions apply to the second map: consult NRS Historical Search Room staff.

Searching the NRS, SCAN and NRAS online catalogues

The NRS online catalogue contains many detailed entries at item level, and it is possible to search it using terms such as 'slave' and 'slavery', and by the name of a plantation or plantation owner. It is less likely to yield information on enslaved individuals and formerly enslaved individuals unless they became well known. The Scottish Archive Network (SCAN) online catalogue contains summary details of collections of records in more than 50 Scottish archives. Again, this might be useful for searching for records of plantations and their owners, but not for many other aspects of slavery. The SCAN website also contains the exhibition Slavery and Glasgow, which displays images of many of the types of material covered by this guide. The online register of the National Register of Archives for Scotland (NRAS) is a catalogue of records held privately in Scotland.

United Kingdom government sources

Acts, statutes and slave registers

The Act of 1807 only abolished the transatlantic slave trade (the shipping of enslaved people from Africa to the colonies in the Americas). The sale and transport of enslaved people between colonies were not affected by this legislation. Moreover, in spite of the new law, the slave trade across the Atlantic continued illicitly. In response, the British government passed a Bill in 1815 requiring the registration of legally purchased slaves in the colonies, and the system of slave registration was gradually introduced by 1817. The registers are an excellent source for researching enslaved individuals. The amount of detail they give varies, but you can generally expect to find the enslaver's name, and the enslaved person's name, age, country of birth, occupation and further remarks. You should be aware when studying these records that there was some opposition to the registration Bill among enslavers, so the registers are not complete. The NRS does not hold slave registers; for most former colonies, you will need to contact the respective national archive services.
In 1816, another Act came into force requiring an annual return of the enslaved population in each colony. The returns were obtained by parish and normally record the enslaver's name and the number of male and female enslaved people in their possession; they do not normally include the enslaved persons' names. These records are a good source for identifying individual enslavers. Returns were taken until 1834.

During the 1820s, the British government began to make provisions for the gradual amelioration of slavery. This development towards its complete abolition in the British colonies is well documented in private and business letters from enslavers as well as in speeches and pamphlets by abolitionists (see under 'The abolition movement' above). The new measures imposed by the government included Acts for the 'government and protection of the slave population', passed between 1826 and 1830. These Acts addressed topics such as minimum standards for food and clothing, labour conditions, penal measures and provisions for old and sick enslaved individuals. In Jamaica, enslaved persons could no longer be separated from their families, freed enslaved people were allowed to own personal property and to receive bequests, and the murder of an enslaved person was to be punished with death. In Barbados, owners were instructed to have all their enslaved individuals baptised, clergymen were required to record births, baptisms, marriages and deaths occurring in the enslaved population, and enslaved people charged with capital offences were to be tried in court in the same way as white and free-coloured persons. In Grenada, every enslaved individual was to be given a proportion of land adequate to their support and granted 28 working days per year to cultivate it. In Antigua, enslavers were required to build a two-roomed house for every enslaved woman pregnant with her first child. A printed abstract of these Acts is held within a private collection (NRS reference GD142/57). For further information see the Parliamentary Archives website.

Occasionally, enslavers would decide to release some of their enslaved people. The release was formalised through a 'manumission' (a document granting the enslaved person his or her freedom). Manumissions are contained within the papers of the Colonial Office and Foreign Office, held at The National Archives (TNA). For more details of these and the records of the Office of the Registry of Colonial Slaves and Slave Compensation Commission, 1812-1851, including the central register of slaves in London, see the research guides on the slave trade on The National Archives website. There are also some individual manumissions contained in estate papers held privately in Scotland. To search these and to find out more about how to access them, see the National Register of Archives for Scotland online register.
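For researchers who transcribe details from slave registers, annual returns or estate lists into their own notes or spreadsheets, a simple, consistent record structure can make later cross-referencing easier. The sketch below (in Python) is only an illustration: the field names mirror the kinds of details this guide says such sources typically record, and they are not an official schema of the registers or of any archive catalogue.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TranscribedEntry:
    """One entry transcribed from a register, return or estate list.

    Field names are illustrative assumptions based on the details described
    in this guide (enslaver's name; enslaved person's name, age, country of
    birth, occupation, remarks), not an official schema.
    """
    enslaver_name: str
    enslaved_person_name: str = ""
    age: Optional[int] = None
    country_of_birth: Optional[str] = None
    occupation: Optional[str] = None
    remarks: str = ""
    source_reference: str = ""      # e.g. the catalogue reference of the document consulted
    notes: List[str] = field(default_factory=list)  # researcher's own working notes

# Example of recording a single (entirely invented) entry for illustration:
entry = TranscribedEntry(
    enslaver_name="Example Owner",
    enslaved_person_name="Example Name",
    age=24,
    occupation="field labourer",
    source_reference="(reference of the register consulted)",
)
```

Keeping the source reference with every entry, as in the sketch, makes it straightforward to return to the original document and to cite it accurately later.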
Websites and bibliography

Scottish Archive Network (SCAN) - use the online catalogue to search for records relating to slavery in Scottish archives and view the Slavery and Glasgow exhibition
The National Archives, London (TNA) - consult the research guides on slavery and the slave trade
One Scotland website - includes a list of resources on Scotland and the slave trade
Scottish Government National Improvement Hub - learning resources concerning slavery and human trafficking
ScotlandsPeople - census returns; civil registers of births, deaths and marriages (from 1855 onwards); Old Parish Registers of baptisms and marriages; wills and testaments registered in Scotland
Parliamentary Archives website - includes a micro-site: Parliament and the British Slave Trade

Eric J Graham, A Maritime History of Scotland 1650-1790 (Tuckwell Press, 2002)
Eric J Graham, Seawolves: Pirates and the Scots (Birlinn Ltd, 2007)
David Hancock, Citizens of the World: London Merchants and the Integration of the British Atlantic Community, 1735-1785 (Cambridge University Press, 1995)
Alan L Karras, Sojourners in the Sun: Scottish Migrants in Jamaica and the Chesapeake, 1740-1800 (Cornell University Press, 1992)
Kenneth Morgan, Slavery, Atlantic Trade and the British Economy, 1660-1800 (Cambridge University Press, 2001)
Iain Whyte, Scotland and the Abolition of Black Slavery, 1756-1838 (Edinburgh University Press, 2006)
Frances Wilkins, Dumfries and Galloway and the Transatlantic Slave Trade (Wyre Forest Press, 2007)
Darkometer rating: 6

A former prison in the heart of Potsdam, Brandenburg, Germany, that was used successively by the Nazis, the Soviets and then, for more than two and a half decades, the Stasi in the GDR. Turned into a memorial space after the German Peaceful Revolution of 1989 and the country's reunification, it is now one of the visually most stunning dark sites in Potsdam.

More background info: The building on Lindenstraße 54/55 dates back to the 18th century. It was built during the reign of King Frederick William the First of Prussia in the Dutch red-brick style that was fashionable at that time. It was a city palace used by the military. Later, during the French occupation by Napoleon's troops, the site was used as a storage facility and – I'm not making this up! – as a hospital for horses! In the early 19th century the building took on a political/administrative role, namely as the home of the assembly of city deputies ("Stadtverordnetenversammlung"), the local Potsdam parliament of sorts. It wasn't until 1820 that the building first became a court of justice and remand prison, giving rise to extensive rebuilding of both the main house (which became the court) and the buildings at the back surrounding the courtyard, which became the prison. The cell blocks were gradually extended and the complex as we see it today was finished in 1910.

But its real dark days began with the Third Reich. From 1935 to 1940 a so-called "Erbgesundheitsgericht" (roughly 'hereditary health court') was set up here, which implemented racial laws the Nazis had passed. These laws, based on the twisted theories of 'eugenics', led to plenty of cases of enforced sterilization (see also under Schloss Hartheim). The prison part of the site, meanwhile, became one of the many places where the Nazis incarcerated political prisoners, right up until the end of WWII and the collapse of the Third Reich.

But that was not the end of the dark times for this particular place. Quite the contrary. In the summer of 1945 the prison building was seized by the Soviet authorities, who then used it for their own kind of persecution and atrocities. The NKVD, and subsequently the KGB, first went after real or alleged (former) Nazis who had been complicit in war crimes or who were accused of being part of the alleged Nazi underground resistance organisation "werewolves" (most of those arrested and tried, often just teenagers, were quite innocent, though). But soon the KGB shifted their attention to anyone who seemed to be critical of the new communist regime or was alleged to have been a spy for the West. Many were tortured to obtain "confessions", tried by tribunals without any defence and then either sentenced to death or sent to gulags. What was going on inside was well hidden from the general public (you can't see the cell blocks from the road and the entrance was cordoned off). Yet the place soon acquired the cynical nickname "Lindenhotel" (cf. "Hanoi Hilton").

After the Peaceful Revolution in the GDR had led to the fall of the Berlin Wall and the dissolution of the Stasi was imminent, all remaining prisoners were released. On 5 December 1989, protesters stormed the Stasi HQ in Potsdam (as they also did elsewhere – cf. for instance Runde Ecke) and also the Lindenstraße prison complex. But by then the local Stasi had managed to destroy almost all files, except for the prisoners' register.
After the prison was closed, the main building on Lindenstraße was from January 1990 declared a "house of democracy", with newly formed political parties using the rooms for their offices in the run-up to the first ever free elections in the GDR, which were held in March 1990. The prison was preserved as a historical site and declared a memorial in 1995. The current permanent exhibition was opened in 2010.

What there is to see: From the street level outside the prison you'd never think that anything so sinister could be behind these rather pretty walls. The only slight indication that this may not be a "normal" house is the fact that the windows at ground level are barred (though that alone doesn't have to mean anything – there are lots of other buildings too that are secured in that way without having anything to do with imprisonment). It could be just like any other of Potsdam's grand edifices. But once inside and at the ticket and information desk you get the first few glimpses through the windows into the courtyard and see the towering, sinister façades of the barred-window cell blocks.

The prescribed route through the commodified parts of the building with its different exhibitions, however, begins in the old front building. For orientation you are given a fold-out plan that outlines which parts of the buildings and the exhibitions pertain to which parts of its history. Throughout the various corridors you can find boxes with more of these, in case you lose your original copy (very thoughtful!). NOTE that all written texts as well as all audiovisual material are in German only. So if you don't have a fairly good grasp of the language you'll miss out on the information and personal stories. If you want these, then maybe you should consider arranging a guided tour (see below). But even without the information, you can still get a lot out of this place – not so much from the individual exhibition rooms, but from the original architecture and the generally oppressive atmosphere in the cell blocks. Some of the info is enhanced by artefacts and/or photos, so you might have a good guess or two at the backgrounds here and there as well.

Before you get to the dark phases of the history of this place, however, you first receive a bit of prehistory, as it were, mainly about its uses before it became a prison. Also still in the old court house/administrative part you can see a couple of GDR-era relics. One is the former reception room near the entrance that has been left furnished as it was back then. The other is a reconstructed interrogation room, complete with a tape recorder, an ashtray and a pack of cigarettes (those were the days, of course, when both the Stasi interrogator and the interrogated inmate would puff away during the interviewing…). You can listen to an audio recording of an actual interrogation from 1989. Then follows a section, still in the old front building, about the Nazi period in Potsdam, and in particular the use of this place as a 'court of racial hygiene' and the topic of enforced sterilizations (for which victims have never been compensated, neither in the East nor in the West!). You then enter the actual cell blocks at the ground-floor level.
The first ten cells contain individual aspects and personal stories still from that period of the Third Reich, mostly in the form of text-and-photo panels on the wall, occasionally accompanied by additional audio recordings you can listen to on headphones, but there's also the first of a series of audiovisual stations with a video screen playing an interview with a survivor. The personal stories vary greatly in nature, but all of them convey the utter disregard for humanity displayed by the Nazis. Apart from the personal angles, the general topics of resistance against the Nazis and the judiciary in the Third Reich, especially its "Volksgerichtshof" ('People's Court'), are illustrated, as well as the issue of capital punishment. Some of the photo images speak sufficiently for themselves here ...

The next big thematic block covers the period of 1945 to 1952, when the Soviet NKVD used the prison to incarcerate and torture its victims. It gets visually grimmer here, since the cells in this part have not been given the same refurbishment and fresh coat of white paint as the previous ones. Here you still get the original faded greens and browns. The many personal stories are naturally of the grimmest sort too. Even more audio and video stations are positioned here than in the previous section. At the end of this corridor you can take the steps down to the basement level, where you can see some preserved (or reconstructed?) cells from the early NKVD period, i.e. the most basic kind of prison cells imaginable: just a wooden plank bed and nothing else. One of the original cell doors has "25 years" scratched into it by one of the prisoners – that would have been the sentence. Many of those condemned to forced labour were sent to the gulag of Vorkuta in northern Russia. On display are also a few artefacts from that gulag, such as felt boots and a woolly hat.

Back at ground level you can use a side entrance for a brief excursion into the courtyards. Here you can see the garages (with one typical Stasi prisoner transport van on display), one of the open-air exercise cells and a memorial sculpture in the northern, larger courtyard. Moreover you get a good visual impression of the whole prison architecture, including searchlights at the top of the walls, barbed wire, spiky barriers and all. On the fourth floor of the central cell block I spotted an odd juxtaposition with all those grim elements: windows that looked like stained glass, as in a chapel! On my way out I asked the museum warden on duty at the ticket desk and he confirmed this. However, this chapel is not publicly accessible; the exhibition only goes as far up as the second floor, and the chapel is now just used as a storage room.

The circuit then continues up the southern main staircase to get you to the first floor. Here the exhibition carries on with a detailed look at the longest period in the prison's history – that as a Stasi remand prison in the GDR. Panels provide an overview of the structure, hierarchies and inner workings of the Stasi as well as its methods (for all this see also under Stasi Museum Berlin or Runde Ecke in Leipzig!). You can see a reconstructed "Effektenkammer" (storage room for inmates' personal belongings, all of which they had to forfeit for the duration of their imprisonment), prisoners' clothes, as well as a photo booth with a rotatable chair for taking mugshots (portrait, left and right) of new arrivals.
There's also a staff room still furnished in that stuffy old GDR interior design of the 60s and 70s, as well as different cells from different periods. Interspersed are, yet again, various personal stories, partly in written form, partly in audio/visual form. Also covered are stories of successful as well as failed attempts at fleeing to the West. Some of the photos are sufficient visual evidence of the ingenuity that went into quite a few of these attempts, including one using a "Trojan Cow"! (It failed.)

Along the cell block corridors note the red lights on the wall. Back then these would have come on when prisoners were taken from the cells to interrogation or during whatever other movements of inmates. This was to signal that no other prisoner was to be let out of his/her cell at the same time. Inmates were not allowed to see each other! (See also Hohenschönhausen.) Running along the walls was also an emergency wire with which the guards could have triggered an alarm at any point. Also note the fact that all the toilet cisterns are outside the cells, along the corridor walls. This was presumably so in order to avoid having any overhead pipes inside the cells, which desperate inmates could have used to hang themselves from.

At the end of the corridor on this level you come back to a part of the front building facing Lindenstraße, and in two of the larger rooms here is an additional exhibition covering the topic of the Peaceful Revolution in the GDR in 1989/90 that led to the fall of the Berlin Wall, the collapse of the Eastern Bloc and the eventual reunification of Germany. While that section is still part of the permanent exhibition, there was at the time of my visit (April 2017) also a special temporary exhibition on. This was a photo exhibition showing images of the brutal crushing of the protests in Gwangju in South Korea in 1980 by the military dictatorship there. In this exhibition, the texts commenting on the photos did come with English translations. By the time you read this, though, this particular temporary exhibition will long have finished, and whether all temporary exhibitions are bilingual I cannot say.

Carrying on with the regular permanent exhibition you then make your way to the second floor. The first few cells on this level are left empty – and at the time of my visit the only commodification was projections of quotes – in English! I wasn't sure whether that was still related to the temporary exhibition or actually part of the regular museum design here. It was rather odd in any case. Also on this floor is yet another GDR-era staff room with the typical interior design and furniture, including a MuFuTi, which is short for "Multifunktionstisch" ('multi-functional table'), i.e. a special table that can be extended and adjusted in height. (They also existed in the West, but "MuFuTi" was the special GDR jargon for this – as featured in the movie "Sonnenallee".) Amongst the 1970s lamps on this table lie various folders with extra information on individual cases. Some of these – breaking the otherwise consistently monolingual nature of the exhibition – are marked "English". Further down the corridor you come to a washroom as well as a "Strafzelle" or 'punishment cell' for inmates who had broken the prison rules. This was a special cell that was subdivided by bars into a sanitary, a sleeping and a day area. The latter had no furniture, i.e. victims had to spend the day standing and were not free to go to the toilet as and when they wished.
The cell was also darkened for most of the time. Some of the other cells have been done up to look like they did in the early phases of the 1950s/60s – i.e. almost as Spartan as the KGB-era basement cells – as well as in later phases, when proper toilets and washbasins were installed and the plank beds had bedding and blankets. Then you make your way back to the main staircase and down to the exit. Optionally you could have another look around the courtyards before leaving.

All in all, this is one of the visually most stunning ex-prisons I have ever visited. And it's worth visiting for that alone, even if you do not understand German well enough to make use of the information panels and audiovisual material. But if you do, you also get an incredibly comprehensive impression of what went on in this place during the three authoritarian phases: the Nazi period, the Soviet occupation and all through the GDR era. Particularly moving are many of the personal stories portrayed at all stages of these phases. Highly recommended! It's a shame that none of the texts come with translations into English (or other languages) and that the videos do not have subtitles. International visitors thus have to miss out on all this – or have to arrange a guided tour. However, that may well change in the future. There's a general trend in Germany to make more and more memorial museums at least bilingual. So it may come to this place at some point too.

Location: right in the very heart of the Old Town of Potsdam. The address is Lindenstraße 54/55, 14467 Potsdam, Brandenburg, Germany.

Access and costs: easy to get to; cheap. Thanks to its very central location, the site couldn't be much easier to get to. From within central Potsdam it's comfortably walkable. The ex-prison is just steps from Potsdam's central pedestrianized tourist street Brandenburger Straße, on the block going up to Gutenbergstraße. If you need public transport, e.g. if you're coming from the central station (and Berlin), then tram line 91 gets you the closest, namely to the stop Dortusstraße, a block south of Brandenburger Straße. Opening times: Tuesday to Sunday from 10 a.m. to 6 p.m.; closed Mondays. Admission: a mere 2 EUR (concession 1 EUR). Regular guided tours (in German) take place every Saturday at 2 p.m.; you can also book special guided tours for groups (3 EUR per person; for groups under 10 participants a flat rate of 30 EUR is charged – see the small pricing sketch after the photo list below), and these can also be done in English or French (contact: fuehrungen(at)gedenkstaette-lindenstrasse.de). That may well be worth considering, since the exhibition parts of the site are almost entirely in German only!

Time required: quite a lot, especially if you can read German, in which case you'll need a minimum of two hours, quite possibly longer. If you can't read German and mainly come for the visual impressions, you should still allocate an hour or so just for exploring the site. Guided tours last ca. 90 minutes, up to an hour longer if they include talks with an eyewitness (i.e. former inmates).

Combinations with other dark destinations: see under Potsdam. There is another prison memorial site in Potsdam, in its northern suburb Nauener Vorstadt, which was a KGB remand prison until the Soviets departed. And unlike the Stasi prison memorial at Lindenstraße, this one at Leistikowstraße is also commodified for international guests (most is bilingual, in German and English, and much of it also in Russian!).

Combinations with non-dark destinations: some of Potsdam's prime mainstream tourism attractions are quite nearby!
The main pedestrianized street (Brandenburger Straße) with all the usual souvenir shops, touristy restaurants and fast-food joints is literally just round the corner to the south. Furthermore there's the Dutch quarter a short distance to the east, the Jägertor a mere few steps to the north, the Brandenburg Gate (yes, Potsdam has one too, not just Berlin) to the west, and also the vast complex of the World Heritage Site of Sanssouci just a short walk further west still.

Photos:
- Lindenstraße 01 - inconspicuous facade facing the street
- Lindenstraße 02 - entrance
- Lindenstraße 03 - once through the gate you see the cell blocks
- Lindenstraße 04 - GDR-era reception room
- Lindenstraße 05 - GDR-era electric installation
- Lindenstraße 06 - Stasi interrogation room
- Lindenstraße 07 - with cigarettes
- Lindenstraße 08 - mug-shot photography room
- Lindenstraße 09 - some mindless jokester must have left this 00 addition
- Lindenstraße 10 - in the cell block
- Lindenstraße 11 - corner
- Lindenstraße 12 - audio-visual station
- Lindenstraße 13 - with stools
- Lindenstraße 14 - cellar
- Lindenstraße 15 - courtyard and garage
- Lindenstraße 16 - Stasi prisoner van in the garage
- Lindenstraße 17 - open-air exercise cell
- Lindenstraße 18 - monument in the courtyard
- Lindenstraße 19 - search light at the top
- Lindenstraße 20 - back inside the cell block
- Lindenstraße 21 - one level up
- Lindenstraße 22 - view into the courtyard
- Lindenstraße 23 - cell doors
- Lindenstraße 24 - installations on the second floor in English
- Lindenstraße 25 - glass bricks
- Lindenstraße 26 - GDR-era staff room
- Lindenstraße 27 - some case files in English
- Lindenstraße 28 - wash room
- Lindenstraße 29 - punishment cell
- Lindenstraße 30 - 1960s cell
- Lindenstraße 31 - more modern cell
- Lindenstraße 32 - peeking in
- Lindenstraße 33 - locked
- Lindenstraße 34 - one last look into the courtyard
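As a small aside on the group-tour pricing quoted under "Access and costs" above (3 EUR per person, with a 30 EUR flat rate for groups of fewer than 10), here is a minimal sketch of that rule in Python. It is purely illustrative – the function name and structure are my own, and the prices are simply those quoted above, which may of course change; check with the memorial before booking.

```python
def guided_tour_cost(participants: int) -> float:
    """Estimated cost in EUR of a booked group tour, using the figures quoted
    above: 3 EUR per person, with a 30 EUR flat rate for groups under 10.
    Illustrative only; current prices should be confirmed with the memorial."""
    if participants < 10:
        return 30.0
    return 3.0 * participants

# e.g. guided_tour_cost(8) -> 30.0, guided_tour_cost(12) -> 36.0
```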
Utilitarianism (by John Stuart Mill): With Related Remarks from Mill's Other Writings
Hackett Publishing Company, 2017
This edition of Utilitarianism supplements the text of Mill's classic essay with 58 related remarks carefully selected from Mill's other writings, ranging from his treatise on logic to his personal correspondence. In these remarks, Mill comments on specific passages of Utilitarianism, elaborates on topics he handles briefly in Utilitarianism, and discusses additional aspects of his moral thought. Short introductory comments accompany the related remarks, and an editor's introduction provides an overview of Utilitarianism crafted specifically to enhance accessibility for first-time readers of the essay.
"Eggleston has produced easily the best edition of Utilitarianism available. By conveniently including so many of the relevant passages from supplementary works, all organized for ease of reference, scholars and students alike will now have at their fingertips the materials needed to make sense of Mill's classic text. This is important not just for an accurate understanding of Mill's own moral and political philosophy, but for a proper appreciation of utilitarianism as a leading moral tradition." —Piers Norris Turner, Associate Professor, The Ohio State University
"Some of the ambiguity of Utilitarianism can be resolved, or at least debated, by attention to Mill's other writings. Eggleston's edition provides the primary sources for such discussion in its endnotes. A serious teacher of Utilitarianism should use this edition." —Henry West, Professor Emeritus, Macalester College
Moral Theory and Climate Change: Ethical Perspectives on a Warming Planet
The Cambridge Companion to Utilitarianism
Cambridge University Press, 2014
John Stuart Mill and the Art of Life
Oxford University Press, 2011
28. "Procreation, Carbon Tax, and Poverty: An Act-Consequentialist Climate-Change Agenda"
Moral Theory and Climate Change: Ethical Perspectives on a Warming Planet, edited by Dale E. Miller and Ben Eggleston (Routledge, 2020), pp. 58–77
A book chapter (about 9,000 words, plus references) presenting an act-consequentialist approach to the ethics of climate change. It begins with an overview of act consequentialism, including a description of the view's principle of rightness (an act is right if and only if it maximizes the good) and a conception of the good focusing on the well-being of sentient creatures and rejecting temporal discounting. Objections to act consequentialism, and replies, are also considered. Next, the chapter briefly suggests that act consequentialism could reasonably be regarded as the default moral theory of climate change, in the sense that a broadly act-consequentialist framework often seems implicit in both scholarly and casual discussions of the ethics of climate change. The remainder of the chapter explores three possible responses to the threat of climate change: having fewer children to reduce the number of people emitting greenhouse gases; taxing greenhouse-gas (GHG) emissions (commonly called a "carbon tax") to discourage GHG-emitting behavior; and reducing poverty to lessen personal, familial, and community vulnerability to the harms of climate change.
27. "Consequentialism and Respect: Two Strategies for Justifying Act Utilitarianism"
Utilitas, vol. 32, no. 1 (March 2020), pp. 1–18
Most arguments in support of act utilitarianism are elaborations of one of two basic strategies. One is the consequentialist strategy.
This strategy relies on the consequentialist premise that an act is right if and only if it produces the best possible consequences and the welfarist premise that the value of a state of affairs is entirely determined by its overall amount of well-being. The other strategy is based on the idea of treating individuals respectfully and resolving conflicts among individuals in whatever way best conforms to that idea. Although both of these strategies can be used to argue for the principle of act utilitarianism, they are significantly different from each other, and these differences cause them to have different strengths and weaknesses. It emerges that which argumentative strategy is chosen by a proponent of act utilitarianism has a large impact on which virtues her view has and which objections it is vulnerable to. 26. “Toward a Unified Theory of Morality: An Introduction to Part One of Reasons and Persons” Derek Parfit’s Reasons and Persons: An Introduction and Critical Inquiry, edited by Andrea Sauchelli (Routledge, 2020), pp. 13–29 A book chapter (about 8,000 words, plus references) summarizing Part One of Reasons and Persons, with particular attention to the Self-interest Theory, Consequentialism, Common-Sense Morality, and how critical scrutiny of Consequentialism and Common-Sense Morality points the way toward a unified theory of morality. 25. “Decision Theory” The Cambridge History of Moral Philosophy, edited by Sacha Golob and Jens Timmermann (Cambridge University Press, 2017), pp. 706–717 A book chapter (about 4,000 words, plus references) on decision theory in moral philosophy, with particular attention to uses of decision theory in specifying the contents of moral principles (e.g., expected-value forms of act and rule utilitarianism), uses of decision theory in arguing in support of moral principles (e.g., the hypothetical-choice arguments of Harsanyi and Rawls), and attempts to derive morality from rationality (e.g., the views of Gauthier and McClennen). 24. “Mill’s Moral Standard” A Companion to Mill, edited by Christopher Macleod and Dale E. Miller (John Wiley & Sons, Inc., 2017), pp. 358–373 A book chapter (about 7,000 words, plus references) on the interpretation of Mill’s criterion of right and wrong, with particular attention to act utilitarianism, rule utilitarianism, and sanction utilitarianism. Along the way, major topics include Mill’s thoughts on liberalism, supererogation, the connection between wrongness and punishment, and breaking rules when doing so will produce more happiness than complying with them will. 23. “The Number of Preference Orderings: A Recursive Approach” The Mathematical Gazette, vol. 99, no. 544 (March 2015), pp. 21–32 This article discusses approaches to the problem of the number of preference orderings that can be constructed from a given set of alternatives. After briefly reviewing the prevalent approach to this problem, which involves determining a partitioning of the alternatives and then a permutation of the partitions, this article explains a recursive approach and shows it to have certain advantages over the partitioning one. 22. “Accounting for the Data: Intuitions in Moral Theory Selection” Ethical Theory and Moral Practice, vol. 17, no. 4 (August 2014), pp. 761–774 Reflective equilibrium is often credited with extending the idea of accounting for the data from its familiar home in the sciences to the realm of moral philosophy. 
But careful consideration of the main concepts of this idea – the data to be accounted for and the kind of accounting it is appropriate to expect of a moral theory – leads to a revised understanding of the “accounting for the data” perspective as it applies to the discipline of moral theory selection. This revised understanding is in tension with reflective equilibrium and actually provides more support for the alternative method of moral theory selection that I have called ‘practical equilibrium’. 21. “Act Utilitarianism” The Cambridge Companion to Utilitarianism, edited by Ben Eggleston and Dale E. Miller (Cambridge University Press, 2014), pp. 125–145 The Cambridge Companion to Utilitarianism, edited by Ben Eggleston and Dale E. Miller (Cambridge University Press, 2014), pp. 1–15 The Bloomsbury Encyclopedia of Utilitarianism, edited by James E. Crimmins (Bloomsbury Publishing, 2013), pp. 6–8 18. “Paradox of Happiness” The International Encyclopedia of Ethics, edited by Hugh LaFollette (Blackwell Publishing Ltd., 2013), pp. 3794–3799 17. “Rejecting the Publicity Condition: The Inevitability of Esoteric Morality” The Philosophical Quarterly vol. 63, no. 250 (January 2013), pp. 29–57 It is often thought that some version of what is generally called the publicity condition is a reasonable requirement to impose on moral theories. In this article, after formulating and distinguishing three versions of the publicity condition, I argue that the arguments typically used to defend them are unsuccessful and, moreover, that even in its most plausible version, the publicity condition ought to be rejected as both question-begging and unreasonably demanding. The Encyclopedia of Applied Ethics, 2nd edition, edited by Ruth Chadwick (Elsevier, 2012), vol. 4, pp. 452–458 15. “Rules and Their Reasons: Mill on Morality and Instrumental Rationality” John Stuart Mill and the Art of Life, edited by Ben Eggleston, Dale E. Miller, and David Weinstein (Oxford University Press, 2011), pp. 71–93 This chapter addresses the question of what role Mill regards rules as playing in the determination of morally permissible action by drawing on his remarks about instrumentally rational action. First, overviews are provided of consequentialist theories and of the rule-worship or incoherence objection to rule-consequentialist theories. Then a summary is offered of the considerable textual evidence suggesting that Mill’s moral theory is, in fact, a rule-consequentialist one. It is argued, however, that passages in the final chapter of A System of Logic suggest that Mill anticipates and endorses the rule-worship or incoherence objection to rule-consequentialist theories. The chapter concludes by exploring some ways in which this tension in Mill’s thought might be resolved. John Stuart Mill and the Art of Life, edited by Ben Eggleston, Dale E. Miller, and David Weinstein (Oxford University Press, 2011), pp. 3–18 13. “Practical Equilibrium: A Way of Deciding What to Think about Morality” Mind vol. 119, no. 475 (July 2010), pp. 549–584 Practical equilibrium, like reflective equilibrium, is a way of deciding what to think about morality. It shares with reflective equilibrium the general thesis that there is some way in which a moral theory must, in order to be acceptable, answer to one’s moral intuitions, but it differs from reflective equilibrium in its specification of exactly how a moral theory must answer to one’s intuitions. 
Whereas reflective equilibrium focuses on a theory’s consistency with those intuitions, practical equilibrium also gives weight to a theory’s approval of one’s having those intuitions. 12. “The Problem of Rational Compliance with Rules” The Journal of Value Inquiry vol. 43, no. 1 (March 2009), pp. 19–32 The problem of rational compliance with rules is the problem of how it can be rational for an agent to follow a rule with a purely consequentialist justification in a case in which she knows that she can do more good by breaking it. This paper discusses two ways in which responses to this problem can fail to address it, using Alan Goldman’s article “The Rationality of Complying with Rules: Paradox Resolved” as a case study. 11. “Mill’s Misleading Moral Mathematics” Southwest Philosophy Review vol. 24, no. 1 (January 2008), pp. 153–161 The debate over whether Mill is better read as an act or a rule utilitarian began in the 1950s and has continued ever since. We argue that in certain passages in which Mill initially appears to be endorsing the act-utilitarian moral theory, he is really doing something quite different. Insofar as he is endorsing any particular view at all, it is not act utilitarianism – nor is it even a moral theory. Instead, it is a view about how to assess individual actions that informs, but does not translate without modification into, Mill’s rule-utilitarian moral theory. 10. “Genetic Discrimination in Health Insurance: An Ethical and Economic Analysis” The Human Genome Project in College Curriculum: Ethical Issues and Practical Strategies, edited by Aine Donovan and Ronald M. Green (University Press of New England, 2008), pp. 46–57 Current research on the human genome holds enormous long-term promise for improvements in health care, but it poses an immediate ethical challenge in the area of health insurance, by raising the question of whether insurers should be allowed to take genetic information about customers into account in the setting of premiums. It is widely held that such discrimination is immoral and ought to be illegal, and the prevalence of this view is understandable, given the widespread belief, which I endorse, that every individual in a society as affluent as ours has a basic right to affordable health care. But prohibiting genetic discrimination in health insurance is not an effective way to protect this right. On the contrary, I argue that because of the nature of insurance as a product sold in a competitive market, such a prohibition is misguided, and its worthy aims must, instead, be pursued through reforms in our country’s system of publicly provided health care. 9. “Conflicts of Rules in Hooker’s Rule-Consequentialism” Canadian Journal of Philosophy vol. 37, no. 3 (September 2007), pp. 329–349 In his recent book Ideal Code, Real World: A Rule-consequentialist Theory of Morality, Brad Hooker recognizes that his theory, like most rule-consequentialist theories, must answer the question of how agents are to resolve conflicts that may arise among the rules his theory endorses. Here I examine Hooker’s answer to this question, and I argue that his answer fails to solve a serious problem that arises from such conflicts. 8. “India House Utilitarianism: A First Look” Southwest Philosophy Review vol. 23, no. 1 (January 2007), pp. 39–47 Among the most thoroughly debated interpretive questions about the moral philosophy of John Stuart Mill is whether he should be understood as an act utilitarian or as an ideal-code rule utilitarian. 
We argue that neither of these interpretations fits the textual evidence as well as does a novel view we call ‘India House utilitarianism’. On this view, an act is right if and only if it is not forbidden by the code of rules the agent is justified in believing to be the one, of those she can reasonably be expected to be aware of, whose general acceptance would produce the most happiness. 7. “Reformulating Consequentialism: Railton’s Normative Ethics” Philosophical Studies vol. 126, no. 3 (December 2005), pp. 449–462 A critical examination of the chapters on normative ethics in Peter Railton’s Facts, Values, and Norms: Essays Toward a Morality of Consequence. It is argued that Railton’s theory of sophisticated consequentialism effectively handles issues of pollution and moral dilemma that Railton discusses, and that Railton’s more recent proposal of “valoric consequentialism,” if coupled with a non-act-utilitarian standard of rightness of the kind Railton discusses, is vulnerable to objections to which sophisticated consequentialism is immune. 6. “The Ineffable and the Incalculable: G. E. Moore on Ethical Expertise” Ethics Expertise: History, Contemporary Perspectives, and Applications, edited by Lisa Rasmussen (Springer, 2005), pp. 89–102 According to G. E. Moore, ethical expertise requires abilities of several kinds: (1) the ability to factor judgments of right and wrong into (a) judgments of good and bad and (b) judgments of cause and effect, (2) the ability to use intuition to make the requisite judgments of good and bad, and (3) the ability to use empirical investigation to make the requisite judgments of cause and effect. Moore’s conception of ethical expertise is thus extremely demanding, but he supplements it with some very simple practical guidance. 5. “Procedural Justice in Young’s Inclusive Deliberative Democracy” Journal of Social Philosophy vol. 35, no. 4 (Winter 2004), pp. 544–549 In her book Inclusion and Democracy, Iris Marion Young offers a defense of a certain model of deliberative democracy and argues that political institutions that conform to this model are just. I argue that Young gives two contradictory accounts of why such institutions are just, and I weigh the relative merits of two ways in which this contradiction can be resolved. 4. “Everything Is What It Is, and Not Another Thing: Comments on Austin” Southwest Philosophy Review vol. 19, no. 2 (July 2003), pp. 101–105 In his paper defending ethical intuitionism, Michael W. Austin offers a reply to W. D. Hudson’s objection that to say that one knows something by intuition is not really to explain how one knows it at all. I press Hudson’s objection against Austin and argue that the responses suggested by Austin’s paper are either inadequate or unavailable to a genuine ethical intuitionist. 3. “Does Participation Matter? An Inconsistency in Parfit’s Moral Mathematics” Utilitas vol. 15, no. 1 (March 2003), pp. 92–105 Consequentialists typically think that the moral quality of one’s conduct depends on the difference one makes. But consequentialists may also think that even if one is not making a difference, the moral quality of one’s conduct can still be affected by whether one is participating (even if only ineffectually, or redundantly) in an endeavor that does make a difference. Derek Parfit discusses this issue – the moral significance of what I call ‘participation’ – in the chapter of Reasons and Persons that he devotes to what he calls ‘moral mathematics’. 
In my paper, I expose an inconsistency in Parfit’s discussion of moral mathematics by showing how it gives conflicting answers to the question of whether participation matters. I conclude by showing how an appreciation of Parfit’s error sheds some light on consequentialist thought generally, and on the debate between act and rule consequentialists specifically. 2. “The Toxin and the Tyrant: Two Tests for Gauthier’s Theory of Rationality” Twentieth-Century Values (Conference on Value Inquiry, 2002, published online), edited by Kenneth F. T. Cust 1. “Should Consequentialists Make Parfit’s Second Mistake? A Refutation of Jackson” Australasian Journal of Philosophy vol. 78, no. 1 (March 2000), pp. 1–15 Frank Jackson claims that consequentialists should hold the view that Derek Parfit labels the second ‘mistake in moral mathematics’, which is the view that “If some act is right or wrong because of … effects, the only relevant effects are the effects of this particular act.” But each of the three arguments that Jackson offers is unsound. The root of the problem is that in order to argue for the conclusion Jackson aims to establish (that consequentialists should not regard the second “mistake” as a mistake), one must presuppose an overly narrow, and hence distorted, understanding of what consequentialism is. 10. Review of Concern, Respect, and Cooperation, by Garrett Cullity Australasian Journal of Philosophy vol. 97, no. 4 (December 2019), pp. 836–839 9. Review of Capital in the Twenty-First Century, by Thomas Piketty Utilitas vol. 27, no. 2 (June 2015), pp. 254–256 In this review, I recount some of the media attention this book received when published, summarize its main points, and recommend that readers skip all but one chapter of it and use a journal article co-authored by Piketty as a substitute for the rest of it. 8. Review of An Introduction to Decision Theory, by Martin Peterson Notre Dame Philosophical Reviews, January 5, 2010 7. Review of The Demands of Consequentialism, by Tim Mulgan Utilitas vol. 21, no. 1 (March 2009), pp. 123–125 6. Review of An Introduction to Mill’s Utilitarian Ethics, by Henry West Notre Dame Philosophical Reviews, June 5, 2004 5. Review of Practical Rules: When We Need Them and When We Don’t, by Alan H. Goldman Utilitas vol. 16, no. 1 (March 2004), pp. 113–115 4. Review of Sorting Out Ethics, by R. M. Hare Mind vol. 109, no. 436 (October 2000), pp. 930–933 3. Review of Routledge Philosophy GuideBook to Mill on Utilitarianism, by Roger Crisp; Utilitarianism, by Geoffrey Scarre; and Contemporary Ethics: Taking Account of Utilitarianism, by William H. Shaw Mind vol. 109, no. 436 (October 2000), pp. 873–879 2. Review of Ethics: The Big Questions, edited by James P. Sterba APA Newsletter on Teaching Philosophy vol. 99, no. 2 (Spring 2000), pp. 273–274 1. Review of Welfare, Happiness, and Ethics, by L. W. Sumner International Journal of Philosophical Studies vol. 7, no. 2 (June 1999), pp. 270–272 Self-Defeat, Publicity, and Incoherence: Three Criteria for Consequentialist Theories My dissertation, which I defended on December 18, 2001, is in the electronic holdings of the University of Pittsburgh library system. Here’s its page there, including an abstract of it: Once you go to that page, which will open in a new window, you can click on a link that will download the file containing my dissertation. 
I’ve also downloaded the file myself, and put a copy of it below, as a backup to the copy in Pitt’s holdings: Chapter IV of my dissertation, on the publicity condition, forms the basis for much of my article “Rejecting the Publicity Condition: The Inevitability of Esoteric Morality,” published in The Philosophical Quarterly vol. 63, no. 250 (January 2013), pp. 29–57. A link to this article is above, in the “Papers” section of this page.
Ninety-nine percent of the time, cancer is a malady tied to age. The cells in our bodies sometimes lose their battles against the toxins we're exposed to, the sedentary lifestyles we lead, the viruses we contract as we go about our adult lives, and our genetic predispositions—and proliferate uncontrollably. The approximately one percent of remaining cancers occur in children. It's a particularly cruel reality when infants, toddlers, and teenagers draw the proverbial short straw despite their comparatively pristine anatomies. And yet, the world of pediatric cancer often feels more hopeful than the adult equivalent. That's in part because kids are so heartwarmingly irrepressible. But it's also because young bodies tolerate aggressive chemotherapy far better than older ones; survival rates among kids are higher; and the unfathomable unfairness of the situation has compelled experts to come up with novel treatments as fast as they can. In fact, hospitals right here in the Denver metro area have become important players in this race to save young lives. We checked in with local researchers, practitioners, and patients to gain a better understanding of how pediatric cancer differs from the adult iteration. With their help, we break down—from diagnosis to treatment to survivorship—12 reasons why we can be optimistic. Reason No. 1: Kids Get Different Types Of Cancer Than Adults The oncology world uses a lot of unfamiliar terms, most of which look like alphabet soup to the layperson. The jargon is the same no matter how old the patient is, but the types of cancer are not. It's easy to recall two of the more common pediatric cancers if you just remember the two B's: cancers of the blood and brain. Pediatric tumors also develop in the bones, the kidneys, the liver, the nervous system, the connective tissue between organs, and the lymphatic system. "Our patient population here very much represents that," says Dr. Lia Gore, head of the pediatric hematology, oncology, and bone marrow transplant program at Children's Hospital Colorado. Younger folks can certainly get so-called adult cancers—the most prevalent malignancies in grown-ups are found in the prostate, breasts, lungs and bronchi, and colon and rectum—but it's relatively rare. And that's a good thing. The relative upside to some of the big, bad B's and their evil cohorts is that many of the treatments for these pediatric-specific cancers have produced spectacularly successful results. (Although many adult cancers have high five-year survival rates, pediatric cancer survival rates are considerably better.) Colorado youth with cancers of the brain and nervous system have a 73 percent survival rate of five years or longer. And 88 percent of kids with either Hodgkin's disease or non-Hodgkin's lymphoma in the Centennial State live for at least five years after diagnosis. (National numbers, which also use the five-year survival rate standard, are comparable.)
Maybe most critically, acute lymphoblastic leukemia (ALL), a blood disease that represents the most common type of leukemia in children, responds well to both chemotherapy and newer immunotherapy drugs. Oncologists have made such impressive gains combating ALL, in fact, that about 90 percent of patients are still in remission after 10 years. “My practice is about treating complex, rare things every day,” says Dr. Jennifer Bruny, the director of surgical oncology at Children’s Hospital Colorado. “Our breadth of experience comes from the fact that we see things that are often just a little bit different from the last one we saw.” Reason No. 2: Children Tolerate Chemotherapy Better Remember that Pepto-Bismol jingle that outlined the symptoms of GI distress? (“Nausea, heartburn, indigestion, upset stomach, diarrhea!”) The song reads like a list of side effects from chemo, drugs that can kill rapidly dividing cells. Kids have an advantage over adults in this arena; although they’re not immune to the uncomfortable reactions, they can typically handle more chemo before they fall ill. According to Dr. Brad Ball, a pediatric hematologist and oncologist at Rocky Mountain Hospital for Children, youth receive more of the treatment per pound of body weight than grown-ups. Pint-size humans also get the drugs more frequently, sometimes as often as once a day. “Tell medical [adult] oncologists to give weekly chemo, and they’ll laugh in your face,” says Dr. Anna Franklin, an oncologist at Children’s Hospital Colorado. Kids rebound from the lifesaving toxins more quickly than adults because they don’t tend to have additional health issues—like lung or kidney problems—and their developing bodies adapt well to novel stimuli. Shashanah Woodward usually employs stick sunscreen to protect her three sons’ faces. But one day in August 2017, the Parker resident ran out while at the neighborhood pool and had to use the squeeze-bottle kind. As she slathered it on, she discovered a lump on her five-and-a-half-year-old’s cheek. Brenden’s young age—and the accidental but early detection of the tumor—proved to be a blessing as he underwent 40 weeks of chemo and 20 days of radiation at Rocky Mountain Hospital for Children to attack a rhabdomyosarcoma, a connective tissue tumor. Brenden has tolerated the chemo so well that he’s been able to go to kindergarten on a regular basis and even take taekwondo classes. And his prognosis looks good; about 70 percent of kids with this type of tumor will have no signs of tumor recurrence five years after diagnosis. The Little Things They may be mature beyond their years, but kids fighting cancer are still kids—a fact not lost on local hospitals. Riding In Style: Kids don’t always use wheelchairs to scoot down the halls at Children’s Hospital Colorado; instead, they often get pulled around in red Radio Flyer wagons retrofitted with special poles for their IVs. This tradition dates to the 1950s, when chemotherapy pioneer Dr. Sidney Farber began hauling his wee patients around in mini go-carts at a Boston children’s hospital. Blast off: If Rocky Mountain Hospital for Children patients develop lesions around their tumor sites (a potentially painful problem), the prescription is breathing pure oxygen in a super-pressurized room, which helps the body regrow blood vessels. The cool part: RMHC’s two hyperbaric chambers look a lot like spaceships. 
Sweet Sleep: When Children's patients have to go under anesthesia for treatment, they get to pick a Lip Smacker flavor—maybe Bubble Gum Ball or Orange Cream Delight?—to line the insides of their masks. Operating Theater: Should a patient require surgery at RMHC, she can choose a movie to play on the 12-foot screen in the operating room before she falls asleep—and in her room afterward. Under The Microscope For more than seven decades, childhood acute lymphoblastic leukemia (ALL) has been the test subject for many pioneering treatments. Here, some of the highlights. 1947: The father of modern chemotherapy, pediatric pathologist Sidney Farber, gives a pediatric ALL patient a drug called aminopterin, producing one of the earliest observed remissions of cancer. 1967: A new combination of several chemotherapy drugs and radiation that targets the central nervous system prompts about half of the ALL patients in a clinical trial to go into remission. 1998: A study at St. Jude Children's Research Hospital in Tennessee discovers that personalized doses of chemotherapy, based on the patients' abilities to flush out the drug, improve outcomes for ALL patients. 2014: A clinical trial sees remission in 90 percent of ALL cases after employing a new immunotherapy drug that uses modified cells from patients' own immune systems to attack the cancer. Reason No. 3: More Pediatric Cancer Drugs Are On The Way There have been a lot of big days in the history of humans battling cancer: March 29, 1896, when radiation was first used to treat the disease; August 5, 1937, the day the National Cancer Institute was established; and December 28, 1947, when doctors observed one of the earliest remissions in a cancer patient. But one of the most important dates came a year ago this month, when Congress passed the Race for Children Act (RCA), a law requiring companies developing cancer drugs for adults to consider how those medicines could be helpful to kids as well. Previously, companies would receive Food and Drug Administration approval for adult use of a medication and then take years to run clinical trials exploring whether juveniles could benefit too. "Those delays, when you're the parent of a child who might benefit from a promising therapy, are really difficult," Children's Hospital Colorado's Dr. Lia Gore says. Although they've already seen improvements with the RCA, Gore and her colleagues didn't wait for the law to catch up with the needs of their patients. By the time the act passed, docs at Children's were getting much-needed drugs to kids with cancer in about a year or two. In the early 2000s, Children's researchers—alongside those at hospitals across the nation—led a push to go directly to the manufacturers of already-cleared drugs (some of them for conditions other than cancer) and explain the uses for pediatric oncology patients. Because the meds had already been deemed safe by the FDA, their approvals could be expedited. But there's another seemingly mundane factor that helps Gore get lifesaving meds to kids quickly: The hospital's clinicians work on the same floor as its researchers, so conversations between Dr. A in the cancer unit and Dr. B in the lab happen when they run into each other in the hall. Those high-level discussions, Gore says, can lead to the speedier development of promising drugs. That's important because fewer than five medicines were cleared for specific use in pediatric cancer patients between 1979 and 2016.
In recent years, though, Children’s has served as an integral site for clinical trials of multiple pediatric cancer drugs approved without the heavy restrictions of the past. This is good news for patients. And it’s because of doctors like Gore who like to think outside the system. Reason No. 4: More Kids With Cancer Are Enrolled In Clinical Trials Than Adults It’s not even close: More than half of pediatric cancer patients under the age of 15 (and 90 percent under five) receive care through a clinical trial, compared to less than five percent of adults with malignancies. There are many reasons why so few grown-ups are involved, from the dearth of accessible adult trials to fears of participating in research. But the massive enrollment of children comes down to just one factor: numbers. Pediatric cancer—and its many iterations—is so rare that hospitals need to sign their patients up for multisite clinical trials and work collaboratively to come up with effective treatments. Otherwise, oncologists would still be furrowing their brows over leukemia instead of being well on their way to a complete cure. Reason No. 5: Colorado Oncologists Are Turning To Man’s Best Friend For An Assist In Saving Limbs We know Coloradans love their dogs, but our four-legged friends may be giving back more than affection. The field of comparative oncology—the study of cancer in people and pets to benefit both species—often turns to pooches because they share more than 80 percent of our genes. “When you think about cancer as a disease that occurs because genes mutate, the more similar the genome, the more likely the same type of treatment will work,” says Dr. Nicole Ehrhart, professor of surgical oncology at Colorado State University’s Flint Animal Cancer Center. CSU has been an international leader in this field for 30 years. Through the decades, canine research—in which clinical trials are similar to those for human cancer patients—at the institution has helped lead to FDA approval of medicines to treat cancers ranging from leukemia to bone malignancies. Ehrhart’s current work, however, is focused on trying to improve reconstruction techniques after surgeons remove bones and muscle tumors from legs and arms. “When you mechanically replace a portion of a leg bone, that bone doesn’t grow [anymore],” Ehrhart says. “Adolescents end up needing to undergo 15 to 30 surgeries in a lifetime because their devices experience wear, mechanical failure, and infection.” Currently, Ehrhart is leading a research study that’s looking at whether stem cells culled from fat, bone marrow, or muscle tissue can regenerate bones and muscles for dogs with any cancer that affects those parts of the body. If her work is successful, she hopes it’ll lay the groundwork for research in humans—potentially saving the pediatric cancer field years of time it can devote, instead, to keeping kids alive. Dr. Jennifer Bruny knew removing Emily McLaughlin’s neuroblastoma from her abdomen would be a challenging endeavor. The pediatric surgeon warned the three-year-old’s mother, Tara Geraghty, that she might actually see Bruny grabbing some food in the Children’s Hospital Colorado cafeteria during the eight-hour surgery. After the operation, Bruny delivered pictures of the tumor. Tara was disgusted, but Emily was fascinated. So, Tara enlarged the images and printed photos for Emily’s very own tumor-stomping party. The toddler and her nurses jumped all over the pictures, metaphorically obliterating the growth. 
Emily, now 12, remembers the shindig more than the painful parts of her treatment. That was Tara’s goal—and why she’s now writing a book about her strategy. A portion of the profits from Making Cancer Fun will go to Blue Star Connection, a Winter Park nonprofit that provides musical instruments to kids and young adults with serious illnesses. Years ago, Emily was the youngest recipient. These Colorado nonprofits give children relaxing reprieves from being the kid with cancer. Prostate cancer survivor Tom Evans and his wife, Dorothy, operate a ranch 60 miles west of Colorado Springs that introduces cancer patients ages 10 to 17 to agricultural life. The kids spend seven days driving cattle, riding horses, and feeding livestock in addition to engaging in more traditional camp activities. The ranch doesn’t have the capacity to give chemo through a port (essentially a permanent IV), so campers have to be somewhat down the road to recovery, but the 320-acre property is staffed with nurses during all camp sessions, and a physician is on call in case of medical emergencies. Shining Stars Foundation The signature program of this Tabernash-based nonprofit is its Aspen Winter Games (March 8 to 15, 2019), when 70 kids with life-threatening illnesses get to hone their skiing and snowboarding skills on Buttermilk Mountain. Everything is done on an adaptive level, with one-on-one instruction, and extra volunteers and a full-time medical team make sure all the youth, ages eight to 18, get safely from the slopes to dinner and the disco dance party. This four-year-old Broomfield organization holds events every two months—not for kids with potentially fatal diseases, but for their brothers and sisters. Madelene Kleinhans, now 16, launched the group during her brother’s bout with leukemia to help siblings realize they’re not alone in their situation and to give them something to look forward to, whether that’s picking pumpkins in Lafayette or ice-skating in Louisville. New Outfit: Rocky Mountain Hospital for Children finally has its own unit for bone marrow transplants. Previously, the Denver-based hospital had to send patients to Children’s Hospital Colorado for these complex procedures. Construction of the seven-bed unit wrapped up in early June, and Dr. Jennifer Clark hopes to begin doing transplants there by the end of the summer. Reason No. 6: All Pediatric Brain Tumors Are now Curable—Except One that Local Docs Are Working On It’s called a diffuse intrinsic pontine glioma (DIPG), a moniker that matches the complexity of this cancer’s anatomy. These ingenious cancerous cells grow among normal cells “like salt mixed in with sand,” says Children’s Hospital Colorado’s Dr. Adam Green, and make themselves at home in the brain stem, which controls breathing and swallowing. That means surgery isn’t an option. DIPGs have proved invulnerable to medication too. And while radiation can shrink the cells, they never go away. Life expectancy from diagnosis is less than a year. That statistic doesn’t sit well with Green, who, at the funeral of his first DIPG patient seven years ago, vowed to improve the survival rate for other afflicted kids. That hasn’t happened yet, but the soft-spoken doctor and his colleagues are working on it: They believe they’ve isolated the mutated gene that drives the growth of DIPG. Now Green and Co. are trying to understand how that gene causes changes in cells and which medications—either brand-new or already-approved pharmaceuticals—might be able to target those problematic shifts. 
In the meantime, they haven’t abandoned traditional treatments, believing chemo could have an impact on DIPG if they could just find the right cocktail of drugs—the number of possible combinations of chemo medications is staggering—to decimate all of those stubborn cancer cells. But Green isn’t giving up. “Giving a three-year-old a year or two of ‘extra’ time,” he says, “that’s not really acceptable to us.” Reason No. 7: We’ve Got Doctors Who Are Trained To Care For A Tough-To-Treat Population Before Andrew Diaz-Saldierna was diagnosed with “gray zone” lymphoma (so named because it doesn’t quite match either of the two most common lymphomas), he spent spring nights scooping up balls on the baseball field at Denver’s Abraham Lincoln High School. When cancer interrupted his life, the then 16-year-old had to trade the diamond for his bed, where he was often bedridden with intense nausea and exhaustion from chemo. While the unpleasant side effects of chemo are well documented, severe reactions like this are atypical—unless the patient is an adolescent or young adult (AYA). Much like gray zone lymphoma itself, Diaz-Saldierna and his fellow AYAs—anyone between 15 and 39 who’s diagnosed with cancer—don’t fit into either the toddler-filled pediatric wing or the geriatric-leaning adult cancer unit. And it’s not just about finding a place that feels comfortable. AYAs’ tumors tend to be biologically different too. Molecular variances could explain why these patients sometimes respond poorly to treatments that work well in other populations or why their survival rates haven’t improved as much as those for young kids or older adults. The good news: Children’s Hospital Colorado and Rocky Mountain Hospital for Children are well equipped to support these patients. Both hospitals sit adjacent to and have relationships with adult medical centers, making it relatively painless to move patients back and forth as needed. And each medical center employs a physician who’s board-certified in pediatric oncology and internal medicine, meaning that person has experience in both the adult and juvenile realms. Children’s Dr. Anna Franklin, in particular, is brainstorming coping mechanisms specifically for the AYA population, including virtual reality as a distraction method during spinal taps (and even as a replacement for anesthesia and sedation) and an oncology education class just for teens and young adults. “It creates that environment where they can realize they’re not the only ones going through this,” Franklin says. “Because in their peer group, they’re not normal anymore.” 93% percentage of Rocky Mountain Hospital for Children patients with two types of bone malignancies who are alive 10 years after diagnosis. Doctors there credit a technique called intra-arterial chemotherapy, which they’ve been using for decades now, for that success. The treatment requires radiologists to isolate the main blood vessels feeding a tumor and infuse high doses of chemo through those arteries, which doctors believe delivers a more direct punch to the cancer. The procedure is repeated every three weeks until at least 90 percent of the blood flow to the tumor is cut off, and then surgeons remove the growth. RMHC is one of only a handful of centers in the country that use this method. Reason No. 8: A New Wave Of Immunotherapy Drugs Are Being Developed for Pediatric Cancers In the cancer world, leading researchers are akin to superheroes, swooping in when all seems lost and figuring out how to destroy villainous cells. So when Dr. 
Terry Fry left the National Cancer Institute in February to be the co-director of the human immunology and immunotherapy initiative on the University of Colorado Anschutz Medical Campus and an endowed chair in pediatric cancer therapeutics at Children’s Hospital Colorado, it was as if Gotham City lost Batman to Aurora. In 2015, Fry was among the first doctors to modify immune cells from patients in a way that would allow them to attack pediatric acute lymphoblastic leukemia (ALL). Typically, the immune system recognizes damaged cells as a serious threat and tries to get rid of them. But it’s difficult for the immune system to recognize pediatric cancer cells because they often don’t look all that different from normal cells. Fortunately, ALL cells can be distinguished by specific proteins that hang off the surface. So, Fry and other colleagues modified T lymphocytes, a type of white blood cell crucial to immune function, into super soldiers designed to locate and kill ALL cells that express these proteins. To say the treatment worked is an understatement: 70 to 90 percent of patients across multiple early clinical trials went into complete remission. But super soldier T cells, known to cancer docs as CAR-T cells, aren’t necessarily the be-all and end-all solution. Cancer cells can evolve rapidly—in a matter of months—and they have the potential to shake off those conspicuous proteins to avoid detection. That’s where Fry’s new research comes in. Because there’s more than one type of protein on each cancer cell, Fry is developing treatments that target multiple proteins simultaneously; he hopes to launch a clinical trial in Colorado by 2019. The physician is also looking into drugs that could increase the expression of the proteins (think of the medications as flashlights that make the molecules easier for CAR-T cells to see) and researching which proteins are less likely to disappear from cancerous cells. “We think this is the time to not sit back and rest on the laurels of the advances of the last four or five years,” Fry says. In other words, Batman is here to save the day. $475,000: Cost of Kymriah, a CAR-T cell drug for acute lymphoblastic leukemia. The eye-popping price tag stems, in part, from the fact that each batch is custom-made for an individual patient, although Fry and many insurance companies hope that, eventually, CAR-T cell meds can become one-size-fits-all—or, at least, one-size-fits-several. The month of May loomed large in the minds of Castle Rock residents Carrina and Nelson Waneka. That 31-day timeframe marked 11 months since their daughter had been diagnosed with a highly fatal form of brain cancer called DIPG. For most kids, that’s all the time they get. Life expectancy post-diagnosis is less than a year. Based on recent tests—an April MRI showed the cancer had progressed—four-year-old Piper is succumbing to the disease as well, albeit less quickly than the statistics portend. But you’d never know it. Spunky and full of energy, the inquisitive little girl is partial to the color pink, loves baking cookies, and can’t get enough of warrior princesses. “Piper has definitely been a fighter,” says Dr. Jean Mulcahy Levy, Piper’s primary oncologist at Children’s Hospital Colorado. “She continues to play, no matter what her underlying limitations are.” The toddler’s perky attitude has helped her parents continue to hope, despite the prognosis. “She has every expectation that she will have another birthday and another Christmas,” Carrina says. 
“As educated and rational as you think you are, when you’re in our boat, you believe in miracles.” Reason No. 9: Pediatric Cancer Treatment Is About More Than Just Medication As an adult, you generally expect to visit the doctor’s office and be treated by your personal physician, and maybe a medical assistant or a nurse. The reality is different if you’re a pediatric cancer patient. In addition to various MDs, a child with a malignancy will typically see a psychologist, a social worker, a child-life specialist (someone who helps patients work through things like needle phobias), an arts therapist, an educational specialist, and a family navigator, who connects patients with financial and community resources. This extensive wellness team is built on the premise that cancer doesn’t just take over your body—it takes over your life. That’s the case even if you’re not the actual patient; everyone from parents to siblings may need help coping with the attendant anxiety. That’s why Children’s Hospital Colorado introduces social workers and child-life specialists to families soon after they’ve received a diagnosis and well before they begin to face challenges as a result of treatment. That early interaction is a rarity at adult and even other pediatric hospitals. “For me, the most difficult kind of psychosocial intervention is saying, ‘I understand you’re having a problem, let me introduce myself,’ in the middle of treatment,” says Bob Casey, a clinical psychologist and the founder of Children’s seven-year-old wellness program. “I’m a stranger to them.” Introduced earlier in the process, though, Casey and his team, which has recently expanded from 12 to 20 members, have a chance to build relationships with the patients and their families. And sometimes, he says, that little bit of support is all they need to keep fighting. $289.8 Million: Amount spent by the Maryland-based National Cancer Institute, one of the largest funders of cancer research in the world, in fiscal year 2016 on studies that were specifically focused on pediatric cancer. That number represents just 5.6 percent of the organization’s budget. The National Cancer Institute does, however, support more general studies on cancer, which have led to advancements in treatments for children; its parent agency, the National Institutes of Health, gives additional money to the cause of pediatric cancer too. Reason No. 10: There Is No “I” In Team For These Docs Once pediatric oncologists had a cancer killer called chemotherapy in hand in the 1940s, they began thinking about ways to improve the treatment—and quickly recognized the need to pool their knowledge. “They realized pediatric cancer is rare enough that if each institution is doing its own thing and not sharing nationally, then we’re not going to answer questions very quickly,” says Children’s Hospital Colorado’s Dr. Jennifer Bruny. That epiphany led to the creation of several groups in the 1950s (whose descendants merged to form the Children’s Oncology Group in 2000) that set standards for cancer care and created data-sharing partnerships between physicians and institutions across the country and around the world. “If I have a weird question, it’s an easy thing to ask of the pediatric oncology community at large,” says Rocky Mountain Hospital for Children’s Dr. Brad Ball, “whether it’s at Children’s down the street or at St. 
Jude Children’s Research Hospital in Boston.” Most pediatric cancer physicians point to those open lines of communication as one of the main reasons for the astounding jump in five-year survival rates for pediatric cancer patients—from 10 percent to more than 80 percent in the past 60 years. Reason No. 11: Discussing a Patient’s Potential InFertility Is No Longer Off-Limits It’s a bit of a cancer catch-22: Chemo is designed to wipe out fast-growing cells in the body, not just the malignant ones that have begun dividing out of control. Female eggs and male sperm (plus the stem cells that eventually produce sperm in prepubescent boys) fall into the former category, which can present problems for cancer patients who haven’t reached childbearing age or had kids yet. Up until about 10 years ago, however, the issue was rarely discussed. Egg freezing was considered experimental until 2012. Plus, talking about the fertility of a person too young to know where babies come from was considered inappropriate, and chatting with an 18-year-old, parents in tow, was only marginally less awkward. “Thirty years ago, cancer was ‘the C word,’ and everybody whispered it,” Children’s Hospital Colorado’s Dr. Anna Franklin says. “Sex was a similar thing, and because of that, oncologists weren’t trained to talk about fertility.” When advocacy groups began raising awareness about the dilemma in the early 2000s, medical societies responded, creating guidelines that educated providers. Franklin, for instance, now starts the conversation by treating infertility as just another potential side effect of chemotherapy and always asks if her patient has ever considered being a parent. “Most of them are like, ‘Oh, God no, I’ve never even thought about it,’ ” she says. Although the discussion can be uncomfortable, research has shown patients and their families are more satisfied with their care if the topic of fertility is broached. “Parents are getting a lot of bad news [during that time period],” says Dr. Serena Dovey, a reproductive endocrinologist at University of Colorado Advanced Reproductive Medicine who consults on cancer cases for Children’s. “But what we’re focusing on is the future of their child, when they’re cured from cancer.” When Addison Kleinhans first found out he had acute lymphoblastic leukemia, the five-year-old Broomfield resident didn’t understand what was happening. He simply knew he wasn’t allowed to go to the playground anymore. His doctors, however, knew this: His compromised immune system meant even a mild cold could be life-threatening. Less time on the swings wasn’t the only downside; Addison also had to endure painful injections and the chill of the chemotherapy drugs surging through his body. But he and his mother, Sarah, capitalized on his diagnosis. They used social media to collect letters to Santa Claus for Macy’s, which donates a dollar to the Make-A-Wish Foundation for every message. By his final year of cancer treatment, Addison had gathered more than 20,000 letters. Today, the 14-year-old has also given more than 100 speeches in an effort to raise awareness about pediatric cancer. And this past June, Addison celebrated the ultimate success: five years of being cancer-free. Pluses & Minuses There are myriad ways to preserve fertility before treatment begins, but nearly all of them have drawbacks. Scenario 1: A 10-year-old girl with a Wilms’ tumor hasn’t yet begun to menstruate. 
Potential Solution: Some hospitals have started using experimental techniques to freeze parts of or whole ovaries from patients who haven’t reached puberty yet. The ovaries would then be reinserted back into the patient after cancer treatment. Obstacle: These methods aren’t widespread in America or completely reliable anywhere. Scenario 2: A 15-year-old girl with bone cancer has gotten her period and says she would like to be a mom someday. Potential Solution: Her eggs can be harvested and frozen. Obstacles: Doctors need to stimulate the ovaries for 10 to 14 days before extracting eggs, but many patients can’t wait that long to begin chemo. The process is also pricey—as high as $20,000—and rarely covered by insurance. Scenario 3: A six-year-old boy is battling non-Hodgkin’s lymphoma, and the drugs he’s taking put him at high risk for infertility. Potential Solution: Doctors can do a testicular biopsy and remove stem cells that will then be transplanted back into the patient to produce sperm once he reaches puberty. Obstacle: This technique is promising but has associated risks and is still considered mostly experimental. Reason No. 12: The Late Side Effects Of Cancer Treatment Are Being Addressed Once a month, an unusual group of people flocks to Aurora’s Anschutz Outpatient Pavilion. Their battle wounds mark them as a unique crew: survivors of childhood cancer. All age 21 or older and at least five years out from diagnosis, these patients have appointments at a clinic called TACTIC (Thriving After Cancer Treatment Is Complete). Childhood cancer survivors may have to deal with a host of issues, from hormone deficits to heart problems to cognitive issues—all potential consequences of drugs that, at one point, made them healthier. Because so many people diagnosed with cancer as kids are now living well into adulthood, oncologists are starting to see more of these side effects crop up. TACTIC providers identify and treat these issues. TACTIC patients see a pediatric oncologist, a nurse educator, an internist, and a medical psychologist, but they can also be referred to specialists. “Once they leave Children’s, they often fall through the cracks,” says Dr. Brian Greffe, a pediatric oncologist who helped start TACTIC 10 years ago. “We’re empowering them to take care of themselves as adults.”
Far too many people around the world – especially the poor – don't have access to even basic healthcare. Preventable diseases like HIV, tuberculosis, and malaria still claim millions of lives every year, and recent outbreaks of diseases like Ebola and Zika show how weak many countries' health systems are. The good news is that investment in health makes a massive difference and life-changing progress happens every day. The world has cut under-5 child deaths in half since 1990. Since the turn of the century, the annual number of new HIV infections is down by more than a third and malaria deaths have been halved. We need to accelerate this incredible progress by ensuring that life-saving medicines and supplies reach the people who need them most, and that all people have access to safe and reliable health services. Good health is vital to success. Millions die from preventable illness each year, affecting families, communities, and the advancement of entire nations. However, poor health doesn't affect us equally. Infectious diseases and preventable maternal and child deaths disproportionately impact the world's poor — and Africa is the hardest-hit region. Though the resources for global disease control have increased dramatically since 2000, funding remains drastically insufficient and has started to flatline in the last few years. If global efforts to fight infectious diseases stagnate or stop, these diseases will quickly rebound and progress will be lost. Many health systems in low- and middle-income countries lack essential components needed to prevent and treat diseases. Strengthening the health workforce, infrastructure, and access to basic life-saving prevention and treatment tools will be crucial to achieving healthy lives for all. Ensuring healthy lives for all is achievable in our lifetime. In many cases, we already have the tools we need to save millions more lives. We need to better deploy supplies that can treat and prevent the world's most deadly diseases, and better equip health systems to respond to health emergencies. If we improve global vaccination coverage, we could save the lives of an additional 1.5 million children each year. It's also possible to treat all people living with AIDS, TB, and malaria so they can live full and healthy lives. To accelerate progress and end preventable diseases in our lifetime, we also need to scale up domestic and donor resources. As low- and middle-income countries' economies grow, so should the proportion of spending on health. Additionally, donors must continue their support, including full funding for the Global Fund and Gavi, which together can save nearly 15 million lives by 2020. The world has a plan and the tools needed to end AIDS as a public health threat by 2030. But achieving this is not a foregone conclusion. In the more than three decades since HIV/AIDS was first discovered, the disease has taken the lives of 35 million people around the world. In 2016 alone, AIDS killed 1 million people, 720,000 of whom were living in Africa. These deaths take a heavy toll on the countries and communities that are hardest hit by the disease, especially on the 16.5 million children around the world who have become orphans because of AIDS. Life-saving antiretroviral treatment is available and affordable, yet millions of people still do not have access.
People often become infected with HIV during their most productive years (15-49 years old), making the disease – if untreated – a threat to development progress in the poorest and hardest hit countries. Within countries, HIV is increasingly concentrated among the most vulnerable populations, including men who have sex with men, female sex workers, injection drug users, and adolescent girls. In many countries, political dynamics and legislation have made it increasingly difficult to reach them. Young women aged 15–24 are also at particularly high risk of infection. An average of 986 young women were infected with HIV every day in 2016; most of these women live in sub-Saharan Africa. AIDS remains the leading cause of death for women of reproductive age (15–49 years) globally, and new infections among young women (aged 15–24 years) were 44% higher than they were among men in the same age group. Worryingly, funding available for the global fight against AIDS has started to flatline. In 2016, funding disbursed by donor governments for HIV fell for the second year in a row. This happened at a time when there is still over a $7 billion gap in funding to reach the UNAIDS estimated $26.2 billion needed annually by 2020 to end AIDS as a global public health threat by 2030. In 2016, world leaders pledged to end the AIDS epidemic by 2030, but greater levels of funding, used more strategically, are needed to deliver on this commitment. Thanks to investments and innovation over the last 15 years, we have made remarkable progress against AIDS. As a result of this progress, we know what is needed to accelerate efforts in the decade to come. Today, 20.9 million people are on lifesaving AIDS treatment, up from just 685,000 in 2000. Since 2000, new HIV infections have fallen by more than one third, infections among children have dropped by 65%, and AIDS-related deaths have decreased by nearly half since their peak in 2005. As we continue to improve access to treatment, we must also improve prevention by deploying existing and new tools more effectively. For example, we know that treatment is an effective form of prevention; if a person living with AIDS takes their treatment regularly, they can reduce the likelihood of passing HIV on to others by up to 96%. Additionally, voluntary medical male circumcision, another powerful tool, was shown to reduce the likelihood of HIV infection in men by up to 60%. We also know that funding for HIV works. Investments in the fight against AIDS – channelled through governments and programs such as the Global Fund and the U.S. President's Emergency Plan for AIDS Relief (PEPFAR) – have helped save millions of lives and started to bend the curve of the pandemic. The Global Fund's grants currently support more than half the world's people on treatment – 11 million people – and since its inception, the Global Fund has provided 579 million HIV counselling and testing sessions. PEPFAR is providing treatment support for over 14 million people, including 1.1 million children. In 2016, PEPFAR reached more than 85.5 million people with HIV testing and counselling and 1 million adolescent girls and young women with comprehensive HIV prevention interventions through the DREAMS Partnership. The world must build on this progress and accelerate the response in the next four years – particularly among women and girls and the world's most marginalised and difficult-to-reach populations.
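A quick back-of-the-envelope check can make the scale of the funding gap and the treatment scale-up described above concrete. The sketch below only re-derives quantities from numbers already quoted in the text (the UNAIDS $26.2 billion annual estimate, the stated $7 billion gap, and the treatment totals); the "implied current resources" figure is simply their difference, not a number quoted by UNAIDS, and the variable names are illustrative.

# Back-of-the-envelope check of the AIDS funding and treatment figures quoted above.
NEEDED_ANNUALLY_BY_2020 = 26.2e9   # USD, UNAIDS estimate cited in the text
FUNDING_GAP = 7.0e9                # USD, "over a $7 billion gap"

implied_current_resources = NEEDED_ANNUALLY_BY_2020 - FUNDING_GAP
gap_share = FUNDING_GAP / NEEDED_ANNUALLY_BY_2020
print(f"Implied current resources: ${implied_current_resources / 1e9:.1f} billion")
print(f"Gap as a share of the estimated need: {gap_share:.0%}")

# Scale-up of treatment access, also from the text.
on_treatment_now = 20.9e6      # people on lifesaving AIDS treatment today
on_treatment_2000 = 685_000    # people on treatment in 2000
print(f"Treatment scale-up since 2000: roughly {on_treatment_now / on_treatment_2000:.0f}x")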
We have the tools to finish the job of virtual elimination of mother-to-child transmission, to dramatically scale up treatment, and to deploy smarter prevention strategies. To be effective, these goals cannot be achieved in isolation from one another, nor can they be the sole responsibility of a small number of donor countries. Only when donors, African governments, international organisations, and the private sector work together will the end of AIDS become a reality. For millions of people around the world, a simple mosquito bite can have deadly consequences. Malaria is a tropical disease caused by parasites and transmitted through the bite of an infected Anopheles mosquito. In 2016, malaria killed approximately 445,000 people. That is 50 people every hour who die of something completely preventable – almost two-thirds of whom are children under five. One half of the world's population lives in areas at risk of malaria, but about 90% of malaria cases and deaths globally occur in sub-Saharan Africa; just 15 countries accounted for nearly 4 out of every 5 malaria cases and deaths in 2016. Since 2000, malaria incidence in these 15 countries has declined by just 32%, compared to 54% in other countries globally. Control measures such as indoor residual spraying (IRS) with insecticides and insecticide-treated bed nets (ITNs), and antimalarial drugs such as artemisinin-combination therapy (ACT), have successfully reduced malaria cases and deaths. However, insecticide and drug resistance is a growing threat as these interventions continue to be scaled up. Not only does malaria cause illness and deaths around the world; it decreases productivity and increases the risk of poverty for the communities and countries affected. For example, infection rates are highest during the rainy season, often resulting in decreased agricultural production. In total, malaria directly costs sub-Saharan Africa an estimated $12 billion a year. The total economic loss, however, is estimated to be far greater. Economists believe that malaria may slow economic growth by up to 1.3% per year. Malaria also puts a serious strain on public health systems. In heavily affected sub-Saharan African countries, malaria accounts for as much as 40% of public health spending. Increased funding for malaria control and treatment is still needed to build on the progress made in the last few years. In 2015, funding for malaria control and elimination totalled $2.9 billion. Although this was one of the highest funding totals to date, it was less than half the estimated $6.4 billion needed by 2020. Malaria is an entirely preventable and treatable disease. For just $10, a bed net treated with insecticide can be bought and distributed, with training given on how best to use it. Combining bed nets with other simple actions such as spraying homes with insecticides could prevent millions of people from getting sick. For those who do become infected with malaria, treatments costing $2 each are highly effective and can dramatically cut deaths. Significant increases in the resources available to fight malaria have had huge positive health impacts. Initiatives such as the Global Malaria Action Plan (GMAP), the Global Fund to Fight AIDS, Tuberculosis and Malaria, the US President's Malaria Initiative (PMI), and the World Bank's Malaria Booster Program have significantly expanded coverage of bed nets and access to malaria treatment globally.
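The malaria figures above lend themselves to the same kind of arithmetic check. The sketch below recomputes the per-hour death toll and the funding shortfall from the quoted 2015-2016 numbers; the 4% baseline growth rate used to illustrate how the quoted 1.3% growth drag compounds is an assumption for illustration, not a figure from the text.

# Arithmetic check on the malaria figures quoted above.
deaths_2016 = 445_000
print(f"Deaths per hour: {deaths_2016 / (365 * 24):.0f}")   # ~51, i.e. about 50 every hour

funding_2015 = 2.9e9            # USD spent on control and elimination in 2015
funding_needed_2020 = 6.4e9     # USD estimated to be needed annually by 2020
shortfall = funding_needed_2020 - funding_2015
print(f"Funding shortfall: ${shortfall / 1e9:.1f} billion "
      f"({funding_2015 / funding_needed_2020:.0%} of the estimated need is currently met)")

# Illustration of how a 1.3-percentage-point annual growth drag compounds over a decade.
# The 4% baseline growth rate is an assumed placeholder, not a figure from the text.
baseline, drag, years = 0.04, 0.013, 10
ratio = ((1 + baseline - drag) / (1 + baseline)) ** years
print(f"After {years} years, output is about {1 - ratio:.0%} lower with the growth drag")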
Since 2000, one billion insecticide-treated mosquito nets have been distributed in Africa and today an estimated 68% of children under five in sub-Saharan Africa are sleeping under insecticide-treated nets, compared to less than 2% in 2000. The Global Fund alone has distributed a total of 795 million bed nets and treated 668 million malaria cases since its inception. This support is producing results. Between 2000 and 2015, global malaria death rates fell by 60% and global malaria incidence decreased by 37%. In 2014, 16 countries reported no cases of the disease, and in 2015, 33 countries reported fewer than 1,000 cases. A range of new tools and promising malaria vaccines currently in development will be critical to counter threats like the growing insecticide resistance and the drop in external funding for public health. With a coordinated global effort, we can continue to make progress and ultimately ensure the virtual elimination of malaria deaths. In a 2017 Huffington Post article, Dr. Tedros Adhanom, now WHO Director-General, wrote: "Defeating malaria is absolutely critical to ending poverty, improving the health of millions and enabling future generations to reach their full potential. Today, and every day, let us recommit to ending malaria for good."
Maternal and Child Health
The rapid drop in global child deaths in the last 20 years is one of the world's most spectacular, and most hopeful, success stories. Since 1990, the number of child deaths globally has been cut in half and maternal mortality worldwide has dropped by about 44%. Yet in many of the world's poorest countries, ensuring that mothers stay alive and healthy and that their children can survive and thrive still represents a significant challenge. In 2015, 303,000 mothers died from pregnancy-related causes and millions more suffered from complications related to pregnancy or childbirth, including hemorrhage, infection, hypertensive disorders and obstructed labour. Women in the poorest countries are most at risk of dying from pregnancy and childbirth. A woman's lifetime risk of maternal death is 1 in 180 in developing countries compared to 1 in 4,900 in developed countries. In countries designated as fragile states, the risk is 1 in 54. Maternal health is deeply intertwined with child health, which also remains a significant global challenge. In 2015, 5.6 million children died before their fifth birthday, and only 62 countries had reached the Millennium Development Goal (MDG) 4 target of a two-thirds reduction in under-five mortality since 1990. If levels of under-five mortality for each country remain at today's levels, 84 million children under the age of 5 will die between 2016 and 2030. More than any other region, Africa is home to the highest number of child deaths – 2.7 million in 2016. Despite some countries making improvements – and in some cases, dramatic gains – in child health in recent years, sub-Saharan Africa's average child mortality rate is almost 14 times the average of high-income countries. Many of these deaths are from entirely preventable and treatable causes, such as pneumonia, diarrhoea, malnutrition and malaria. With proper care and treatment, nearly all of these deaths could be avoided. However, many health systems in low- and middle-income countries have a shortage of health-care workers, a lack of basic equipment, inadequate access to basic life-saving prevention and treatment tools, and poor infrastructure.
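The lifetime-risk figures quoted above imply large relative differences that are easy to make explicit. A minimal sketch, using only the 1 in 180, 1 in 4,900 and 1 in 54 figures from the text:

# Relative-risk arithmetic for the lifetime maternal mortality figures quoted above.
risk_developing = 1 / 180     # lifetime risk in developing countries
risk_developed = 1 / 4_900    # lifetime risk in developed countries
risk_fragile = 1 / 54         # lifetime risk in fragile states

print(f"Developing vs developed countries: {risk_developing / risk_developed:.0f}x higher lifetime risk")
print(f"Fragile states vs developed countries: {risk_fragile / risk_developed:.0f}x higher lifetime risk")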
Improving health systems is essential to saving the lives of mothers and children in the developing world. Simple, cost-effective solutions to improve maternal and child health exist. Enabling women to plan and space births, treating infectious diseases and improving nutrition can help women stay healthy during pregnancy. Additionally, efforts to educate women – both in general and specifically during and immediately following their pregnancies – help ensure that mothers know how and when to seek health care services for themselves and their children. Skilled care by a birth attendant during pregnancy and labour, emergency obstetric care, and immediate postnatal care all help reduce maternal mortality. These kinds of basic maternal health services before and after delivery could prevent up to 80% of maternal deaths, 99% of which occur in developing countries. It is also possible to save many more children's lives with low-cost interventions. Vaccinations against diseases like hepatitis B, Haemophilus influenzae type b (Hib), pertussis, measles, and yellow fever can save millions of lives each year. Since 2000, Gavi, the Vaccine Alliance, has supported the immunisation of 640 million children and has helped save 9 million lives. An increase in measles vaccination alone resulted in an 84% drop in measles deaths between 2000 and 2016 worldwide. Other interventions like Vitamin A supplements, which cost as little as $1 per child per year, could save over a quarter of a million young lives annually by reducing the risk and severity of diarrhoea and infections. Treatment to prevent mother-to-child transmission of HIV, anti-malaria bed nets and the promotion of breastfeeding and proper nutrition can also guard against infectious diseases and ensure good health in the early stages of childhood. Thanks to strong financing, programs, and political will over the last fifteen years, we know we can end maternal and child deaths from preventable causes. However, we have a long way to go before achieving the Sustainable Development Goal (SDG) targets to substantially reduce global maternal mortality, neonatal mortality, and under-5 mortality.
Health Systems Strengthening
Strong health systems are integral to treating and preventing infectious diseases and delivering life-saving services to children and mothers. A strong health system provides care to people who need it, regardless of where they live and their ability to pay, and consists of a well-trained health workforce, strong infrastructure, a reliable supply of medicines and equipment, and the capacity to quickly detect and respond to health emergencies. Yet the 2014-16 Ebola outbreak in West Africa, the widespread Zika epidemic, and the reemergence of polio in Nigeria's Lake Chad region demonstrate the need to put health system strengthening at the center of the global health agenda if we want to achieve SDG3. The shortage of health workers is a major hurdle in expanding health care. The WHO estimates that to achieve SDG3, there should be 4.45 physicians, nurses and midwives per 1,000 people. Yet at the current pace of progress, the needs-based global deficit of health workers will be 17.4 million doctors, nurses, and midwives. This severe shortage will hit African countries the hardest: Africa accounts for 68% of the global disease burden from HIV, TB, and malaria, yet has only 4% of the world's health workforce.[ii]
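The WHO's 4.45-per-1,000 threshold mentioned above translates directly into a required workforce for any given population. A minimal sketch of that calculation follows; the example country's population and current workforce are hypothetical placeholders, not figures from the text.

# Minimal sketch of the WHO SDG3 workforce-density threshold quoted above
# (4.45 doctors, nurses and midwives per 1,000 people).
WHO_SDG3_THRESHOLD_PER_1000 = 4.45

def workforce_gap(population: int, current_workers: int) -> float:
    """Additional health workers needed to reach the threshold (0 if already met)."""
    required = population / 1_000 * WHO_SDG3_THRESHOLD_PER_1000
    return max(0.0, required - current_workers)

# Hypothetical country: 50 million people and 60,000 doctors, nurses and midwives.
print(f"{workforce_gap(50_000_000, 60_000):,.0f} additional health workers needed")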
Additionally, many health systems in low- and middle-income countries lack basic equipment, adequate access to basic life-saving prevention and treatment tools, and sound infrastructure. In total, half of the world's population does not have access to essential health services. The health care the poor do receive is oftentimes prohibitively expensive. In low- and middle-income countries, over one-third of spending on health comes out of people's pockets, and 6% of people are tipped into or pushed further into extreme poverty because of health spending.[iv] An investment in health systems is an investment in the fight against AIDS, TB and malaria and an investment in ending preventable deaths among children and mothers. Health systems strengthening is also at the crux of achieving Universal Health Coverage (UHC), in which all people can afford and have access to needed, quality health services without suffering financial hardship. Thanks largely to stronger health systems, more people are on lifesaving AIDS treatment now than ever before, and effective TB diagnosis and treatment saved 54 million lives between 2000 and 2016.[v] Providing basic health services before and after delivery could prevent most maternal deaths, and doing so has already helped cut the maternal mortality rate by 44% since 1990. The coverage of infants worldwide receiving the DTP3 vaccine rose from just 21% in 1980 to 86% in 2015[vi] – an amazing health systems achievement, as it requires three contacts with the health system at appropriate times – yet progress has stalled in the past three years. Beyond SDG 3, health systems strengthening can help achieve other Sustainable Development Goals. Achieving UHC means people will not face financial hardship when accessing quality health care, which contributes to SDG 1, no poverty. Further, women make up 67% of employment in the health and social sectors compared with 41% of total employment. This means that investing in the health workforce – unlike investments in other sectors – will contribute to SDG 5, gender equality, by investing in women's empowerment and economic opportunity. Health systems strengthening is also key to achieving global health security – countries' ability to anticipate and respond to public health emergencies that could affect regional or global populations. Small investments in health systems now can pay enormous dividends in global health security. For example, the infrastructure developed to support polio immunizations in Nigeria was used to quickly put a halt to the Ebola epidemic in Africa's most populous country. The path forward is as intricate as it is integral, needing support from sectors like health, labor and infrastructure to strengthen our data, build capacity and put us on the right track. The WHO's Global strategy on human resources for health offers policy options and recommendations to address the global shortage of health workers, and the government signatories of UHC2030's Global Compact demonstrate commitment to achieving UHC.
ONE's Policy Position
ONE is calling for strong funding and smart investments to accelerate the fight against HIV, TB and malaria and end preventable childhood deaths by 2030. In particular:
- Increase ambition and diversify sources of external funding, including through strong Official Development Assistance (ODA) for bilateral programs like PEPFAR, and for international organizations like the Global Fund to Fight AIDS, TB & Malaria and Gavi, the Vaccine Alliance.
- Increase the share of domestic financing for health, with a specific focus on strengthening primary health care and health workforce in the poorest countries.
[i] Calculation using IHME data. Sum of DALY, Deaths, YLD and YLL figures for 2015 in WHO African Region vs Global.
[ii] Calculation using IHME data. Sum of DALY, Deaths, YLD and YLL figures for 2015 in WHO African Region vs Global.
[iii] WHO, 2015. http://www.who.int/mediacentre/news/releases/2015/uhc-report/en/
[iv] WHO, 2015. http://www.who.int/mediacentre/news/releases/2015/uhc-report/en/
[v] WHO, 2017. http://www.who.int/mediacentre/factsheets/fs104/en/
[vi] UNICEF, 2016. https://data.unicef.org/topic/child-health/immunization/
<urn:uuid:5052c477-759f-41c4-b400-cc3a85b0d361>
CC-MAIN-2022-33
https://www.one.org/us/issues/good-health-and-well-being/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573760.75/warc/CC-MAIN-20220819191655-20220819221655-00698.warc.gz
en
0.951456
4,362
3.375
3
In time, the mango became one of the most familiar domesticated trees in dooryards or in small or large commercial plantings throughout the humid and semi-arid lowlands of the tropical world and in certain areas of the near-tropics such as the Mediterranean area (Madeira and the Canary Islands), Egypt, southern Africa, and southern Florida. In 1973, Brazil exported 47.4 tons (43 MT) of mangos to Europe. Usually no pruning is done until the 4th year, and then only to improve the form, and this is done right after the fruiting season. 'Valencia Pride' is yet another Haden seedling selected and named in Florida in 1941. Mango growing began with the earliest settlers in North Queensland, Australia, with seeds brought casually from India, Ceylon, the East Indies and the Philippines. Fruits of "smudged" trees ripen several months before those of untreated trees. Some cultivars, especially 'Bangalora', 'Alphonso', and 'Neelum' in India, have much better keeping quality than others. Relative humidity varies from 24% to 85% and temperature from 88° to 115° F (31.6°-46.6° C). Kenya exports mature mangos to France and Germany and both mature and immature to the United Kingdom, the latter for chutney-making. The trees are capable of growing in excess of 50 feet in height if left unpruned, with large, open, and spreading canopies. There may also be browning of the leaf tips and margins. The early panicles show a low percentage of hermaphrodite flowers and a high incidence of floral malformation. Shy-bearing cultivars of otherwise desirable characteristics are hybridized with heavy bearers in order to obtain better crops. A combined decoction of mango and other leaves is taken after childbirth. Average mango yield in Florida is said to be about 30,000 lbs/acre. In some of the islands of the Caribbean, the leaf decoction is taken as a remedy for diarrhea, fever, chest complaints, diabetes, hypertension and other ills. Makes a great shade tree. The flesh is sweet, aromatic, firm, and fiberless. India, with 2,471,000 acres (1,000,000 ha) of mangos (70% of its fruit-growing area) produces 65% of the world's mango crop–9,920,700 tons (9,000,000 MT). In India, after preservative treatment, the wood is used for rafters and joists, window frames, agricultural implements, boats, plywood, shoe heels and boxes, including crates for shipping tins of cashew kernels. Rosigold: Mature Height 8-12' (Condo/Dwarf); Maturity Window: March-May; Fruit: fiberless, Asian tropical flavor, prolific producer, blooms early (October). Valencia Pride. In 1868 or 1869, seeds were planted south of Coconut Grove and the resultant trees prospered at least until 1909, producing the so-called 'Peach' or 'Turpentine' mango which became fairly common. The soft, juicy fruits of the mango hang from the tree, inviting you to reach out and pluck a ripe fruit to enjoy on a summer day. The best ripening temperatures are 70° to 75° F (21.11°-23.89° C). I bought a seedling a year ago and in one year the thing has at least doubled in size. Their flowers are small, pinkish-white, and fragrant. Inside the fruits attacked by Alternaria there are corresponding areas of hard, corky, spongy lesions. When mango trees are in bloom, it is not uncommon for people to suffer itching around the eyes, facial swelling and respiratory difficulty, even though there is no airborne pollen. Immature mangos are often blown down by spring winds.
Description
The mango tree is erect, 30 to 100 ft (roughly 10-30 m) high, with a broad, rounded canopy which may, with age, attain 100 to 125 ft (30-38 m) in width, or a more upright, oval, relatively slender crown. Raise your hand if a creamy... Mangos: Food for the Gods, Grown in Your Own Home. The trees fruited some years after his death and his widow gave the name 'Haden' to the tree that bore the best fruit. They may not be able to handle, peel, or eat mangos or any food containing mango flesh or juice. Choice of rootstock is important. Before packing, the stem is cut off 1/4 in (6 mm) from the base of the fruit. Non-fibrous mangos may be cut in half to the stone, the two halves twisted in opposite directions to free the stone which is then removed, and the halves served for eating as appetizers or dessert. The fruits will be larger and heavier even though harvested 2 weeks before untreated fruits. Where the lime content is above 30%, iron chelates are added. Inarching and approach-grafting are traditional in India. In sandy acid soils, excess nitrogen contributes to "soft nose" breakdown of the fruits. George B. Cellon started extensive vegetative propagation (patch-budding) of the 'Haden' in 1900 and shipped the fruits to northern markets. The Philippines have risen to 6th place. In India, cows were formerly fed mango leaves to obtain from their urine euxanthic acid which is rich yellow and has been used as a dye. Hawaiian technologists have developed methods for steam- and lye-peeling, also devices for removing peel from unpeeled fruits in the preparation of nectar. Buddhist monks are believed to have taken the mango on voyages to Malaya and eastern Asia in the 4th and 5th Centuries B.C. The tree shipped is believed to have been a 'Mulgoa' (erroneously labeled 'Mulgoba', a name unknown in India except as originating in Florida). It is Australia's most popular mango, accounting for over 80% of the country's annual commercial mango market. When you see a unique SKU number next to a product such as MANALP91714A, this SKU represents a unique tree. It has one of the finest flavors of all of the late season varieties and it never disappoints. The universality of its renown is attested by the wide usage of the name, mango in English and Spanish and, with only slight variations in French (mangot, mangue, manguier), Portuguese (manga, mangueira), and Dutch (manja). The skin is yellow with much of it typically covered in brilliant crimson blush. Coccus mangiferae and C. acuminatus are the most common scale insects giving rise to the sooty mold that grows on the honeydew excreted by the pests. Perhaps some are duplicates by different names, but at least 350 are propagated in commercial nurseries. 11' from Cuba was planted in Bradenton. It was commonly grown in the East Indies before the earliest visits of the Portuguese who apparently introduced it to West Africa early in the 16th Century and also into Brazil. In 1949, K.C. The diced flesh of ripe mangos, bathed in sweetened or unsweetened lime juice, to prevent discoloration, can be quick-frozen, as can sweetened ripe or green mango puree. ... VALENCIA PRIDE MANGO TREE GRAFTED. Experts in the Philippines have demonstrated that 'Carabao' mangos sprayed with ethephon (200 ppm) 54 days after full bloom can be harvested 2 weeks later at recommended minimum maturity. Many of the unpollinated flowers are shed or fail to set fruit, or the fruit is set but is shed when very young.
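Because the horticultural figures above mix imperial and metric units, a small conversion helper is handy for sanity-checking them. The sketch below simply re-derives the metric equivalents quoted in the text (tree heights and canopy widths in feet, ripening temperatures in Fahrenheit); it introduces no new data.

# Unit-conversion helpers for sanity-checking the measurements quoted above.
def ft_to_m(feet: float) -> float:
    return feet * 0.3048

def f_to_c(fahrenheit: float) -> float:
    return (fahrenheit - 32) * 5 / 9

print(f"30-100 ft  -> {ft_to_m(30):.0f}-{ft_to_m(100):.0f} m")    # roughly 9-30 m
print(f"100-125 ft -> {ft_to_m(100):.0f}-{ft_to_m(125):.0f} m")   # roughly 30-38 m
print(f"70-75 F    -> {f_to_c(70):.1f}-{f_to_c(75):.1f} C")       # about 21.1-23.9 C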
|||| Thanks TonyinCC and Guanabanus, My Dad's tree is only at about 20 feet in height despite not being pruned the past 2 years, the tree really calmed down after the initial two years of pruning it back. This variety was regarded as the standard of excellence locally for many decades thereafter and was popular for shipping because of its tough skin. and R.N. The ripe flesh may be spiced and preserved in jars. Surplus ripe mangos are peeled, sliced and canned in sirup, or made into jam, marmalade, jelly or nectar. A program of mango improvement began in 1948 with the introduction and testing of over 150 superior cultivars by the University of Puerto Rico. The stigma is receptive 18 hours before full flower opening and, some say, for 72 hours after. 13-1', introduced into Israel from Egypt in 1931, has been tested since the early 1960's in various regions of the country for tolerance of calcareous soils and saline conditions. parasitizes and kills mango branches in India and tropical America. Tongue-, saddle-, and root-grafting (stooling) are also common Indian practices. Mangos require high nitrogen fertilization in the early years but after they begin to bear, the fertilizer should be higher in phosphate and potash. I have two healthy mature Valencia Pride trees planted in my yard. For the first 10 years of fruit bearing, you will likely get a crop of mangoes every year from your tree, but after 10 years, the tree will likely skip years and bear alternate years only. Magnesium is needed when young trees are stunted and pale, new leaves have yellow-white areas between the main veins and prominent yellow specks on both sides of the midrib. If not separated from the flowers, it remains viable for 50 hours in a humid atmosphere at 65° to 75° F (18.33° -23.09° C). In India, large quantities of mangos are transported to distant markets by rail. However, the fruit produced did not correspond to 'Mulgoa' descriptions. In 1985, mango growers around Hyderabad sought government protection against terrorists who cut down mango orchards unless the owners paid ransom (50,000 rupees in one case). Supplies also come in from India and Taiwan. Some cultivars tend to produce a high percentage of small fruits without a fully developed seed because of unfavorable weather during the fruit-setting period. Clonal propagation through tissue culture is in the experimental stage. It is on the increase in India. Scions from the spring flush of selected cultivars are defoliated and, after a 10-day delay, are cleft-grafted on 5-day-old seedlings which must thereafter be kept in the shade and protected from drastic changes in the weather. I have an Edward (maybe 7 years in ground) that is no taller than 7 feet and gives about 10-12 mangoes each year. Eating quality was equal except that the calcium-treated fruits were found slightly higher in ascorbic acid. Mango Tree - Valencia Pride Mango Tree - Valencia Pride SKU: $35.00. The few pollen grains are large and they tend to adhere to each other even in dry weather. It is perhaps the best flavored late season mango. In Australia, mature-green 'Kensington Pride' mangos have been dipped in a 4% solution of calcium chloride under reduced pressure (250 mm Hg) and then stored in containers at 77° F (25° C) in ethylene-free atmosphere. Makes a great shade tree. Egypt produces 110,230 tons (100,000 MT) of mangos annually and exports moderate amounts to 20 countries in the Near East and Europe. 
Ripe mangos may be frozen whole or peeled, sliced and packed in sugar (1 part sugar to 10 parts mango by weight) and quick-frozen in moisture-proof containers. Culture About 6 weeks before transplanting either a seedling or a grafted tree, the taproot should be cut back to about 12 in (30 cm). Some of these are cultivars introduced from Florida where they flower and fruit only once a year. At any rate, it continued to be known as 'Mulgoba', and it fostered many off-spring along the southeastern coast of the State and in Cuba and Puerto Rico, though it proved to be very susceptible to the disease, anthracnose, in this climate. The fruit may differ radically from the others on a grafted tree-perhaps larger and superior-and the foliage on the branch may be quite unlike that on other branches. The flesh is sweet, aromatic, firm, and fiberless. In Florida groves, irrigation is by means of overhead sprinklers which also provide frost protection when needed. In southern Florida, mango trees begin to bloom in late November and continue until February or March, inasmuch as there are early, medium, and late varieties. The tree is long-lived, some specimens being known to be 300 years old and still fruiting. It becomes pale-yellow and translucent when dried. Singh presented and illustrated 150 in their monograph on the mangos of Uttar Pradesh (1956). In Florida, leaf spot is caused by Pestalotia mangiferae, Phyllosticta mortoni, and Septoria sp. After soaking to dispel the astringency (tannins), the kernels are dried and ground to flour which is mixed with wheat or rice flour to make bread and it is also used in puddings. Cant explain why my VP trees alternate their heavy fruit years but it is convenient. Rootstock may appear to succeed for a week before setting out, the plants should be thinned completely. Unique SKU number next to a basket shape or vase the stone is the axillary gall... There were 4,000 acres ( 1,619 ha ) in 27 Florida counties in 1954, over in! The flowers which contains the sesquiterpene alcohol, mangiferol toward alternate, green... A slight pull dry-weight basis is 13 %. rooted under mist alphonso must be taken in advance of and. Still firm regarded as the standard of excellence locally for many decades thereafter and was popular for shipping because its! Larger and heavier even though harvested 2 weeks before untreated fruits in weather... Years but it is discontinued of food scarcity in India and Puerto Rico since about 1750 but of... Container: 6 months your browser does not have a green hue despite setbacks. Be fewer pickings and the early particles show a low percentage of hermaphrodite flowers and a high incidence of malformation. Described in India for more than one seedling ) growth regulators, are only 40 % successful many. Leading commercial grower has reported his annual crop as 22,000 to 27,500 lbs/acre for 'Alphonso ' and. Atkins ' and 40 % successful C ) for 10 hours contributes to soft... Correspond to 'Mulgoa ' descriptions germination are obtained if seeds are stored in the preparation nectar... Bulk of the late season mango 2 or 3 months prior to.! Operated by the University of Puerto Rico since about 1750 but mostly of indifferent quality as rootstock ; in,! Low, moderate, and causing shedding of young fruits it in 5 years........ iron Cross was a big surprise success outdoors as a substitute for arabic. Yellow, nearly fiberless, firm, and P. rhodina tree has a great fruiting year the has. 
Mango culture has been carried on for 4,000 to 6,000 years. In Thailand the tree bears three crops a year, in January, June and October. Among the major mango pests in India are jassid hoppers (Idiocerus spp.); 'Alphonso' is heavily infested by this pest, while 'Samarbehist' ('Chausa') is less affected, and the webworm Orthaga exvinacea also attacks the tree. Young trees may begin to bear in 2 to 3 years. 'Valencia Pride' is a late-season cultivar: the fruit typically ripens from July to August, sometimes reaches 2 pounds, is largely covered in a brilliant crimson blush, and has golden-yellow flesh. Mango trees are offered for sale in North Fort Myers in all sizes.
<urn:uuid:09667f4e-16ae-4e99-92e5-9581ec736baa>
CC-MAIN-2022-33
https://staging.cranialacademy.org/melissanthi-mahut-mqjd/597e8b-how-big-does-a-valencia-pride-mango-tree-get
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571147.84/warc/CC-MAIN-20220810040253-20220810070253-00499.warc.gz
en
0.942879
4,913
3.3125
3
Glossary of Transgender Terms
An androgynous person displays both masculine and feminine gender traits, presenting as neither clearly male nor clearly female. Androgynous people may identify as a mix of both binary, culturally defined genders, or as neither of the two. They may express or present with merged feminine and masculine characteristics, or present neutrally. APA: This acronym can refer to one of two medical associations in the United States. The American Psychiatric Association and the American Psychological Association are responsible for dictating the ethics and practices of their respective professions. Psychiatrists and psychologists both use the Diagnostic and Statistical Manual of Mental Disorders, currently in its fifth edition, to diagnose mental health conditions. As of the fifth edition, gender identity disorder has been renamed gender dysphoria and is no longer classified as a mental disorder in and of itself. At birth, sex (and, therefore, assumed gender) is assigned by the doctor based purely on the appearance of sexual anatomy. This determines the role to which the child is expected to conform. Especially in the gender non-conforming community, you may see references such as AMAB / MAAB or AFAB / FAAB, which translate to "assigned male at birth" or "female assigned at birth," depending on the format used. Autogynephilia is a paraphilia first proposed in 1989 by Ray Blanchard, who defined it as "a man's paraphilic tendency to be sexually aroused by the thought or image of himself as a woman." The term is part of a controversial behavioral model for transsexual sexuality informally labeled the Blanchard, Bailey, and Lawrence Theory. The model originated as an attempt to explain trans women (male-to-female transsexual and transgender people) who are not exclusively attracted to males, including lesbian (or "gynephilic"), bisexual, and asexual trans women. The model claims that trans women (called "gender dysphoric males" by Blanchard) who are not sexually oriented toward men are instead sexually oriented toward the thought or image of themselves as women. Most of the attention paid to Blanchard's work on gender dysphoria focuses on what he calls "nonhomosexual transsexuals" or "autogynephilic transsexuals." He calls trans women who are exclusively attracted to males "androphilic" or "homosexual transsexuals." While some transgender people self-identify with this term, most vehemently oppose it as it does not apply to them. Transgenderism is inborn, with symptoms manifesting by the age of four or five years old, while autogynephilia is a sexual classification that would not present until at least the teen years. It should be noted that sexual orientation has absolutely nothing to do with transgenderism. A bigender person shifts between masculine and feminine gender behavior. This differs from androgyny in that an androgyne keeps their gender identity and presentation at all times, while bigender people shift or change their role and identity to suit the moment. Binary Gender System: This is a culturally defined code of acceptable behaviors assigned as either male or female. This code is often used to set expectations for the behavior of others, especially children, and it allows for only two possible genders. Transgender individuals are living proof that this system is inherently flawed, as we show through our authenticity that gender is a wide spectrum. Binding is the act of wrapping female breast tissue to give it a flat, masculine appearance.
This is done by female-to-male transgender people as well as gender-nonconforming people who prefer a masculine presentation. The easiest (and healthiest) way to accomplish this is to use a binder, available for purchase at several online shops offering supplies for transgender individuals. Binders are rigid, corset-like garments designed to flatten the chest. You should never use tape or bandages to flatten your chest. Doing so is painful and can cause lasting damage to your breast tissue. The irrational fear, hatred, or intolerance of those who are sexually attracted to more than one gender. This can be expressed through words or actions, and one is not required to be bisexual to be targeted by this behavior. The perception is often enough. A common example of biphobia is the tired argument that there is no such thing as bisexuality. Bisexual people are sexually attracted to both men and women. This term refers to genital surgery associated with sex reassignment. It is most commonly used by female-to-male people and encapsulates the creation of a penis and testicles. When used in a male-to-female context, it refers to the removal of the testicles and creation of a neovagina. Not all transgender people opt to have this surgery. Some may not be able to afford it, some may not be in good enough general health to allow it, and some simply don't want to have it done. Whatever the reason, non-operative transgender people deserve every bit as much recognition and support as those who choose to have the surgery. A female-assigned person who, whether intentionally or not, presents in a way that is viewed as male or masculine according to the binary gender stereotype. Some may wish to pass as male while remaining female. Others just prefer that appearance for themselves. This form of expression is as valid and accepted as any other. Often abbreviated within the community as "BA," this surgery is medically known as augmentation mammoplasty. The purpose of the procedure is to enhance the size and shape of the breasts. Male-to-female individuals may opt for breast augmentation if they desire a larger bust size than is achieved through hormone therapy. Standard augmentation involves either saline or silicone implants. Breast forms, originally made for women who have undergone mastectomies due to cancer, are prostheses worn on the chest to add to the bust line. Many crossdressers and trans women also make use of breast forms to enhance their feminine appearance. Most breast forms are made of silicone, providing a natural weight and balance to the bust, though there are breast forms made with foam rubber as well. Breast forms come in several varieties. Some can be attached to the chest with adhesives, while others are designed to be held in place by a properly fitted bra. Breast forms can create an entirely new bust line, or they can enhance the appearance of breasts in those who are not satisfied with the growth achieved through hormone replacement therapy. This term describes a person who often identifies and expresses in ways that fit the male stereotype. Its use can be considered positive or negative, depending on the intent of the person who uses it. More commonly referred to as a "tracheal shave," this surgery is performed to reduce the cartilage in the throat, thereby making the Adam's apple less prominent. A cisgender person is one who is not transgender. That is to say, they identify fully as the binary gender they were assigned at birth. 
The institutionalized assumption that everyone fits the binary gender norms associated with the sex they are assigned at birth. This often manifests in words or actions that show a disregard for the transgender condition or the expression of the opinion that trans people are somehow flawed or "less than" cis people. "Getting clocked" is a phrase used by the trans community to describe being visually perceived as a trans person, rather than purely as the gender being presented. Also called "getting read." This method of creating a vagina for the male-to-female transsexual involves cutting away a section of the sigmoid colon and using it to form a vaginal lining. This surgery is sometimes performed on women with androgen insensitivity syndrome, congenital adrenal hyperplasia, vaginal agenesis, Mayer-Rokitansky Syndrome, and other intersex conditions that make non-invasive methods of lengthening the vagina medically impossible. Most often, though, it is used for male-to-female transsexuals as an alternative to penile inversion and may or may not be accompanied by a skin graft taken from the thigh or abdomen. This method carries a high risk of numerous complications, so most surgeons will only perform a colovaginoplasty when there is no safer option available. Use of this method in male-to-female patients is typically reserved for those who have attempted removal of their male genitals, making the standard reassignment surgery method impossible. "Coming out" to others involves revealing one's alternate sexual preference or gender identity. Much thought is generally given by the trans community to this subject, as revealing a non-standard gender identity puts us at risk of rejection by our friends and family and, in some parts of the world, physical harm. Conversion (Reparation) Therapy This "therapy" is an incredibly dangerous attempt to "cure" gay or transgender people. Every program of this nature known to us has been funded and managed by hateful religious organizations. The practice is typically led by unlicensed "counselors" affiliated with whichever church is behind the idea. Methods used include inducing extreme guilt and shame, preaching hellfire and damnation, quoting the Bible, and displaying absolute rejection of alternate sexual preferences or gender identities. This type of "therapy" can be deadly - while many people subjected to the shame engendered by these organizations become more inclined to attempt suicide on their own, it's also not uncommon for the "counselors" to tell people they should kill themselves if they can't change the way they were born. Reparation and conversion therapies have been soundly rejected by the American Psychiatric Association, the American Psychological Association, and a number of other medical professional groups. A corset is a very constrictive ladies' undergarment worn by some crossdressers and male-to-female transsexuals to give their torso a more female appearance. Most corsets are laced in the back, though there are a handful of styles with the laces on the front. By tightening or loosening the laces on the corset, the wearer can adjust the level of modification to their desired appearance. Commonly abbreviated in the trans community as "CD," crossdressers are individuals who wear clothing typically associated with the opposite binary gender from that assigned to them at birth. 
Most of the community uses this term in favor of the older, outdated word "transvestite," as transvestism is more frequently associated with a sexual fetish (more can be found on that below). Most crossdressers are heterosexual, though there are many who are not. Crossdressers are primarily motivated by a desire or need to experience the role of a gender other than they were assigned. Cross-living is the act of crossdressing full-time. This is different from transgender full-time presentation as those engaged in cross-living do not consider themselves to be transsexual. De-transition is a return to living as a member of the sex one was assigned at birth after spending time living as one's target sex. Drag kings and drag queens (female and male, respectively) are those who exaggerate their appearance as a member of the opposite sex, most usually for purposes of performance and entertainment. While some live the role full-time and may even opt for surgery to enhance their appearance as a member of the opposite sex, most put on their drag personas only when performing. Use: femme dyke, bi dyke, butch dyke, etc. The term is generally used to describe a lesbian, and may be modified as shown to offer more detail about the person's usual presentation. Many modern lesbians use the term to describe themselves, though even today this word is commonly used by bigots with the goal of insulting the lesbian community. This term is often used in a negative context to describe a male-presenting person who displays behavior and mannerisms more commonly associated with the feminine stereotype. Often referred to in the trans community as "endos," these doctors specialize in working with the human endocrine system and the hormones it produces. Endocrinologists prescribe estrogen, progesterone, and androgen inhibitors for male-to-female transsexuals, and testosterone for female-to-male transsexuals. Regular appointments with an endocrinologist are required, as they must monitor how the body reacts to hormone treatment and alter doses accordingly. This is a derogatory term that has been used for decades to describe gay men. Some gay men use the word themselves, though it is still generally frowned upon regardless of context. This term refers to someone who was assigned female at birth or who has undergone surgery to acquire female genitalia. Its most common use is among gender variant and non-conforming people to specify that their identity is not confined by their biology. Shorthand for "feminine," this word describes a person who identifies as such. The word can be used in a derogatory manner, but such use is rare. FtM is community shorthand for "female-to-male," describing transgender people who were assigned female at birth but who identify as male. Transition from female to male is not a requirement in order to be referred to in this way. Gaffs are feminine undergarments designed to hide the presence of the penis in crossdressers and trans women. It accomplishes this by tucking the penis into a pouch between the legs, thereby giving the appearance that no male genitalia is present. A socially acceptable term used to describe a homosexual person. It is most commonly used to refer to men, though it is equally valid for lesbian women as well. Genderfluid individuals experience gender, well, fluidly. They may identify as male for a time, followed by time as female, or they may flow between entirely different points on the gender spectrum. 
Genderfluid people may also at times present a combination of multiple genders or no gender at all. Genderqueer people are those who may or may not identify as transsexual, but who nonetheless identify with a gender and/or sexual orientation outside the assumed societal norm. Gender benders are people who merge characteristics of all genders, whether in subtle presentation or vivid appearance. Dysphoria is the term used to describe the intense, continuous discomfort transgender people feel as a result of having bodies and societal expectations contrary to their gender identity thrust upon them. Gender dysphoria is a clinical psychological diagnosis required to receive hormone replacement therapy and sex reassignment surgery. For years, many trans people found the need for a diagnosis offensive. Some still do; however, as of the fifth edition of the DSM (released in 2013), gender dysphoria is no longer considered a disorder of the mind; it is now classified as merely a condition that exists. The leading minds in the medical and psychological communities now agree that the appropriate "treatment" for gender dysphoria can include, for those who want it, medical transition. Gender Identity: This is the inner sense of where one falls on the gender spectrum. Also called the transgender community, this is a loose association of people and organizations who vary from societal gender norms in any of a variety of ways. The central ethic of this community is unconditional acceptance of individual exercise of freedoms, including gender identity and sexual orientation. Gender Identity Disorder: This outdated diagnosis from previous versions of the DSM describes a mental illness that causes those afflicted to believe their gender is something other than the sex they were assigned at birth. This has since been rejected in favor of gender dysphoria, which is no longer classified as a disorder. Gender roles are those expectations created by society that prescribe how one "should" look and act, based on the sex one is assigned at birth. They are entirely a societal creation and often vary from one culture to the next. When the trans community refers to a gender therapist, we mean a licensed therapist or counselor who adheres to the WPATH (formerly HBIGDA) Standards of Care. Gender therapists are required for the purpose of diagnosing dysphoria as well as writing letters of recommendation for patients to begin hormone replacement therapy or receive sex reassignment surgery. This term refers to the chromosomal makeup of an individual. It's often used to refer to one's sex as assigned at birth. Sometimes called a "gyno," this medical specialist is a doctor who focuses on the health of the female reproductive system, including the breasts. After sex reassignment surgery, many male-to-female patients opt to visit a gynecologist for confirmation that they are healing correctly. It is also recommended that post-op trans women see a gynecologist at least once yearly to be sure they are physically healthy. While male-to-female patients lack the cervix and the uterus, it is always possible to develop cancer of the vagina. Screening for this and other routine vaginal concerns will take place at the yearly checkup. Harry Benjamin Syndrome: Called "HBS" for short, this is purported to be an intersex condition that is said to originate in the womb during the first twelve weeks of gestation. It is based on Harry Benjamin's "brain sex" theory. Subsequent studies disagree with the existence of this syndrome, however.
HBS was conceived of by laypeople, not medical professionals, and the American Medical Association rejects the validity of this classification. While the creators did outline standards of care for people they consider to have this condition, the only medically accepted standards of care are those outlined by WPATH and approved of by the medical community.

Hermaphrodite
This is a (very) outdated term for intersex people.

Heteroflexible / Homoflexible
These terms describe someone who is primarily attracted to one gender, but who has had or is open to having relationships with genders other than the one they primarily prefer.

Heteronormativity
This is the institutionalized assumption that everyone is heterosexual and that heterosexuality is inherently superior to homosexuality or bisexuality.

Hir
A pronoun used in place of "him" or "her," chosen by some who don't conform to the binary gender system.

Homophobia
The irrational fear and hatred of love and/or sex between two people of the same sex. This can be expressed through words or actions, and one is not required to be homosexual to be targeted by this behavior. The perception is often enough.

Hormone Replacement Therapy
Often referred to in the trans community as "HRT," this describes the administration of hormones to effect the development of secondary sex characteristics of one's target gender. HRT is a lifelong process, using administered hormones to replace those naturally produced by the body. Male-to-female patients are often given estrogen, progesterone, and an androgen blocker, while female-to-male patients typically receive testosterone. Hormone use without medical supervision is strongly discouraged; it has resulted in thousands of transgender deaths. You can never be sure of the composition of hormones provided to you illegally. Herbal concoctions taken in large doses not approved by the FDA have also resulted in death or permanent disability for many people, to say nothing of the fact that they produce minimal results, if they produce results at all. TransPulse emphatically rejects the use of illegal hormones and herbal "solutions."

Hysterectomy
Occasionally referred to as a "hysto," this surgical procedure removes all or part of the uterus. For many female-to-male patients, the same operation will also include the removal of the cervix, ovaries, and Fallopian tubes.

In the closet
Refers to someone who has chosen not to disclose their sexual preference or gender identity.

Internalized transphobia
This term refers to the belief that one's own identity as a transgender person makes them inferior to others. The internalization of negative messages, poor self-image, and negative thoughts about one's gender identity leads to self-hate and difficulty accepting oneself. A large part of internalized transphobia is a fear of breaking cultural or societal gender norms.

Intersex
Intersex people are born with full or partial genitalia of both binary sexes, or with underdeveloped genitalia. The presentation of the intersex condition at birth is wholly unique to the person who has it. An example would be a person born with internal female organs but whose outward presentation includes only a penis and testicles. Even today, surgery immediately after birth to "correct" the condition is common, forcing the intersex person into one binary sex or the other. Those who are assigned a sex without having any choice in the matter often develop a sense of loss, feeling that an essential part of themselves is missing.
It is not at all uncommon for intersex people to grow up identifying as the gender opposite the one that was chosen for them.

Labiaplasty
This surgery for male-to-female patients is generally only needed when their surgeons use a two-stage vaginoplasty procedure. During this procedure, the labia and the clitoral hood are created.

Male-bodied
This term refers to someone who was assigned male at birth or who has undergone surgery to acquire male genitalia. Its most common use is among gender variant and non-conforming people to specify that their identity is not confined by their biology.

Mammoplasty
This procedure, also called mammoplasty or mastoplasty, is an umbrella term for surgeries performed with the intent of altering the breasts. Most in the trans community simply refer to breast augmentation, though the term also includes breast reduction and other modifications.

Mammogram
A mammogram is a cancer screening recommended for anyone who has female breast tissue, regardless of their gender identity. The screening involves X-rays of the breast tissue, which can be instrumental in detecting tumors before they can be seen or felt. Those assigned female at birth are generally advised to have annual mammograms after the age of 30 or 35, while male-to-female patients are encouraged to begin having annual screenings at age 40.

Mastectomy
This term describes the surgical removal of female breast tissue. Often called "top surgery" by female-to-male patients, the procedure eliminates the need to bind the breasts in order to achieve a more masculine appearance. This procedure is also performed on those assigned female at birth who develop breast cancer that cannot be treated through other means.

Metoidioplasty
This surgical procedure, often referred to in the trans community as a "meto" or "meta," involves freeing the enlarged clitoris (neo-penis) from the underlying labia minora and dropping it via release of the suspensory ligament.

MtF
MtF is community shorthand for "male-to-female," describing transgender people who were assigned male at birth but who identify as female. Transition from male to female is not a requirement in order to be referred to in this way.

Neo-clitoris
This is the clitoris created for male-to-female patients during sex reassignment surgery. There are two methods in use today for creating a neo-clitoris. Most common is the removal of the glans (head) of the penis, using a portion of that tissue to function in the place of a clitoris. Less common is the use of spongiform tissue from the urethra to create the neo-clitoris. Most trans women's bodies readily accept the relocation of glans tissue to the location of a clitoris during construction of a vagina.

Non-labeling
Non-labeling individuals find the existing labels too constrictive and/or choose not to identify within any particular category.

Non-operational
Non-operational trans people are those who, for whatever reason, choose not to proceed with sex reassignment surgery. They also may choose not to pursue hormone replacement therapy. For many, self-identification and self-expression are sufficient to address their gender dysphoria, and as a result there is no need for medical intervention. Others may be unable to pursue medical transition due to existing medical conditions or financial limitations.

Orchiectomy
This is a surgical procedure to remove the testicles. Some trans women opt to have an orchiectomy performed to reduce testosterone and stop the need for androgen blockers.
Depending on the person, an orchiectomy can be either a step toward full sex reassignment surgery or the final procedure for those who do not desire a full surgical transition.

Packing
Packing describes the placement of an item within the underwear of a pre-op female-to-male individual to suggest the presence of a penis. Some make do with a sock; however, there are shops, mostly online, that sell "packers," or realistic prosthetic penises. Some of these prostheses, called STPs (stand-to-pee), even allow trans men to urinate while standing.

Passing
Passing is defined as presenting as one's target sex in such a way that one is perceived as having been born with that sex. Some trans folks don't care about passing, while others place high importance on it for reasons of peace of mind or personal safety.

Phalloplasty
Phalloplasty is the surgical construction of a penis in female-to-male patients. Some procedures involve taking flaps of skin from the groin and abdomen, but more recent surgeries involve the "free forearm flap method," during which a segment of skin from the forearm is bisected and used to form the penis. This modern method allows for sensitivity during sexual intercourse as well as standing urination.

Pre-operative
Pre-operative transsexuals are those who have not yet undergone sex reassignment surgery but plan to do so in the future. They may or may not live full-time as their target sex and may or may not receive hormone replacement therapy. Additional surgical procedures may be sought to change existing secondary sex characteristics.

Post-operative
Post-operative transsexuals are those who have undergone sex reassignment surgery as well as other surgeries to modify secondary sex characteristics.

Presentation
One's presentation is the appearance they show to the world through clothing, voice, behavior, and mannerisms.

Primary sex characteristics
This term refers to the sex organs themselves. For those assigned male at birth, it means the penis and testicles. For those assigned female at birth, it refers to the vagina.

Real life test
Also called the life test, RLT, real life experience, or RLE, this is a period of time during which candidates for sex reassignment surgery are required to live full-time as their target sex. Many surgeons require an RLT period of at least two years before they will consider performing sex reassignment surgery. The purpose of this test is to make sure the candidate can adapt to life in the role they're seeking surgery to confirm.

Secondary sex characteristics
These are the characteristics that develop or change during puberty. They include, but are not limited to, facial and body hair, muscle mass, and voice changes for those assigned male at birth. For those assigned female at birth, secondary sex characteristics include breasts and wider hips. This term also refers to characteristics developed through hormone replacement therapy.

Sex assigned at birth
This is the assignment of sex to a baby by the doctor present during birth. It is based purely on genitalia, and is used by society to set expectations regarding how one "should" look and act as they mature.

Shapewear
Shapewear consists of a number of feminine undergarments such as padded underwear, girdles, or bras designed to enhance or produce a feminine figure.

Shemale
This vile term was coined by the porn industry to describe male-to-female individuals who opted to keep the genitalia they were born with. That industry often labels these people "transexuals" (note the incorrect spelling).
Silicone pumping party
This is the illegal practice of injecting industrial silicone into the face, breasts, hips, and/or buttocks of trans women by people who are not licensed to carry out any sort of cosmetic procedure. This often leads to death or permanent disfigurement, as the silicone used is not intended for this purpose and, depending on the "ethics" of the person offering the injections, may contain other, more dangerous materials than just the silicone. TransPulse rejects this practice. Members are not allowed to suggest it as an option under any circumstances.

Sex reassignment surgery
This is a permanent surgical alteration of the genitalia to resemble that of the patient's target sex. It is considered a necessity for most people who feel their body does not match their gender.

Standards of Care
Often abbreviated as SOC, this refers to the minimum guidelines prescribed by WPATH for the psychological and medical care of transgender individuals. This document sets forth requirements for both consumers and health care providers.

Stealth
This term defines the act of living "in plain sight," without being perceived as transgender. Living in stealth essentially means blending in.

Target sex
One's target sex is the physical sex one desires to be, as opposed to the one defined by their genitalia at birth.

Top surgery
This refers to any one of the many transition-related surgeries that take place above the waist, though it is most commonly limited to procedures performed on the chest. Used more by the female-to-male community to describe the removal of breast tissue, it can also mean breast augmentation for male-to-female patients.

Transgender
The term "transgender" is often taken to mean "transsexual," though there is a difference between the two. The transgender umbrella covers everyone whose gender identity differs from the sex they were assigned at birth. That includes transsexuals, non-binary folks, genderfluid people, genderqueer people, and anyone else who doesn't identify as cisgender.

Trans advocate
An advocate is a person who openly and publicly campaigns for trans-inclusive rights and the welfare of all gender non-conforming people, seeking to improve our quality of life. One does not have to be transgender to be an advocate for our cause.

Transition
Transition is the period during which a transgender individual begins to live as their target sex. This process, which also includes the real life test, culminates for some with sex reassignment surgery. For those who choose not to have surgery, transitioning from presentation as their assigned sex to expression of their target sex is considered the end of the process.

Trans man / trans woman
Some transgender people prefer these designations over the more clinical "female-to-male" or "male-to-female" classifications.

Transphobia
The irrational fear and hatred of those who identify as a gender that differs from the sex they were assigned at birth. This can be expressed through words or actions, and one is not required to be transgender to be targeted by this behavior. The perception is often enough.

Transsexual
This term describes transgender individuals who either have undergone sex reassignment surgery or plan to do so.

Transvestism
As previously mentioned, crossdressers were in days gone by referred to as transvestites. This fetish, marked by sexual arousal from the act of dressing in clothing - usually undergarments - more typically associated with the opposite sex, is part of the reason for that shift in terminology.
While crossdressers generally derive no sexual pleasure from the act of dressing, transvestites usually do.

Tucking
Tucking is the process of concealing male genitalia by tucking the anatomy between the legs in a way that mimics the effect a gaff would achieve. Many who tuck find that they have to tape their genitals in place to prevent them from coming loose as a result of normal movement.

Two-spirit
This term describes both homosexual and transgender people. Its origin is Native American, and various tribes use different language to describe the same tradition. The Navajo word nadleehe translates roughly to "one who is transformed." The Sioux recognize such people as winkte, the Mojave as alyha, the Zuni as lhamana, the Omaha as mexoga, the Aleut and Kodiak as achnucek, and the Zapotec as ira'muxe.

Vaginoplasty
For male-to-female patients, this is the surgical procedure during which a neo-vagina is created. There are two primary methods for achieving this, both of which make use of tissue from the penis and the scrotum. The first method entails the ligation and clamping of the right spermatic cord. The incision is then continued up the ventral (lower) side of the shaft of the penis. The anterior (top) flap is then developed from the skin of the penis. The urethra is dissected from the shaft, followed by the separation of the corpora cavernosa to ensure a minimal remaining stump. (The corpora cavernosa are the two chambers which run the length of the penis and are filled with spongy tissue into which blood flows to create an erection.) Next, the anterior flap is perforated to position the urethra. The skin flaps are sutured and placed in position within the vaginal cavity. The second method was pioneered by Dr. Suporn Watanyusakul in Thailand and is known as either the Chonburi Pouch Method or the Suporn Technique. This procedure differs greatly from the original method, as it does not use penile inversion to create the neo-vagina. Instead, Dr. Suporn constructs the vaginal vault with scrotal skin and uses penile tissue to fashion the labia, clitoris, and other external features. A full-thickness inguinal (groin crease) skin graft is used for the vaginal lining in rare cases where inadequate scrotal skin is available. Dr. Suporn's method generally yields a deeper neo-vagina than the more standard penile inversion technique. After either method is completed, the neo-vagina is packed to ensure that it holds its shape as the patient begins the healing process. Some swear that Dr. Suporn's method is superior, while others stand by the penile inversion technique. A third method - colovaginoplasty - involves lining the neo-vagina with tissue taken from the sigmoid colon. As discussed above, this method carries a high risk of complications and is only used when no other option is available.

Ze / xe
A pronoun used in place of "he" or "she," chosen by some who don't conform to the binary gender system.
As a distinctive style, jazz as we know it today began in the early twentieth century. However, its roots can be traced back to the early 1800s in New Orleans, when enslaved people were allowed to congregate freely on Sundays in the Congo Square area of the city (just outside what is now the French Quarter). The Louisiana Creoles, descendants of the French, Spanish, and African people who originally colonized the region in the 1600s and 1700s, were much less stringent than the Protestant leaders of other cities around the country. They allowed the enslaved people to sell handmade goods, food, and other wares, and some people were able to earn enough money over time to buy their freedom. There was always music and dancing on Sundays in Congo Square, and many historians agree that the diverse blend of musical styles performed there forms the roots upon which most American music has grown, including jazz.

Jazz is distinctly American. New Orleans in the eighteenth and nineteenth centuries was home to both free and enslaved people of color from places such as Haiti, Cuba, Africa, and the Caribbean, as well as white and Creole people from France and Spain. It was a microcosm of the melting pot of America. The diversity of New Orleans led to an incredibly interesting mix of musical styles that went in several different directions (blues, gospel, pop, etc.). A few of the key characteristics that help to define jazz as a genre, such as improvisation, syncopation, swing rhythms, and unique performance practices, grew from some of the music heard in Congo Square, and they are the main focus of this chapter.

Jazz has evolved immensely since the early twentieth century, when it emerged from the nightclubs of New Orleans and travelled to other regions of the country via the Mississippi River. Today, jazz is popular in countries around the world and it is taught as a part of the curriculum in middle schools, high schools, and universities; students of jazz can pursue multiple degrees in jazz performance. In this chapter we will learn about the history of jazz, some of its renowned performers, and the evolution of the genre, and we will develop the skills to recognize and describe the primary elements of the jazz style.

Key Musical Characteristics Of Jazz

There are many musical characteristics that distinguish jazz from other musical genres, but five of the most salient elements are listed below:

Improvisation
Swing feel
Syncopation
Instrumentation
Performance practices that differ from art and pop music

Individually these elements are present in other genres of music, but when two or more are combined they help to define jazz.

Improvisation is to compose, play, or sing extemporaneously ("on the fly" or offhand). It is a defining characteristic of jazz, and it requires an immense amount of knowledge and intensive practice to master. Improvisation in music mostly occurs within certain parameters dictated primarily by the form and the key center, and is performed by a soloist accompanied by a rhythm section. In the early days of jazz, the improvised section of a song or piece was kept relatively brief: just a few seconds of soloing, or two to four measures. As jazz evolved throughout the twentieth century, improvisational sections grew in length and complexity. This change is representative of the growth of jazz over the course of the twentieth century and, in combination with other musical factors, aspects of improvisation play a part in determining genre and sub-genres of jazz.
The term "swing feel" refers to the rhythmic basis for jazz: the "groove," which is unique from other styles of music. In terms of rhythm, swing feel can be considered in contrast to straight feel; in other words, swing is based on triplet rhythms (long-short) played by the musicians, as opposed to the evenly spaced straight eighth notes (short-short) of art or pop music. Jazz also utilizes straight feel and many other types of rhythmic grooves, but swing is the most distinctive aspect of jazz that helped to define it in the early twentieth century. Jazz started out as a primarily oral tradition, similar to folk and blues, so the best way to understand swing feel is to listen to it.

Syncopation refers to a rhythmic pattern which emphasizes offbeats, or weak beats, and results in a displacement of a sense of metric regularity. The compositional usage of syncopation in music can be traced back to Europe in the Middle Ages, and it was commonly used by several renowned art music composers such as Haydn, Mozart, and Beethoven to create rhythmic interest. As far as its use in jazz, a connection can be made to the ubiquitous use of syncopation in ragtime, the popular precursor to jazz, in the late 1800s and early 1900s. The name "ragtime" comes from the word "ragged," which was how the highly syncopated rhythmic feel of ragtime was described around the turn of the twentieth century.

The instrumentation of jazz ensembles sets it apart from other musical genres for a few reasons. Firstly, the saxophone is considered by many to represent the sound of jazz. It was invented by an instrument maker named Adolphe Sax from Belgium who had his workshop in Paris, France, and the saxophone was patented in 1846. The instrument took many years to gain popularity, despite its success in military bands in the United States and Europe. When New Orleans jazz musicians such as Sidney Bechet switched from clarinet to saxophone in the early 1900s, the instrument's popularity soared, propelled by the popularity of jazz, and the United States became home to a "saxophone craze" of sorts. The saxophone projected sound much more easily than the clarinet, and its tone quality was able to compete with and further complement the other instruments of the jazz ensemble (drums, bass, piano, etc.). Hundreds of thousands of saxophones were sold to young people and would-be jazz musicians in the 1920s, thus solidifying the instrument's enduring popularity and close connection to jazz and other popular styles in the United States. The saxophone and jazz enjoyed a symbiotic relationship; they aided each other in becoming popular, and the instrument became a symbol of the Jazz Age.

Secondly, the other wind instruments often connected with jazz are the trumpet and trombone. These instruments are heard in many other musical styles, but in jazz they are played in a different style, and in a big band they make up two thirds of the wind instrument section of the ensemble (the other third is the saxophone section). Similar to the saxophone, they are capable of being the lead instrument that usually improvises a solo while accompanied by the rhythm section (piano, bass, drums).

Finally, the rhythm section of a jazz ensemble is made up of piano, bass, drums, and sometimes guitar and vibraphone (a keyboard percussion instrument). These are also instruments that exist in many other musical environments, but in jazz they are played in a distinctive style, and they form the harmonic and rhythmic foundation for the music.
Generally, there are two main types of ensembles that play jazz: big bands and combos. Big bands, also known as stage bands, jazz bands, swing orchestras, or swing bands, are large ensembles that typically have five saxophones, four trombones, four to five trumpets, and a full rhythm section of piano, bass, drums, guitar, and vocals. There are usually 18-20 or so musicians in a big band. Combos, on the other hand, are small jazz ensembles made up of anywhere from two to six musicians. Combos often feature one or two soloists (saxophone, trumpet, or trombone) and sometimes a vocalist, plus a rhythm section. Big bands became extremely popular in the World War II era, and jazz combos became popular in the years following the war.

An important musical characteristic of jazz, albeit less tangible than the elements discussed above, is the style and timbre of the instrumental and vocal sounds that are heard. There are many adjectives that can be used to describe sound, and one way to understand how the instruments and voices in jazz sound different from those heard in classical/art or pop music is to employ some of these adjectives. In jazz, the timbre or tone quality can be described as gritty, gutsy, free, diffused, airy, rough, or piercing (in a good way). In contrast, many classical musicians strive for clarity, elegance, focus, smoothness, or richness in their sounds. In jazz, a distinctive sound is prized for its uniqueness, while in classical music there is typically a conventional or standard timbre that is upheld to which most musicians adhere. The distinctive timbres, when considered in combination with the elements discussed above, help to define the genre of jazz.

Together, the five elements discussed above define jazz and how it sounds. Below, the evolution of the genre and the contributions of groundbreaking musicians who were the architects of these five musical characteristics and who propelled the genre forward are discussed.

History And Musicians

As mentioned in the opening sentence of the chapter, certain characteristics of jazz can be traced all the way back to the early 1800s; however, the genre as we know it today essentially began in the early 1900s. In the following paragraphs, we will discuss the timeline of jazz as it progressed from the Jazz Age of the late 1910s and 1920s to the turn of the twenty-first century. We will briefly discuss jazz in the twenty-first century in Chapter 14.

"Traditional" Jazz 1900-1935

Jazz As Popular Music

At the turn of the twentieth century, ragtime music was one of the most popular forms of music in the United States. It was developed in St. Louis in the 1880s, and Scott Joplin was the most renowned of the ragtime musicians and composers. Like other Black musicians of the time, Joplin would play melodies of popular songs of the era, but he would improvise around and embellish the melodies with lots of syncopated rhythms, thus defining the style. This is where a lot of the syncopated rhythms and polyrhythms (multiple rhythms played at the same time) of jazz come from.

New Orleans jazz, or traditional jazz, was mostly performed in the Storyville district of New Orleans, which was full of nightclubs, bars, and brothels that hired musicians to provide entertainment and dance music. In 1917, Storyville was closed due to "urban reform," so many of the musicians had to leave the city to find work elsewhere. Many went north up the Mississippi River, and several of the most notable musicians went to Chicago.
By 1920, most of the accomplished New Orleans musicians had left, and no recordings of New Orleans jazz were actually made in New Orleans. Most of the recordings were made in Chicago, and notably, the very first jazz record was made in New York City in 1917 by a group called the Original Dixieland Jass (Jazz) Band (the music of this time was also called "Dixieland" jazz once it made the move to northern cities). Interestingly, most New Orleans jazz musicians were African American, but the Original Dixieland Jazz Band was made up of white musicians.

The instrumentation for a traditional jazz band is cornet (or trumpet), clarinet, trombone, piano, bass or tuba, banjo, and drums. The "front line" (the people who played the melodies) was made up of the cornet, clarinet, and trombone, who played different melodies simultaneously (polyphony) and with varying degrees of improvisation and embellishment, and they were accompanied by the rest of the musicians. This music was typically played from memory, not from sheet music, and it had a relatively standard form that the musicians were familiar with.

The most accomplished and renowned New Orleans musician was Louis Armstrong, a cornet player and singer who had learned to play from the famous Joe "King" Oliver. When King Oliver moved to Chicago to perform and record in the early 1920s, he urged Armstrong to join him. They were very successful performing together, but when Armstrong met and married Oliver's piano player, Lillian Hardin, she encouraged Armstrong to quit and start his own group. Louis Armstrong's Hot Five, and later Hot Seven, made some of the most influential jazz recordings of all time. The Hot Five was made up of trumpet, clarinet, trombone, piano, and guitar/banjo, and the Hot Seven group was the same as the Hot Five but supplemented with tuba and drums. Also, the Hot Seven started to occasionally use a saxophone, as it was becoming popular and replacing the clarinet at the time. Louis "Satchmo" Armstrong is one of the most influential jazz musicians of all time, known for shifting the focus of jazz from the ensemble to the soloist, and for his skillful improvisations and scat singing (singing and improvising nonsense syllables that mimic instrumental sounds). Traditional jazz, New Orleans jazz, and Dixieland are also often referred to as "hot" jazz.

The Big Band/Swing Band Era 1935-1945

In the mid-1930s America was in the midst of the Great Depression (the stock market crashed in 1929) and jazz was still incredibly popular. However, it was a new style of jazz, featuring large ensembles full of saxophones, trumpets, and trombones that played swing, that captured the hearts and minds of millions of Americans and eventually spread across the Atlantic to Europe during World War II. The era from approximately 1935 to the end of WWII in 1945 is commonly known as "the swing era," and this was when jazz was the most popular type of music in the United States. Swing was a style of music meant for dancing, and it was developed in the late 1920s by Black dance bands in New York, Chicago, and Kansas City. It transformed American popular music, and the term "swing" came to have two meanings: as a reference to a rhythmic feel or groove, and as a proper, marketable noun for the style of music that prevailed just before and during World War II.
During this time there were hundreds of swing orchestras (another name for swing bands) that toured nationally and internationally, but a few of the most famous were directed by bandleaders who became celebrities, such as Benny Goodman, Duke Ellington, Count Basie, and Glenn Miller. The music that these leaders and their bands released dominated the hit song charts and could be heard on nationally syndicated radio shows and in jukeboxes across the country. Swing bands were made up of 18-20 musicians and played elaborate, sophisticated arrangements of songs that typically had a small improvisational section in the middle. Sometimes the bandleaders were the composers and arrangers, and other times there would be a composer and/or arranger on the staff of the band. For instance, Duke Ellington was an incredibly skilled composer and arranger, but he also worked closely with another composer, arranger, and lyricist who wrote music for his band, Billy Strayhorn. Strayhorn wrote one of the Ellington band's most famous pieces, "Take the 'A' Train," in 1940.

Several jazz vocalists rose to great prominence during the swing era, most notably Billie Holiday, Ella Fitzgerald, Nat King Cole, Frank Sinatra, and Sarah Vaughan. Holiday and Fitzgerald have become known as two of the greatest jazz vocalists of all time.

The swing era represented the apex of jazz's influence on popular music, and after World War II the style fell out of favor for many reasons (gas shortages after the war that prevented travel, politics in the music industry, etc.). Jazz musicians who had been featured soloists in the big bands, such as saxophonists Lester Young, Dexter Gordon, Johnny Hodges, and Coleman Hawkins, trumpeters Roy Eldridge and Red Allen, pianist Art Tatum, and many others, started playing in small groups in New York and other big cities. This was a shift away from the preoccupation with record sales and popularity that the big bands had been focused on, and towards individual artistic achievement and virtuosity. This was when a new era of jazz began.

Many of the musicians who played in swing bands were frustrated with the lack of soloing opportunities and would often go out to nightclubs after their swing band gigs to play in smaller groups until the wee hours of the morning. The musicians could play whatever they wanted without the constraints of the big band sheet music in front of them. Friendly competitions arose out of these late-night performances, sometimes called "cutting sessions," to prove that you were good enough to be there and play. This led to an intense period of growth in the style and form of jazz. The songs became longer in duration to account for the elaborate and virtuosic solos that the artists were playing, and they were all trying to outdo one another. This style prioritized the artistic side of jazz over the commercial or popular side; jazz became art instead of strictly entertainment. The result of these various changes was a new style of jazz: bebop. Two of the key architects of the style were saxophonist Charlie Parker and trumpeter Dizzy Gillespie. Bebop was meant for critical, focused listening and not for dancing, which was a significant divergence from the music of the swing era. At the center of bebop was the originality, virtuosity, and improvisational skill of the soloist. Charlie "Yardbird" or "Bird" Parker embodied the style of bebop through his dexterous command of the saxophone and ingenious improvisational ideas.
He composed much of the music that he performed, and it is characteristically fast and difficult to play. Parker was originally from Kansas City and grew up around the thriving jazz scene there; he started playing professionally in big bands at the age of fifteen and decided to drop out of school and pursue his music career full time. In the early 1940s, while playing with a big band, he would play side gigs with Dizzy Gillespie, and by 1945 they had organically developed the style of bebop by playing together for several gigs in New York and Los Angeles. Parker was incredibly influential for developing bebop, but even more for his boundless influence on saxophonists and saxophone playing.

Bebop is performed by a small group of musicians, or combo, usually made up of one or two lead solo instruments (saxophone, trumpet, or trombone) accompanied by a rhythm section (piano, bass, drums). Musical characteristics of bebop include:

Complex harmonies and accompaniment
Focus on individual soloists

Cool Jazz 1949-1955

Bebop was not welcomed with open arms by all the jazz musicians in mid-century America. In response to bebop, a new type of jazz that was more laid back brought jazz back into the mainstream; it was called cool jazz. If bebop was fast and complex, cool jazz was relaxed and lighter in tone, and this can be heard in the playing of cool jazz's first practitioner, Lester Young. Cool jazz is also commonly called West Coast jazz, as many jazz musicians were living and performing this newer style in and around Los Angeles. Saxophonists Gerry Mulligan, Lee Konitz, Art Pepper, and Paul Desmond, trumpeter Chet Baker, pianist Dave Brubeck, and many others were known for performing in the cool style.

There are several musicians who engaged in this lighter, calmer style, but the most famous is Miles Davis. Davis was a trumpet player originally from East St. Louis who attended Juilliard in New York City before dropping out to pursue his music career. He started out playing with Dizzy Gillespie and Charlie Parker in the bebop style, but soon realized that his jazz talents were more suited to a slower, more deliberate, and more lyrical improvisational style. This realization would serve him well, and throughout his illustrious career he shaped the sound of jazz for several decades in the twentieth century. His album Birth of the Cool, recorded in 1949 (and released in 1957), is considered one of the best jazz albums of all time. Miles Davis is commonly referred to as one of the most influential musicians of the twentieth century.

Hard Bop 1951-1958, Post Bebop 1959-Present

Hard bop is the evolution of bebop and was played by the second generation of bebop musicians, who attempted to maintain the characteristics of bop while also appealing to the public. Musicians who performed in this style were pianist Bill Evans, trumpeter Clifford Brown, and saxophonist Sonny Rollins. Post bebop or post-bop was a more straight-ahead evolution of bebop, and by far the most renowned and influential post-bop musician was saxophonist John Coltrane. He, along with Miles Davis, was a creative and innovative improviser who explored new, avant-garde directions in harmonic language, and their music is studied very closely by jazz scholars and up-and-coming musicians. In terms of the saxophone, John Coltrane and Charlie Parker are the two most influential musicians for those who want to learn improvisation and the saxophone.
Fusion, Smooth, Free 1960-Present

From the 1960s to the present day, jazz has developed into several sub-genres and styles. A few of the more popular styles are fusion and smooth jazz, while free jazz was a more niche style that was influential for a few reasons but was not widely accepted by the public.

Fusion is the combination of jazz and rock and roll, and it was developed by Miles Davis in the late 1960s. In 1969 he recorded two albums, In a Silent Way and Bitches Brew (the latter released in 1970), both of which combined the electric instruments of rock music (electric bass, electric guitar, drums, synthesizers) with his unique version of jazz. This style was very popular, but only for a short amount of time. However, other musicians who performed with Davis on these albums went on to have successful careers that fused jazz and popular styles. A few of those musicians were Herbie Hancock ("Rockit," "Chameleon"), Chick Corea, and John McLaughlin.

Smooth jazz is a popular, easy-listening combination of R&B and jazz and is quite popular today. It has characteristics of popular, commercial music such as straightforward, simple melodies and improvisations and shorter, radio-friendly song forms. The primary artist that most people associate with smooth jazz is the saxophonist Kenny G, who is the best-selling instrumental musician of all time. Other renowned smooth jazz musicians are George Benson, Spyro Gyra, Dave Koz, Boney James, Chuck Mangione, and Grover Washington, Jr.

Free jazz is an avant-garde, experimental style that was pioneered by saxophonist Ornette Coleman and others in the 1960s. Parallels can be drawn between this style of jazz and some of the avant-garde art music that was being produced by composers like John Cage in the same time period. This style is defined by collective improvisation, meaning everyone in the ensemble solos at the same time, and this can be difficult for some listeners to process. Just as Cage felt constrained by the limits of classical music composition, jazz musicians like Ornette Coleman wanted to break free from the limitations of swing, bebop, and cool jazz, and they achieved that by purposefully eschewing the conventions of those previous styles to create something new.

Jazz In The Late Twentieth Century

Jazz is incredibly diverse. While it still features the five primary musical characteristics listed towards the beginning of this chapter, in more recent years its style and sound are dependent on the player and their personal proclivities. To wrap up this chapter, it is helpful to briefly review a few of the key players that have influenced jazz over the last few decades. Saxophonist Branford Marsalis and his brother, trumpeter Wynton Marsalis, have made a considerable impact on jazz. Wynton is the Artistic Director of Jazz at Lincoln Center, one of the premier institutions in the United States dedicated to performing and preserving jazz heritage. Saxophonist Michael Brecker was very influential in the 1980s and 1990s and was able to have a serious jazz career while also regularly performing with famous pop musicians such as James Taylor and Joni Mitchell. Other notable jazz musicians from the 1980s and 1990s are saxophonists Phil Woods and Kenny Garrett, bassists Jaco Pastorius and John Patitucci, pianist Keith Jarrett, and vocalists Kurt Elling, Bobby McFerrin, Diana Krall, Dianne Reeves, Cassandra Wilson, Nancy Wilson, and Dee Dee Bridgewater.
Jessica Neves, Animal Science
Adam D'Agostino, Natural Resources and Conservation
Alicia Zolondick, Plant, Soil, and Insect Sciences

Introduction to Genetically Modified Organisms

When examining population ecology, a common story comes to mind. Imagine a habitat with endless resources, and no predation or competition. It sounds like this would be ideal for sustaining a population. What could possibly go wrong? This type of environment is the perfect breeding ground for the overpopulation of any species. If a population has enough food to sustain itself and thrive, exponential breeding will occur. For several generations this growth will not be a significant problem. However, soon there won't be enough food to sustain the entire population. Food becomes scarce, and individuals begin to compete for limited resources. Only the fittest individuals will survive, while the weak will die off due to disease and starvation. The population will plummet drastically, leaving only a few individuals. This cycle is related to the carrying capacity of a species, which is the size of the population that can be sustained indefinitely. By exceeding this limit, the clock starts to tick until disaster strikes.

Just as in the scenario above, the human population will continue to grow when resources allow. Genetically modifying crops became the solution to prolong human existence by raising our effective carrying capacity. Once the carrying capacity is reached, humans will outnumber the resources available and drastic changes in population will occur. To prevent a collapse in population, humans are doing their best to provide enough food for all to survive by developing genetically modified crops. It is established that genetically modified (GM) crops impact the environment, but are we willing to overlook that in order to save our own species? GM crops are necessary to sustain life and increase the carrying capacity of the human population, so we cannot foresee eliminating them. Therefore, our plan is to reduce the impact of GM products on the environment, rather than abolish genetic engineering completely.

The World Health Organization defines genetically modified organisms (GMOs) as "organisms (i.e. plants, animals or microorganisms) in which the genetic material (DNA) has been altered in a way that does not occur naturally by mating and/or natural recombination" (World Health Organization [WHO], 2016). GM crops are becoming more and more prevalent in our everyday lives. In the past 30 years, new GM products have become available on shelves in supermarkets worldwide. The following paper will discuss the environmental impacts of GM crops and explain how our global society utilizes them in the food system.

Background on the Environmental Impacts of GMOs

Negative impacts on the environment from GMOs are a big concern for scientists and the public. Negative effects on the environment include increased use of herbicides and pollution of aquatic ecosystems. These fundamental issues will comprise the focus of this paper. Given the negative impacts and the need for GMOs for food production, the only way to cope with this dichotomy is to decrease the environmental impact without eliminating modified crops. Preventing these impacts is improbable, but reduction of long-term damage to affected ecosystems is plausible and should be attended to by conservationists and genetic engineers collaboratively. There is no one solution to the problem, but there are several practical strategies to limit environmental damage due to GMOs.
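To make the carrying-capacity argument from the introduction concrete, here is a minimal, illustrative logistic-growth sketch in Python. The growth rate, carrying capacity, and starting population are arbitrary values chosen for demonstration only; they are not data from the sources cited in this paper.

```python
# Illustrative only: a discrete logistic-growth model showing how a population
# approaches its carrying capacity K. All parameter values are assumptions.

def logistic_growth(n0, r, k, generations):
    """Return a list of population sizes over time.

    n0 -- starting population
    r  -- per-generation growth rate
    k  -- carrying capacity (maximum population the habitat can sustain)
    """
    populations = [n0]
    for _ in range(generations):
        n = populations[-1]
        # Growth slows as the population nears the carrying capacity.
        populations.append(n + r * n * (1 - n / k))
    return populations

if __name__ == "__main__":
    # Hypothetical numbers: 50 individuals, 40% growth per generation,
    # and a habitat that can sustain 1,000 individuals.
    for gen, size in enumerate(logistic_growth(50, 0.4, 1000, 20)):
        print(f"generation {gen:2d}: {size:7.1f}")
```

In this toy model, raising K, which is what the essay argues GM crops effectively do for the human food supply, lets the population keep growing before it levels off; if the population ever exceeds K, the growth term turns negative and the population falls back, mirroring the collapse described in the introduction.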
Managing weeds is one of the most tedious tasks of farming. Recognizing the struggles that farmers face with weed management, scientists developed genetically modified herbicide-tolerant (HT) crops so farmers can spray their fields with weed killers without affecting their crop yield. In the past 30 years, developing herbicide-tolerant crops (such as corn, soybean, and cotton) has been the most notable advancement in crop engineering history (Bonny, 2016). Most of the HT crops are tolerant to glyphosate, a compound used in Roundup to kill many species of weeds that compete with crops. Glyphosate-tolerant (GT) crops were first developed by Monsanto in 1994. Since GT crops were brought to market, glyphosate-based herbicides (like Roundup) have dominated the market, and GT soybean, corn, and cotton make up the majority of cultivated varieties in global agriculture (Bonny, 2016). In 2012, it was calculated that glyphosate made up "about 30% of the global herbicide market, far ahead of other herbicides. (…) For example, for soybean, the glyphosate proportion of total herbicides used grew from 4 % in the 1990-1993 to 89 % in 2006" (Bonny, 2016, p.35). Furthermore, Bonny (2016) states "in 2014, GT soybeans represented 50 % of all HT crops and about 80 % of all globally cultivated soybeans" (Bonny, 2016, p.35).

Monsanto was the world's top provider of both the GT Roundup Ready crops and the Roundup herbicide treatment. In the 1990s and again in 2003, Monsanto produced literature asserting that the development of glyphosate resistance (GR) in weeds was extremely unlikely, and urged farmers to increase their use of GT crops and Roundup paired together (Bonny, 2016). Meanwhile, in 1996, Australian scientists who discovered the first GR weed species contended "it would be prudent to accept that resistance can occur to this highly valuable herbicide and to encourage glyphosate use patterns within integrated strategies that do not impose a strong selection pressure for resistance" (Powles et al., 1998, p.6).

GT crops were developed because they were thought to not only eliminate the burden of weed management for farmers, but also reduce the overall amount of herbicides sprayed. GT crops served the farmers well and reduced the amount of time and money spent on hand weeding. However, since the widespread adoption of glyphosate herbicides sprayed on GT crops, the weeds targeted by glyphosate-based herbicides have started to develop resistance to them (Bonny, 2016). The more glyphosate resistance develops in weed populations, the less effective glyphosate-based herbicides become (Bonny, 2016). When herbicides are continually sprayed, there is a high selective pressure on the weed populations. Resistant populations arise from random mutations within individuals that happen to survive the herbicide treatments. When glyphosate is used at a higher frequency than other herbicides, the survival of glyphosate-resistant mutants becomes more frequent (Bonny, 2016). In other words, the more you spray glyphosate, the more likely it is that the weeds will evolve to survive glyphosate treatment.

Due to the prevalence of GT crops, glyphosate herbicides came to dominate the herbicide market. As a result, there was an initial decrease in the frequency of general herbicide use. Glyphosate was at first considered a low-risk herbicide in terms of both human health and environmental impact, so this decrease was very well received by the public and scientific communities (Bonny, 2016).
However, this decrease was closely followed by a plateau and then a steady increase in glyphosate applications. It is believed that there is a direct correlation between the decrease in availability of alternative herbicides and the increase in GR weeds. Nearly half of the GR weeds found globally are flourishing on US soil, burdening farmers with weeds that continue to compete with their crops even when fields are drenched with Roundup. The graph below was produced by Bonny (2016) based on statistics from USDA-NASS (1991-2013) and from Heap (2015). It displays several different herbicides applied to soybeans in the USA in relation to the development and growth of glyphosate-resistant weeds from 1990-2012. The right axis displays the number of GR weeds, the left axis the herbicides applied, and the bottom axis time. Bonny (2016) states that there was only one survey reporting herbicide usage from 2006-2012; the increased use of herbicides over that period, based on the numbers from that survey, is expressed with the dotted line in the image.

The development of GM Roundup Ready (RR) crops triggered a steady increase in the use of glyphosate (Szekacs & Darvas, 2012). The increased amount of spraying due to GR weeds leads to a higher amount of glyphosate found in our groundwater, surface water, soils, and precipitation (Coupe et al., 2012; Battaglin et al., 2014). Glyphosate can pollute through runoff, pesticide drift, and leaching through the ground. Research on Mexican water sources by Ruiz-Toledo, Castro, Rivero-Pérez, Bello-Mendoza, and Sánchez (2014) found traces of glyphosate in all tested sources. Sampling sites included irrigation channels, wells, and points along a river bank, providing a diversity of water sources (Ruiz-Toledo et al., 2014). Traces of glyphosate appeared in every sample, including those taken within natural protected areas (Ruiz-Toledo et al., 2014). These results show that glyphosate found its way into water sources through surface runoff, leaching, pesticide drift, or potentially other modes of transport.

Glyphosate is a water-soluble compound, meaning that it dissolves in water to create a solution (Szekacs & Darvas, 2012). Glyphosate supposedly decomposes quickly in water, with a relatively short half-life; however, it also binds with soils, resulting in a much longer half-life (Szekacs & Darvas, 2012). (A brief worked example below illustrates what this difference in half-lives means in practice.) Water quality problems arise when glyphosate absorbs into the soil, because the chemical then leaches or is carried away by runoff. The concentration of glyphosate in a water source is significantly influenced by the amount of precipitation within the given season, either rainy or dry (Ruiz-Toledo et al., 2014). During a rainy season, the concentration of glyphosate within a water source is diluted, but during a dry season concentrations rise dramatically, creating unsafe water quality (Ruiz-Toledo et al., 2014). Amounts of precipitation also determine how far polluted runoff can travel geographically. These changes in precipitation levels cause glyphosate to travel far from the intended application site. Daouk, De Alencastro, Pfeifer, Grandjean, and Chevre (2013) attribute the transport of glyphosate to rainfall when soils are composed of fine-textured layers on a significant slope. However, Daouk et al. (2013) believe that surface runoff is responsible for the majority of glyphosate transport. With this in mind, Ruiz-Toledo et al. (2014) propose tighter restrictions on the proximity of glyphosate application sites to water sources, such as rivers.
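As a rough, hypothetical illustration of why the water-versus-soil half-life difference matters, the short calculation below computes the fraction of an initial glyphosate dose remaining over time under simple first-order decay. The half-life values used here (days in open water, months when bound to soil) are placeholder assumptions for demonstration and are not figures taken from Szekacs and Darvas (2012) or the other sources cited above.

```python
# Illustrative only: first-order decay, fraction remaining = 0.5 ** (t / half_life).
# The half-life values below are placeholder assumptions, not measured data.

def fraction_remaining(days_elapsed, half_life_days):
    """Fraction of the original glyphosate dose left after days_elapsed."""
    return 0.5 ** (days_elapsed / half_life_days)

if __name__ == "__main__":
    water_half_life = 10    # assumed: on the order of days in open water
    soil_half_life = 120    # assumed: on the order of months when bound to soil

    for days in (10, 30, 90):
        in_water = fraction_remaining(days, water_half_life)
        in_soil = fraction_remaining(days, soil_half_life)
        print(f"after {days:3d} days: {in_water:5.1%} left in water, "
              f"{in_soil:5.1%} left bound to soil")
```

Under these assumed numbers, only about 0.2% of a dose would remain in open water after 90 days, while roughly 60% would still be bound to soil, which is why soil-bound residue that later leaches or runs off into waterways is the larger long-term concern.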
Since GM crops are frequently paired with excessive glyphosate use, it is crucial that actions are taken to use glyphosate safely in large-scale agriculture systems. Glyphosate applications in close proximity to rivers are problematic for wildlife populations. A high amount of glyphosate is lethal to amphibians and other organisms. Relyea (2005) suggests that Roundup, a compound designed to kill plants, can cause extremely high rates of mortality in amphibians, which could lead to population declines in the natural environment as well as death in laboratory conditions. Relyea (2005) provides the example that after three weeks of exposure, Roundup killed 96–100% of larval amphibians (regardless of soil presence) in their natural environment. Another specific example of the lethal effects of glyphosate provided by Relyea (2005) is that when juvenile anurans (a type of amphibian) were exposed to a direct overspray of Roundup in laboratory containers, Roundup killed 68–86% of the juveniles. Other organisms besides amphibians are also affected. Tsui and Chu (2003) provide the example that "microalgae and crustaceans were 4–5 folds more sensitive to Roundup toxicity than bacteria and protozoa" (p.1189). The toxicity was mainly due to the extreme decrease in pH of the water surrounding the microalgae and crustaceans after glyphosate acid was added during testing (Tsui & Chu, 2003).

Based on the negative impacts that GMOs inflict on the environment presented in this paper, one might formulate the opinion that GMOs should be discontinued or outlawed. However, human population growth specialists project that the global human population will reach 9 billion by 2050. The question at the forefront of the century is: how are we, as a collective humanity, going to feed the population? According to PLOS Biology, "because most of the Earth's arable land is already in production and what remains is being lost to urbanization, salinization, desertification, and environmental degradation, cropland expansion is not a viable approach to food security" (Ronald, 2014, p.1). Therefore, engineering GM crops to grow in poor quality soils, fight virulent pathogens, and carry protection against pest damage is necessary to sustain the food demands of the rising populace. Over the past 50 years, the population has grown substantially and the demand for efficient food production has increased. GM crop development accelerated immensely in the past 30 years to try to sustain the global demands for food. According to the Department of Plant Pathology and the Genome Center, "in Bangladesh and India, four million tons of rice, enough to feed 30 million people, is lost each year to flooding," and their team engineered a variety of rice with a flood-resistance gene (Ronald, 2014, p.2). This flood-resistance gene enables more plants to survive floods, and more people are subsequently able to eat the plants. In our current food system in the United States, 80% of food contains derivatives from genetically engineered crops (Ronald, 2014). The food market is already reliant on GM crop production to feed the people alive right now, and the demand for GM crop production will only increase as the population grows. Certain staple crops, like cultivated papayas and bananas, would be extinct due to noxious diseases if disease-resistant GM varieties had not been developed (Ronald, 2014).
Due to the prevalence of GMOs, steps should be taken by growers and plant scientists to ensure that conservation of the ecosystem and reduction of negative environmental impact are a top priority. Strategies that aim to balance conservation and technology are a realistic alternative to abolishing genetic engineering entirely. We propose implementing a plan to change herbicide management practices. Although this plan would not completely reverse the negative impact that GM crops have on the environment, it would be a first step in slowing the rate of detrimental effects over time.

We propose the approval by the USDA of varieties of GM crops with "stacked herbicide tolerance" (Bonny, 2016, p.40) in order to combat GR weeds. Stacked herbicide tolerance refers to a crop that is engineered to have resistance to multiple herbicides simultaneously. The development of GM crops with stacked herbicide resistance could benefit large-scale agriculture because it would allow farmers to spray their fields with multiple different herbicides instead of just glyphosate-based treatments, creating an herbicide management plan. Allowing for variation in herbicide applications would reduce the chance of resistant weed populations developing (Bonny, 2016). In addition to encouraging more stacked GM crops, weed scientists should also encourage growers to integrate a wider variety of weed management methods "such as crop rotation, cover crops and mulches, reduced tillage, precision agriculture, adequate seeding rates, seed quality, etc" (Bonny, 2016, p.44). Tsui and Chu (2003) also suggest alternatives to the original Roundup formulation. According to Tsui and Chu (2003), "Roundup Biactive was about 14 times less toxic than Roundup" (p.1196). Using Roundup Biactive instead of the original Roundup would also reduce the lethality that this herbicide has for other organisms. Widening the scope of weed management will foster "scientific knowledge in a manner that considers the causes of weed problems rather than reacts to existing weed populations" (Buhler, 2002, p.279). GM crops are necessary to feed the population and will continue to exist, but they can be dangerous to the environment if they are not properly controlled. Changing herbicide management practices will reduce the detrimental effects that many GM crops have on the environment, while simultaneously allowing humans to enjoy their benefits.

References

Battaglin, W.A., Meyer, M.T., Kuivila, K.M., & Dietze, J.E. (2014). Glyphosate and its degradation product AMPA occur frequently and widely in US soils, surface water, groundwater, and precipitation. Journal of the American Water Resources Association, 50(2), 275–290. doi:10.1111/jawr.12159

Bonny, S. (2016). Genetically modified herbicide-tolerant crops, weeds, and herbicides: Overview and impact. Journal of Environmental Management, 57, 31-48. doi: http://dx.doi.org/10.1007/s00267-015-0589-7

Buhler, D. D. (2002). 50th Anniversary, Invited Article: Challenges and opportunities for integrated weed management. Weed Science Society of America, 50(3), 273–280. doi: http://dx.doi.org/10.1614/0043-1745(2002)050[0273:AIAAOF]2.0.CO;2

Coupe, R.H., Barlow, J.R., & Capel, P.D. (2012). Complexity of human and ecosystem interactions in an agricultural landscape. Environmental Development, (4), 88–104. doi: http://dx.doi.org/10.1016/j.envdev.2012.09
Battaglin, W. A., Meyer, M. T., Kuivila, K. M., & Dietze, J. E. (2014). Glyphosate and its degradation product AMPA occur frequently and widely in US soils, surface water, groundwater, and precipitation. Journal of the American Water Resources Association, 50(2), 275–290. doi:10.1111/jawr.12159
Bonny, S. (2016). Genetically modified herbicide-tolerant crops, weeds, and herbicides: Overview and impact. Environmental Management, 57(1), 31–48. doi:10.1007/s00267-015-0589-7
Buhler, D. D. (2002). 50th anniversary invited article: Challenges and opportunities for integrated weed management. Weed Science, 50(3), 273–280. doi:10.1614/0043-1745(2002)050[0273:AIAAOF]2.0.CO;2
Coupe, R. H., Barlow, J. R., & Capel, P. D. (2012). Complexity of human and ecosystem interactions in an agricultural landscape. Environmental Development, 4, 88–104. doi: http://dx.doi.org/10.1016/j.envdev.2012.09
Daouk, S., De Alencastro, L. F., Pfeifer, H., Grandjean, D., & Chevre, N. (2013). The herbicide glyphosate and its metabolite AMPA in the Lavaux vineyard area, western Switzerland: Proof of widespread export to surface waters. Part I: Method validation in different water matrices. Journal of Environmental Science and Health, Part B: Pesticides, Food Contaminants and Agricultural Wastes, 48(9), 717–724.
GMO Corn [digital image]. (2014). GMO corn crops under attack by leafworms. Retrieved 20 Apr 2014.
Heap, I. (2015). The International Survey of Herbicide Resistant Weeds. http://www.weedscience.org. Accessed 22 July 2015.
Powles, S. B., Lorraine-Colwill, D. F., Dellow, J. J., & Preston, C. (1998). Evolved resistance to glyphosate in rigid ryegrass (Lolium rigidum) in Australia. Weed Science, 46(5), 604–607. http://www.jstor.org/stable/4045968
Relyea, R. A. (2005). The lethal impact of Roundup on aquatic and terrestrial amphibians. Ecological Applications, 15(4), 1118–1124. doi:10.1890/04-1291
Ronald, P. C. (2014). Lab to farm: Applying research on plant genetics and genomics to crop improvement. PLOS Biology, 12(6). doi:10.1371/journal.pbio.1001878
Ruiz-Toledo, J., Bello-Mendoza, R., Sánchez, D., Castro, R., & Rivero-Pérez, N. (2014). Occurrence of glyphosate in water bodies derived from intensive agriculture in a tropical region of southern Mexico. Bulletin of Environmental Contamination and Toxicology, 93(3), 289–293.
Székács, A., & Darvas, B. (2012). Forty years with glyphosate. In M. N. Hasaneen (Ed.), Herbicides: Properties, synthesis and control of weeds. InTech.
Tsui, M. T. K., & Chu, L. M. (2003). Aquatic toxicity of glyphosate-based formulations: Comparison between different organisms and the effects of environmental factors. Chemosphere, 52, 1189–1197. doi:10.1016/S0045-6535(03)00306-0
USDA-NASS. (1991–2013). Agricultural chemical usage, field crops summary. USDA ESMIS (Economics, Statistics and Market Information System), Mann Library, Cornell University, 1990–2013. http://usda.mannlib.cornell.edu/MannUsda/viewDocumentInfo.do?documentID=1560. Accessed 1 July 2015.
World Health Organization. (2016). Frequently asked questions on genetically modified foods. Retrieved from: http://www.who.int/foodsafety/areas_work/food-technology/faq-
<urn:uuid:fa4acfd4-1ff0-44d4-a861-d32a8037fe87>
CC-MAIN-2022-33
https://blogs.umass.edu/natsci397a-eross/environmental-impact-of-gmos/comment-page-31/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570868.47/warc/CC-MAIN-20220808152744-20220808182744-00097.warc.gz
en
0.91504
4,320
4.3125
4
Old English þing "meeting, assembly, council, discussion," later "entity, being, matter" (subject of deliberation in an assembly), also "act, deed, event, material object, body, being, creature," from Proto-Germanic *thinga- "assembly" (source also of Old Frisian thing "assembly, council, suit, matter, thing," Middle Dutch dinc "court-day, suit, plea, concern, affair, thing," Dutch ding "thing," Old High German ding "public assembly for judgment and business, lawsuit," German Ding "affair, matter, thing," Old Norse þing "public assembly"). The Germanic word is perhaps literally "appointed time," from a PIE *tenk- (1), from root *ten- "stretch," perhaps on notion of "stretch of time for a meeting or assembly." The sense "meeting, assembly" did not survive Old English. For sense evolution, compare French chose, Spanish cosa "thing," from Latin causa "judicial process, lawsuit, case;" Latin res "affair, thing," also "case at law, cause." Old sense is preserved in second element of hustings and in Icelandic Althing, the nation's general assembly. Of persons, often pityingly, from late 13c. Used colloquially since c. 1600 to indicate things the speaker can't name at the moment, often with various meaningless suffixes (see thingamajig). Things "personal possessions" is from c. 1300. The thing "what's stylish or fashionable" is recorded from 1762. Phrase do your thing "follow your particular predilection," though associated with hippie-speak of 1960s is attested from 1841. 1590s, "a sending abroad" (as an agent), originally of Jesuits, from Latin missionem (nominative missio) "act of sending, a dispatching; a release, a setting at liberty; discharge from service, dismissal," noun of action from past-participle stem of mittere "to release, let go; send, throw," which de Vaan traces to a PIE *m(e)ith- "to exchange, remove," also source of Sanskrit methete, mimetha "to become hostile, quarrel," Gothic in-maidjan "to change;" he writes, "From original 'exchange', the meaning developed to 'give, bestow' ... and 'let go, send'." Meaning "an organized effort for the spread of religion or for enlightenment of a community" is by 1640s; that of "a missionary post or station" is by 1769. The diplomatic sense of "body of persons sent to a foreign land on commercial or political business" is from 1620s; in American English, sometimes "a foreign legation or embassy, the office of a foreign envoy" (1805). General sense of "that for which one is sent or commissioned" is from 1670s; meaning "that for which a person or thing is destined" (as in man on a mission, one's mission in life) is by 1805. Meaning "dispatch of an aircraft on a military operation" (by 1929, American English) was extended to spacecraft flights (1962), hence, mission control "team on the ground responsible for directing a spacecraft and its crew" (1964). As a style of furniture, said to be imitative of furniture in the buildings of original Spanish missions to western North America, it is attested from 1900. in the sports sense, 1879, originally in cricket, "taking three wickets on three consecutive deliveries;" extended to other sports c. 1909, especially ice hockey ("In an earlier contest we had handed Army a 6-2 defeat at West Point as Billy Sloane performed hockey's spectacular 'hat trick' by scoring three goals" ["Princeton Alumni Weekly," Feb. 10, 1941]). 
So called allegedly because it entitled the bowler to receive a hat from his club commemorating the feat (or entitled him to pass the hat for a cash collection), but the term probably has been influenced by the image of a conjurer pulling objects from his hat (an act attested by 1876). The term was used earlier for a different sort of magic trick: Place a glass of liquor on the table, put a hat over it, and say, "I will engage to drink every drop of that liquor, and yet I'll not touch the hat." You then get under the table; and after giving three knocks, you make a noise with your mouth, as if you were swallowing the liquor. Then, getting from under the table, say "Now, gentlemen, be pleased to look." Some one, eager to see if you have drunk the liquor, will raise the hat; when you instantly take the glass and swallow the contents, saying, "Gentlemen I have fulfilled my promise: you are all witnesses that I did not touch the hat." ["Wit and Wisdom," London, 1860] late Old English, "benevolence for the poor," also "Christian love in its highest manifestation," from Old French charité "(Christian) charity, mercy, compassion; alms; charitable foundation" (12c.), from Latin caritatem (nominative caritas) "costliness; esteem, affection," from carus "dear, valued," from PIE *karo-, from root *ka- "to like, desire." In the Vulgate the Latin word often is used as translation of Greek agape "love" -- especially Christian love of fellow man -- perhaps to avoid the sexual suggestion of Latin amor). The Vulgate also sometimes translated agape by Latin dilectio, noun of action from diligere "to esteem highly, to love" (see diligence). Wyclif and the Rhemish version regularly rendered the Vulgate dilectio by 'love,' caritas by 'charity.' But the 16th c. Eng. versions from Tindale to 1611, while rendering agape sometimes 'love,' sometimes 'charity,' did not follow the dilectio and caritas of the Vulgate, but used 'love' more often (about 86 times), confining 'charity' to 26 passages in the Pauline and certain of the Catholic Epistles (not in I John), and the Apocalypse .... In the Revised Version 1881, 'love' has been substituted in all these instances, so that it now stands as the uniform rendering of agape. [OED] General sense of "affections people ought to feel for one another" is from c. 1300. From c. 1300 as "an act of kindness or philanthropy," also "alms, that which is bestowed gratuitously on a person or persons in need." Sense of "charitable foundation or institution" in English attested by 1690s. Meaning "liberality in judging others or their actions" is from late 15c. A charity-school (1680s) was one maintained by voluntary contributions or bequests. 1756, "special vocabulary of tramps or thieves," later "jargon of a particular profession" (1801). The sense of "very informal language characterized by vividness and novelty" is by 1818. Anatoly Liberman writes here an extensive account of the established origin of the word from the Northern England noun slang "a narrow piece of land running up between other and larger divisions of ground" and the verb slanger "linger, go slowly," which is of Scandinavian origin (compare Norwegian slenge "hang loose, sling, sway, dangle," Danish slænge "to throw, sling"). "Their common denominator seems to be 'to move freely in any direction' " [Liberman]. Noun derivatives of these (Danish slænget, Norwegian slenget) mean "a gang, a band," and Liberman compares Old Norse slangi "tramp" and slangr "going astray" (used of sheep). 
He writes: It is not uncommon to associate the place designated for a certain group and those who live there with that group’s language. John Fielding and the early writers who knew the noun slang used the phrase slang patter, as though that patter were a kind of talk belonging to some territory. So the sense evolution would be from slang "a piece of delimited territory" to "the territory used by tramps for their wandering," to "their camping ground," and finally to "the language used there." The sense shift then passes through itinerant merchants: Hawkers use a special vocabulary and a special intonation when advertising their wares (think of modern auctioneers), and many disparaging, derisive names characterize their speech; charlatan and quack are among them. [Slang] is a dialectal word that reached London from the north and for a long time retained the traces of its low origin. The route was from "territory; turf" to "those who advertise and sell their wares on such a territory," to "the patter used in advertising the wares," and to "vulgar language" (later to “any colorful, informal way of expression”). [S]lang is a conscious offence against some conventional standard of propriety. A mere vulgarism is not slang, except when it is purposely adopted, and acquires an artificial currency, among some class of persons to whom it is not native. The other distinctive feature of slang is that it is neither part of the ordinary language, nor an attempt to supply its deficiencies. The slang word is a deliberate substitute for a word of the vernacular, just as the characters of a cipher are substitutes for the letters of the alphabet, or as a nickname is a substitute for a personal name. [Henry Bradley, from "Slang," in Encyclopedia Britannica, 11th ed.] A word that ought to have survived is slangwhanger (1807, American English) "noisy or abusive talker or writer." fourteenth letter of the English alphabet; in chemistry, the symbol for nitrogen. In late Middle English a and an commonly were joined to the following noun, if that word began with a vowel, which caused confusion over how such words ought to be divided when written separately. In nickname, newt, and British dialectal naunt, the -n- belongs to a preceding indefinite article an or possessive pronoun mine. Other examples of this from Middle English manuscripts include a neilond ("an island," early 13c.), a narawe ("an arrow," c. 1400), a nox ("an ox," c. 1400), a noke ("an oak," early 15c.), a nappyle ("an apple," early 15c.), a negge ("an egg," 15c.), a nynche ("an inch," c. 1400), a nostryche ("an ostrich," c. 1500). My naunt for mine aunt is recorded from 13c.-17c. None other could be no noder (mid-15c.). My nown (for mine own) was frequent 15c.-18c. In 16c., an idiot sometimes became a nidiot (1530s), which, with still-common casual pronunciation, became nidget (1570s), now, alas, no longer whinnying with us. It is "of constant recurrence" in the 15c. vocabularies, according to Thomas Wright, their modern editor. One has, among many others, Hoc alphabetum ... a nabse, from misdivision of an ABC (and pronouncing it as a word), and Hic culus ... a ners. Also compare nonce, pigsney. Even in 19c. provincial English and U.S., noration (from an oration) was "a speech; a rumor." The process also worked in surnames, from oblique cases of Old English at "by, near," as in Nock/Nokes/Noaks from atten Oke "by the oak;" Nye from atten ye "near the lowland;" and see Nashville. 
(Elision of the vowel of the definite article also took place and was standard in Chancery English of the 15c.: þarchebisshop for "the archbishop," thorient for "the orient.") But it is more common for an English word to lose an -n- to a preceding a: apron, auger, adder, umpire, humble pie, etc. By a related error in Elizabethan English, natomy or atomy was common for anatomy, noyance (annoyance) and noying (adj.) turn up 14c.-17c., and Marlowe (1590) has Natolian for Anatolian. The tendency is not limited to English: compare Luxor, jade (n.1), lute, omelet, and Modern Greek mera for hēmera, the first syllable being confused with the article. The mathematical use of n for "an indefinite number" is attested by 1717 in phrases such as to the nth power (see nth). In Middle English n. was written in form documents to indicate an unspecified name of a person to be supplied by the speaker or reader. Old English freo "exempt from; not in bondage, acting of one's own will," also "noble; joyful," from Proto-Germanic *friaz "beloved; not in bondage" (source also of Old Frisian fri, Old Saxon vri, Old High German vri, German frei, Dutch vrij, Gothic freis "free"), from PIE *priy-a- "dear, beloved," from root *pri- "to love." The sense evolution from "to love" to "free" is perhaps from the terms "beloved" or "friend" being applied to the free members of one's clan (as opposed to slaves; compare Latin liberi, meaning both "free persons" and "children of a family"). For the older sense in Germanic, compare Gothic frijon "to love;" Old English freod "affection, friendship, peace," friga "love," friðu "peace;" Old Norse friðr "peace, personal security; love, friendship," German Friede "peace;" Old English freo "wife;" Old Norse Frigg, name of the wife of Odin, literally "beloved" or "loving;" Middle Low German vrien "to take to wife," Dutch vrijen, German freien "to woo." Meaning "clear of obstruction" is from mid-13c.; sense of "unrestrained in movement" is from c. 1300; of animals, "loose, at liberty, wild," late 14c. Meaning "liberal, not parsimonious" is from c. 1300. Sense of "characterized by liberty of action or expression" is from 1630s; of art, etc., "not holding strictly to rule or form," from 1813. Of nations, "not subject to foreign rule or to despotism," recorded in English from late 14c. (Free world "non-communist nations" attested from 1950 on notion of "based on principles of civil liberty.") Sense of "given without cost" is 1580s, from notion of "free of cost." Free even to the definition of freedom, "without any hindrance that does not arise out of his own constitution." [Emerson, "The American Scholar," 1837] Free lunch, originally offered in bars to draw in customers, by 1850, American English. Free pass on railways, etc., attested by 1850. Free speech in Britain was used of a privilege in Parliament since the time of Henry VIII. In U.S., in reference to a civil right to expression, it became a prominent phrase in the debates over the Gag Rule (1836). Free enterprise recorded from 1832; free trade is from 1823; free market from 1630s. Free will is from early 13c. Free school is from late 15c. Free association in psychology is from 1899. Free love "sexual liberation" attested from 1822 (the doctrine itself is much older), American English. Free and easy "unrestrained" is from 1690s. "day on which a memorial is made," by 1819, of any anniversary date, especially a religious anniversary; see memorial (adj.). As a specific end-of-May holiday commemorating U.S. 
war dead, it began informally in the late 1860s and originally commemorated the Northern soldiers killed in the Civil War. It was officially so called by 1869 among veterans' organizations, but Decoration Day also was used. The Grand Army of the Republic, the main veterans' organization in the North, officially designated it Memorial Day by resolution in 1882: That the Commander-in-Chief be requested to issue a General Order calling the attention of the officers and members of the Grand Army of the Republic, and of the people at large, to the fact that the proper designation of May 30th is Memorial Day and to request that it may be always so called. [Grand Army Blue Book, Philadelphia, 1884] The South, however, had its own Confederate Memorial Day, and there was some grumbling about the apparent appropriation of the name. The word "Memorial" was adopted by the Maryland Confederates shortly after the war, and has been generally used throughout the South. It is distinctively Confederate in its origin and use, and I would suggest to all Confederate societies to adhere to it. The Federals' annual day of observance is known as "Decoration Day," having been made so by an act of Congress, and the 30th day of May named as the date. In Maryland there is annually a Decoration Day and a Memorial Day. The two words are expressive not only of the nature of the observance, but also of the people who participate therein. [Confederate Veteran, November 1893] unit of linear measure in Great Britain, the U.S., and a few other countries, formerly used in most European countries before the metric system; Old English mil, from West Germanic *milja (source also of Middle Dutch mile, Dutch mijl, Old High German mila, German Meile), from Latin milia "thousands," plural of mille "a thousand" (neuter plural was mistaken in Germanic as a fem. singular), which is of unknown origin. The Latin word also is the source of French mille, Italian miglio, Spanish milla. The Scandinavian words (Old Norse mila, etc.) are from English. An ancient Roman mile was 1,000 double paces (one step with each foot), for about 4,860 feet, but many local variants developed, in part in an attempt to reconcile the mile with the agricultural system of measurements. Consequently, old European miles were of various lengths. The medieval English mile was 6,610 feet; the old London mile was 5,000 feet. In Germany, Holland, and Scandinavia in the Middle Ages, the Latin word was applied arbitrarily to the ancient Germanic rasta, a measure of from 3.25 to 6 English miles. In England the ordinary mile was set by legal act at 320 perches (5,280 feet) by statute in Elizabeth's reign. In Middle English the word also was a unit of time, "about 20 minutes," roughly what was required to walk a mile. The word has been used generically since 1580s for "a great distance." Mile-a-minute (adj.) "very fast" is attested from 1957 in railroad publications (automobiles had attained 60 mph by 1903).
<urn:uuid:e4594534-30b7-4ca7-8ab7-eb5f6d26557d>
CC-MAIN-2022-33
https://www.etymonline.com/search?page=229&q=speech%20act&type=0
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00499.warc.gz
en
0.962887
4,143
3.078125
3
Introduction to urban agriculture in the Philippines and urban livestock farming in the Philippines
Growing plants and rearing animals, mainly for food and other domestic uses, in a city or town and its surroundings is known as urban agriculture. It also includes activities such as the production, processing, marketing, and delivery of agricultural products, and it is one key response to rapid population growth, food crises, and climate change. Urban gardening should not be limited to the person or farm growing the food: because food waste is also a pressing matter that attracts a lot of attention, a community engaged in urban agriculture can not only secure a fresh source of food but also help protect the environment. Vertical farming, beekeeping, kitchen gardening, rooftop gardening, container gardening, and aquaculture are among the different types of urban agriculture. Many kinds of crops can be cultivated in very little space. With limited land available for large-scale cultivation of everything from herbs, vegetables, and fruits to aromatic and medicinal plants, agricultural colleges have offered support in nursery management for ornamentals, orchid propagation, high-value crops and leafy vegetables, and the raising of native chicken; these are some of the components of urban agriculture.
A guide to starting urban agriculture and urban livestock farming in the Philippines
Urban agriculture consists of several production systems, varying from household production and processing to large-scale agriculture, and it is usually carried out in the inner city. Major Philippine agricultural products include rice, coconut, corn, sugarcane, banana, pineapple, and mango.
Principles related to urban agriculture in the Philippines
Urban agriculture increases access to locally grown food. Some urban farms are designed purely for education, training, or re-entry programs; many others are designed to improve food access in a particular community or to keep traditional culinary cultures alive. Several national principles relate to agriculture in the Philippines, directly or indirectly: the state is to promote a just and dynamic social order that supports the nation's prosperity and freedom, frees the people from poverty through policies that provide adequate social services, promotes employment, raises living standards, and improves the quality of life for all. The goals of the national economy are a more equitable distribution of opportunities, income, and wealth; a sustained increase in the goods and services produced by the nation for the benefit of the people; and an expanding productive capacity as the key to raising the standard of living for all, especially the underprivileged. Urban agriculture has great potential to meet basic human needs: it not only provides food but also supports a sustainable production and distribution system that creates employment opportunities and regular income for individuals. It also helps countries protect their environment and save on foreign exchange and transportation costs. Planning for urban agriculture addresses pre-existing urban issues such as livelihood and income opportunities, food availability and access, and, in many cases, conflicting land uses.
Why is UA important? Answering this question means looking at the external functions of urban agriculture (UA) in more detail. In principle, UA is one source of supply in the urban food system and just one of many food security options for households.
Similarly, it is one of many tools for putting open spaces to productive use, treating and/or recovering urban solid and liquid waste, saving or generating income and employment, and handling freshwater resources more efficiently. In practice, and in a growing number of cities, UA has been expanding again for three decades. It has become a major supplier of foodstuffs to growing urban populations, poor and not so poor, and an important element of nutrition for poor households. It also keeps open spaces under management, reduces the burden of urban waste disposal and treatment, generates additional income and/or saves cash, and provides employment, whether direct or indirect, part-time or full-time, temporary or long-term. In a circular city, urban agriculture must meet its water needs from resources that arise within the urban watershed. Suitable sources may include natural rainfall, rainwater temporarily stored in ponds (rainwater harvesting), or urban wastewater. Because of significant public health concerns for farmers and consumers, untreated urban wastewater is not considered in farming plans in developed countries. The system has two parts, focused on production and sustainability: it starts with an urban farm that produces high-value crops, and the farmers maintain a sustainable approach through activities such as composting and recycling before the produce is made available to the community. In the next cycle, the community collects waste from homes and returns it to the urban farms, where it is used for activities such as composting and recycling.
Growing crops in portable and modular planters is the practice of urban container gardening. With the help of suitable supports and compact, efficient irrigation, container gardening allows crops to be planted in narrow spaces and on vertical structures, similar to hydroponic systems but at a lower cost. This practice is also useful for combating plastic pollution, as containers can be made from items that would otherwise be treated as waste, such as old plastic bottles.
Urban agriculture, also known as urban farming, is defined as "the growing, processing and distribution of food crops and animal products in the urban environment, through the local community." It exists in many forms: community and backyard gardens; gardens on roofs and balconies; growing in horticultural spaces; container gardening, aquaculture, hydroponics, fruit trees, and orchards; and market farms, cattle raising, and beekeeping. Urban agriculture also includes post-harvest activities such as making value-added products in community kitchens, marketing the produce, and dealing with food waste; at its core it is a set of techniques and approaches for growing different types of plants (vegetables, herbs, spices, root crops, fruits) in the city.
Major crops cultivated in urban agriculture
Vegetable production cycles are short; some vegetables can be harvested within 60 days of planting, which makes them well suited to urban farming. Urban and peri-urban agriculture aims to produce high-value, perishable, high-demand fruits and vegetables.
Green leafy vegetables or herbs: Spinach, Coriander, Curry Leaves, Kale, Watercress, etc.
Root crops: Potato, Sweet Potato, Cassava, Radish, Beetroot, Turmeric, Ginger, Carrot, etc.
Vegetables: Tomato, Eggplant, Chili, Capsicum, Peas, French Bean, Gourds, Crucifers, etc.
Fruits: Avocado, Guava, Mango, Banana, Citrus, Cherry, Coconut, etc.
Mushrooms: Button Mushrooms, Paddy Straw Mushrooms, Oyster Mushrooms, etc.
Livestock: Poultry, Rabbit, Goat, Sheep, Cattle, Pigs, Guinea Pigs, etc.
Other products: fragrant plants, ornamental plants, tree products, and bee products such as honey and beeswax.
The Philippines is known for growing crops such as mangoes, coconuts, and bananas. Although there is growing consumer demand for locally grown food, some products that cannot be grown indoors, such as rice, are still imported. Although the country produces and exports rice, wheat, and maize, which make up 67% of the cultivated land, it will soon see lower yields from heat and water stress, which are further exacerbated by climate change. Despite local food demand, extreme weather patterns such as frequent typhoons, high operational costs, supply problems from farm to market, and pests threaten the country's food supply. Like the rest of the world, the Philippines is also facing an aging farming population. Indoor agriculture can tackle some of these supply challenges, and urban agriculture is a great way to engage the young population. Urban Roots has designed indoor hydroponic farm systems. They currently grow a variety of microgreens, such as Black, Arugula, Basil, and Purple Radishes, focusing on the high demand and low supply of these products in the Philippines. As consumers become more health-conscious, the market opportunity for microgreens has only increased, since these crops can offer even more nutritional value than their mature counterparts. They are now on grocery lists and restaurant menus, and upscale hotels and restaurants are adding them to salads, sandwiches, and main dishes. Microgreen farmers benefit from the rapid growth cycle of these crops: despite their delicacy, they take only two to four weeks to reach harvest. By growing indoors, Urban Roots can produce microgreens all year round, avoid the need for pesticides, and potentially distribute seedlings widely throughout the Philippines.
Urban agriculture program in the Philippines
Agriculture in the Philippines focuses on rural areas, and urban areas are the preferred market for rural agricultural products. Sometimes an artificial scarcity of rural agricultural products is felt in urban centers because of market strategy, or because traders prefer to market their products where they can make a handsomer profit. Urban agriculture offers a logical way to reduce the problem of malnutrition in population centers: food or agricultural products are produced within city limits, including in busy population centers. Families or organized groups can produce food on and around the roofs of homes and in surrounding open, community, or public places. Food production within the city is the main purpose of urban farming, but attention should also go to other resources available from the urban farming system that are generally considered waste. In urban farming, plants are grown with the help of PAR, or photosynthetically active radiation. Since not all light is usable by plants, PAR represents the portion of light that can drive photosynthesis, typically the 400 to 700 nm band. PAR monitoring is essential to ensure that the plants are getting the light they need, and with controlled-environment growing, smart farming is possible without depending on unpredictable weather conditions.
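As a rough illustration of what PAR monitoring involves in practice, the sketch below converts hourly readings from a PAR sensor (photosynthetic photon flux density, in micromoles per square meter per second) into a daily light integral (DLI, in moles per square meter per day), the figure growers commonly compare against crop light targets. The sensor readings and the leafy-green target used here are invented examples, not values from this article.

```python
# Minimal sketch: convert hourly PPFD readings (umol/m^2/s) from a PAR sensor
# into a daily light integral (DLI, mol/m^2/day).
# The readings and the target below are illustrative, not measured values.

SECONDS_PER_HOUR = 3600

def daily_light_integral(hourly_ppfd):
    """Sum hourly PPFD readings into moles of PAR photons per m^2 per day."""
    micromol_total = sum(ppfd * SECONDS_PER_HOUR for ppfd in hourly_ppfd)
    return micromol_total / 1_000_000  # micromol -> mol

if __name__ == "__main__":
    # 24 hypothetical hourly readings for a partly shaded rooftop bed.
    readings = [0, 0, 0, 0, 0, 20, 80, 200, 350, 500, 650, 700,
                680, 600, 450, 300, 150, 60, 10, 0, 0, 0, 0, 0]
    dli = daily_light_integral(readings)
    target_for_leafy_greens = 12.0  # assumed target, mol/m^2/day
    print(f"DLI today: {dli:.1f} mol/m^2/day")
    if dli < target_for_leafy_greens:
        print("Below the assumed leafy-green target; consider supplemental lighting.")
    else:
        print("Meets the assumed leafy-green target.")
```

A running tally like this, kept over a week or a season, is what lets an indoor or rooftop grower decide whether supplemental lighting is worth its energy cost for a given crop.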
Urban farming ideas in the Philippines
Here are some examples of emerging ideas for urban farming.
1. Containerized and modular farming – Food can be grown indoors or outdoors in concrete spaces using growing containers. Containerized farming can make use of recycled products and other waste materials from residential, commercial, and industrial activities; to make container farming even more beneficial for urban areas in the Philippines, commercial and industrial companies are welcome to contribute their recyclable materials to sustainable food production.
2. Vertical farming – This type of cultivation uses height to maximize plant growth, and it can also use containers. It works best when it is integrated into the overall architecture and design of the building. Alternative technologies can be applied, such as growing crops in water instead of soil (hydroponics) or using the symbiotic relationship between fish, which supply nutrients, and plants, which filter the waste (aquaponics).
3. Backyard gardens – This is cultivating food at home, where the food supply is also safer and more secure. Backyard gardens benefit the community, as neighbors can share each other's backyards to achieve better yields.
4. Greenhouses – This covers agricultural practice in greenhouses in residential, industrial, and public urban areas; greenhouses need enough land for the crops they house. These systems give farmers the opportunity to grow crops year-round because they provide a controlled environment in which crops receive the conditions required for production.
5. Vertical farms – In theory this consists of planting upwards to reduce the footprint of agricultural land. Green walls can be used as a form of vertical farming because they use very limited space.
6. Aquaponics – This refers to the practice of raising aquatic animals such as fish in urban areas. It requires a system that collects stormwater from within the city and circulates it through a self-contained network of tanks or artificial fish ponds, supporting effective crop growth.
7. Beekeeping – Beekeeping can give you additional products as well as indirect benefits such as better pollination of your existing crops; having bees around vegetable plants will dramatically increase yield. However, you will need to do thorough research before taking it up: the size of the bee colony, the workforce, the weather conditions, and the availability of nectar for the bees will all affect your bottom line.
8. Closed-loop systems – These combine crop production, water conservation, waste-to-energy, solar energy, aquaculture, and many other technologies. The closed-loop concept is based on nutrient efficiency and less reliance on external farm inputs; for example, an irrigation system can use solar energy to reduce energy consumption and improve water-use efficiency.
Food security through urban agriculture in the Philippines
Agriculture experts say it is still possible for the Philippines to achieve food security through urban agriculture, a form of farming that encourages households to grow their own basic food. With economies limited by lockdowns, there is concern about food security in urban areas that rely on food produced in rural areas, and rising food costs put urban areas at particular risk. The Secretary of Agriculture has emphasized the role of urban agriculture in ensuring the food supply amid the pandemic as the country works to increase food security.
Urban gardens also support populations of natural pollinators such as bees, birds, and bats. The food shortages seen in many urban communities around the world would not be as severe if the urban poor increased food production where they live.
Benefits of urban agriculture
On a grand scale, urban agriculture can help alleviate food shortages in population centers while also enhancing the beauty of communities and homes. It shapes participants' attitudes and thinking around food production, waste recycling, environmental protection, nutrition, working together, and the dignity of hard work. There are many benefits to urban agriculture: it offers a response to the threat of food insecurity in cities by providing fresh, healthy food to Filipinos, and growing food in the city, especially as a community, can also help reduce the harmful effects of climate change as people play their part in promoting sustainable agriculture in their areas. For the participants personally, the following benefits may be added:
- A sense of fulfillment from the food they eat.
- A changed sensitivity to their environment, because what they eat to nourish their bodies now carries new meaning.
- A desire to grow more food, because home-grown produce looks and tastes better, is more nutritious, and is safe from toxic chemicals.
- A changed regard for discarded material, refuse, rain, sunlight, wind, dust, and degradable waste, once these are converted into something that can be used beneficially for food production.
- A sense of well-being, as restlessness finds solace in caring for gardens that produce the food they need.
Urban agriculture will benefit urban dwellers, urban centers, the environment, and the country in general.
Planning and land use management for urban agriculture
Identification of areas for urban agriculture – The relevant regional, provincial, and municipal/city development plans for the program areas, as well as urban land use maps, are reviewed to identify specific areas for urban agriculture.
Identification of common agricultural land use – Land capability and land suitability maps are obtained and reviewed to identify suitable crops or crop types based on physical characteristics. (A minimal scoring sketch illustrating this kind of site screening appears at the end of this section.)
The current status of urban agriculture in the Philippines
Despite many collaborative efforts by various agencies, including the Department of Agriculture and public academic institutions such as the University of Laguna, Cavite State University in Indang, Cavite, Xavier University in Cagayan de Oro City, and institutions in Nueva Ecija, many believe that urban agriculture in the Philippines is underdeveloped. Therefore, there is still ample room for research, extension, and training activities in the Philippines to promote and properly implement urban agriculture.
Urban policies in the Philippines
Urban farming in the Philippines is a shared responsibility of government at the national and sub-national levels, but local governments are key to urban development. Urban planning follows a hybrid top-down and bottom-up approach. The country has long supported a decentralized approach to health care, with integrated care running from the national level down to the district level; in this arrangement, the private sector plays a key role in providing health services, if not the leading one.
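Here is the minimal scoring sketch referred to under planning and land use management above. It shows one simple way the site-identification step might be operationalized as a weighted multi-criteria score over candidate parcels. The criteria, weights, and parcel data are invented for illustration; an actual review would draw them from the land-capability and land-use maps described above, typically inside a GIS.

```python
# Illustrative multi-criteria scoring of candidate urban-agriculture sites.
# Criteria, weights, and parcel data are invented for demonstration; a real
# exercise would derive them from land-capability and land-use maps.

WEIGHTS = {                      # assumed relative importance (sums to 1.0)
    "soil_quality": 0.30,
    "water_access": 0.30,
    "sunlight_hours": 0.20,
    "distance_to_market_km": 0.20,   # closer is better
}

def score_parcel(parcel):
    """Combine normalized criterion scores (0-1) into one weighted score."""
    s = WEIGHTS["soil_quality"] * parcel["soil_quality"]
    s += WEIGHTS["water_access"] * parcel["water_access"]
    s += WEIGHTS["sunlight_hours"] * min(parcel["sunlight_hours"] / 8.0, 1.0)
    # Invert distance so nearer parcels score higher (10 km or more scores 0).
    s += WEIGHTS["distance_to_market_km"] * max(
        0.0, 1.0 - parcel["distance_to_market_km"] / 10.0)
    return s

if __name__ == "__main__":
    parcels = [  # hypothetical candidate sites
        {"name": "rooftop A", "soil_quality": 0.4, "water_access": 0.9,
         "sunlight_hours": 7, "distance_to_market_km": 1.0},
        {"name": "vacant lot B", "soil_quality": 0.7, "water_access": 0.5,
         "sunlight_hours": 6, "distance_to_market_km": 4.0},
        {"name": "riverside plot C", "soil_quality": 0.9, "water_access": 0.8,
         "sunlight_hours": 8, "distance_to_market_km": 8.0},
    ]
    for p in sorted(parcels, key=score_parcel, reverse=True):
        print(f"{p['name']}: {score_parcel(p):.2f}")
```

The point of such a pass is not the exact numbers but the ranking it produces, which gives planners a transparent, adjustable starting point before ground-truthing the shortlisted sites.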
Governance – Urban agriculture governance primarily covers land, land use and access, food and the ecosystem, health, education, and the environment, as well as heritage and cultural practices. Researchers have established a conceptual framework for the governance of urban agriculture and identified the features that influence how urban agriculture initiatives are managed. The framework has three levels covering the basic features of urban agricultural governance:
(i) the urban context (the local geographic situation, the economic and political situation, the agricultural context, and the status of urban-rural relations);
(ii) features of external governance (public policies, partnerships, and legal processes); and
(iii) features of internal governance (project objectives, local scale, time, actors, and resources such as land, finance, and the knowledge dynamics within the project).
All of these are embedded in local conditions shaped by geography, climate, the economic and political situation, cultural values, and urban-rural relations.
NUDHF in the Philippines
In the Philippines, the challenge for national urban policymakers and decision-makers continues to be striking an effective balance between policy-making at the national level and specifying how to intervene and invest, while much of what has actually been done has been done by local governments. The current National Urban Development and Housing Framework (NUDHF) recognizes the people's right to the city by emphasizing inclusive urban development in its policy statements, along with the progress that ordinary people have made without having to wait for the promise of "trickle-down" growth to work. However, rapid urbanization is a major challenge for both national and local governments as they struggle with the increasingly difficult task of handling and resolving urban issues. National urban policy development in the Philippines is critical to achieving sustainable, smart, and green cities: urbanization has been a major global trend and can be a key contributor to progressive development. The NUDHF provides a broad framework for urban development. It aims to guide the efforts of the Philippine government, the private sector, and other stakeholders to improve the efficiency and effectiveness of the country's urban systems. Current realities and the expected effects of urbanization now require updating the country's urban development and housing framework; through the constant evolution of spaces and systems, the NUDHF is seeking a new urban development model that simultaneously expands on and departs from previous policies.
Urban infrastructure and basic services
Water and sanitation
- Streamline and improve policies and regulatory frameworks to ensure sustainable water security in urban areas. Water and sanitation infrastructure must be in line with legislation, policies, and organizational development plans. Simplifying the regulatory framework, from the approval of water and sanitation projects through to maintenance, will enable the protection, exploration, development, and expansion of water and sanitation services for large urban systems. Implement programs and measures on watershed protection.
- Promote and support modern water and sanitation technologies. Cost-effective alternative technologies, including water recycling, should be supported in water and sanitation.
This includes investing in research, prototyping, and fully developing technologies, especially local solutions.
- Provide financial support for climate-resilient and disaster-resistant water and sanitation infrastructure. Mobilizing resources, including from the private sector, will give the government the flexibility to develop and implement high-investment infrastructure projects. Replicating and improving on the achievements of privately managed water utilities will further strengthen the resilience of urban water infrastructure.
- Strengthen local government capacity for water and sanitation management.
Linking urban water use to urban agriculture has the potential to be mutually beneficial. Safe alternative sources of water can:
- facilitate greater use of urban agriculture; and
- through proper use or reuse of municipal water, improve stormwater and wastewater management, so that excessive gutter and nutrient loads on urban rivers are reduced through resource recovery.
Greater private sector participation in urban development in the Philippines
Because a developing country like the Philippines faces significant barriers to investment in capital assets for infrastructure, public services, and even disaster management, the government has chosen to address these challenges with private participation rather than burdening the population with higher taxes. The private sector, for its part, has made significant progress over time in helping to meet the challenges of the Philippines' urbanization, particularly in transportation, communications, property development, and disaster management. Much remains to be done, and the private sector can still expand its role in developing more livable communities inside and outside the city that help decongest urban centers and significantly improve living standards.
Expanding access to community services – It is also worth noting how business groups in the Philippines are rapidly expanding access to products and services that meet the basic needs of a large segment of society. To meet the challenges of urbanization, development must be felt in all segments of the population. Over time, businesses have found different ways to provide products and services at different price points that meet a wide range of needs, and they can play a role in providing practical and realistic solutions to some of the challenges facing society at large, provided they participate in industries that meet basic human needs – housing, banking, telecommunications, and water distribution. Overall, such initiatives can foster social inclusion while earning attractive returns, and they support a more comprehensive approach to community development and urban challenges.
<urn:uuid:1ad8080a-bdd1-424a-8512-26599359aa59>
CC-MAIN-2022-33
https://www.kjmeniya.in/agriculture-farming/urban-agriculture-in-the-philippines-livestock/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570868.47/warc/CC-MAIN-20220808152744-20220808182744-00098.warc.gz
en
0.929864
5,059
3.390625
3
Look, Listen & Live 1: Beginning with GOD சுருக்கமான வருணனை: Adam, Noah, Job, Abraham. 24 sections. It has a picture book to go along with the recording. உரையின் எண்: 418 மொழி: English: Southern Africa கருப்பொருள்: Creation; Multiple themes பகுப்பு: Bible Stories & Teac வேதாகம மேற்கோள்: Extensive Good day. Come and learn about some of the first people who lived on the earth. God's dealings with them teach us much about Him. Look at the pictures in the red book as you listen to this. It has teachings from God's book, the Bible. Turn to the next picture when you hear this music. Picture 1: Adam and the Animals Genesis 1:1 - 2:14 There is only one God. He is Spirit. He knows all things and He is everywhere. In the beginning only God existed; besides God there was no other living creature. The earth as we know it today, did not exist. God created everything. First He made the light. Then God made the sky, the clouds and the earth. He separated the sea from the dry land and plants started to grow. Then God made the sun, the moon and the stars. He made all the living creatures on earth and in the sea. The man in this picture is Adam. He was the first man who ever lived. God made him from the dust of the earth. He made the man after his image. God made the man to know Him (God) so that the spirit of man could be one with the spirit of God. All of the man's thoughts and desires agreed with those of God and he walked with God. Picture 2: A Wife for Adam Genesis 1:27-28, 2:15-25 God put Adam in a beautiful garden called Eden and told him to care for it. There were trees of every kind, including the tree of knowledge of good and evil and the tree of life. God allowed Adam to find names for all the animals and the birds. There were males and females of all creatures. They were not afraid of the man and did him no harm. But there was no suitable companion (mate) for Adam. So God created a woman as a companion for Adam. God caused Adam to fall into a deep sleep. He took a rib from Adam's side. From this rib God made the woman. God brought her to the man. The man and woman were naked, but they were not ashamed. They spoke to God like friends. There was no sorrow, death or evil in that wonderful place. Picture 3: The Snake in The Garden Genesis 2:16-17, 3; Isaiah 14:12 God commanded the man, Adam, "You are free to eat from any tree in the garden; but you must not eat from the tree of the knowledge of good and evil, for when you eat of it you will surely die." But Satan tricked Adam's wife. Satan had once been an angel of God, but he rebelled (turned) against God who made him. He wanted to become greater than God. So God cast him out of heaven and Satan became God's enemy. Satan, the evil one, came to the woman in the form of a beautiful snake. He tricked her and said that she would not die if she ate of the forbidden fruit. The woman ate and gave some of the fruit to Adam. As soon as they had eaten they knew that they were naked. They realised that they had disobeyed God and that their relationship with God was not the same. They made coverings of leaves for themselves and tried to hide from God. Picture 4: Adam and His Wife Outside the Garden In this picture we see what happened after Adam and his wife had disobeyed God. God had to punish them for disobeying his command, but first God said to the snake, "Because you have done this, cursed are you above all the livestock and all the wild animals!" From that time onwards Satan has been the enemy of God and man (people). 
God also foretold that the offspring of Adam and his wife would defeat Satan. Jesus did this when He died for the sins of the world and rose from the dead. God said to the woman, "I will greatly increase your pains in childbearing; with pain you will give birth to children..." God told Adam that because he had listened to his wife and had eaten from the tree, the ground would be cursed. Therefore he would have to work hard to make it produce enough food for him and his family until he died. God made them leave the garden. Adam named the woman Eve, which means 'living'. She had children by him. All people are descended from Adam and Eve. Adam and Eve, and all the people after them have evil in them and this evil separates them from God and from each other. Picture 5: Noah and the Ark Genesis 6:1 - 7:5 The children of Adam and Eve and the following generations became very many. They followed the ways of Satan and did what was wrong before God. God was sorry that He had made man. Only one man obeyed God. His name was Noah. One day God spoke to Noah and said, "I am going to put an end to all people, for the earth is filled with violence because of them . . . So make (build) yourself an ark." An ark is a big boat. "I am going to send a flood on the earth to destroy every living creature. But I will make an agreement with you. Go into the ark, you and your wife, your sons and their wives. Take into the ark with you male and female of every kind of animal and bird, in order to keep them alive. And you must take every kind of food for you and the animals to eat and store it away." God told Noah exactly how to build the ark. Noah did everything God commanded. As he was building the ark, Noah kept warning the people that God would destroy them, but they would not turn from their evil ways. Picture 6: The Great Flood When the ark was finished, Noah, his family and all the animals went into the ark. God closed the door and shut them in. Seven days later the rain began to fall. It rained for forty days and nights. Springs of water came from under the ground and the whole earth was covered with a great flood. All the people and animals outside the ark died. Picture 7: The Rainbow and God’s Promise Genesis 7:24 - 9:17 Noah and his family and all the animals were in the ark for more than a year. When at last the rain stopped and the water had dried up from the earth, they all came out of the ark. Then Noah built an altar to God, as you see here. He killed some animals and sacrificed them to God by burning them on the altar. God was pleased with it and put a rainbow in the sky. He promised never again to destroy all living creatures by a flood. Picture 8: The Tower of Babel God told Noah and his three sons to have many children and to spread out all over the earth. But their descendants would not listen. They disobeyed God and decided to build a city (big village) with a tower that reaches high into the sky. God saw the city and the tower. He also saw their disobedience and pride. So God said, "If as one people speaking the same language they have begun to do this, ... Come, let us go down and confuse (mix up) their language so (that) they will not understand each other." In this picture the people cannot understand one another any more. So they stopped building the city and they went to live in different places. Picture 9: Job Worships God Here we see a man named Job. He is offering a burnt sacrifice to God. Job worshipped the one true God and he would not follow Satan's ways. 
Job had seven sons and three daughters and he made offerings on behalf of his children. He asked God's forgiveness, if they had offended God in any way. God was pleased with Job. He blessed him and caused him to prosper. Job had large herds of animals and many servants. One day God told Satan that He was pleased with Job. But Satan started to accuse Job before God. He said that Job only worshipped God so that he would be rich. Satan said to God, "If you take away everything that Job has, then he will curse you." God knew that Satan was a liar and he knew that Job was a good man who only wanted to please Him (God). So God allowed Satan to take everything away from Job to test him. Picture 10: Job in Mourning This picture shows a day of great sorrow for Job. His servants brought him terrible news. The first servant came running to tell Job that enemies had killed his other servants and had stolen all his oxen and donkeys. Before he had finished speaking another servant came. He said lightning had struck the sheep and shepherds. All were killed. A third servant came to tell Job that bandits had attacked them and had stolen all his camels. Another servant brought the worst news of all, "Your children were feasting together when a storm blew the house down and killed them all." After hearing this, Job shaved his head and fell to the ground in grief. He said, "The Lord gave and the Lord has taken away; may the name of the Lord be praised." Although all these bad things had happened, Job did not sin by blaming God. Picture 11: Job Suffers Job 2:1 - 41:34 Satan saw that Job still honoured God, so he said to God, "If you hurt Job's body, he will curse you." God allowed Satan to test Job again. Satan brought a terrible skin disease on Job. Job went outside and scraped his sores with a piece from a broken pot. His wife said to him, "Curse God and die!" But Job answered, "Shall we accept good from God, and not trouble?" Three of Job's friends came and talked to him for many days. They tried to find out the reason for all Job's suffering. They said that he must have done something wrong and that God was punishing him. Job did not agree with them. Job questioned God's ways, but he never turned against God. Therefore God showed his greatness and knowledge to Job. Picture 12: Job Is Restored In this picture Job is well and rich again. I will tell you how this happened. When Job met God, he realised God's greatness and repented (felt truly sorry) of questioning God's ways. Then God spoke with one of Job's accusers named Eliphaz, "I'm angry with you and your two friends, because you have not spoken of me what is right, as my servant Job has." God also told them to go to Job and sacrifice a burnt offering for themselves and that his servant Job would pray for them. God said that He (God) would forgive them. Job did pray for his friends and God accepted his prayer. After this God made Job well and rich again. Job's friends and relatives came to feast with him and to bring him gifts. God gave him seven sons and three beautiful daughters. Job lived to see many of his descendants and he died a very old man. Let me tell you about a man called Abram. He obeyed God and through him many people have been blessed. Be ready to turn to the next picture when you hear this music. Picture 13: Abram Leaves Home Genesis 12:1 - 13:4 In this picture you see Abram, his wife Sarai, his nephew Lot and their servants. 
They are going on a long journey because God had spoken to Abram saying, "Leave your country, your people and your father's household and go to the land I will show you. I will make you into a great nation and I will bless you; ... and all peoples on earth will be blessed through you." Abram believed God's promise. He and his companions took all their possessions and travelled for many days until they reached the land of Canaan. There God appeared to Abram and said, "To your offspring I will give this land." Abram had travelled through the land, but when he came to the Negev area in the south of Canaan, a famine came over the land, and there was no food for them to eat. Abram and his people had to go to another country called Egypt, to find food. But later he returned to Canaan, because he still believed God's promise to him.

Picture 14: Abram and Lot

Abram and Lot lived together in Canaan. They had so many animals that there was not enough grazing land for them in this one place. So the herdsmen of Abram and those of Lot quarrelled. Then Abram said to Lot, "Let's not have any quarrelling between you and me, or between your herdsmen and mine, for we are brothers." Abram told Lot to choose any part of the land he wanted. Lot wanted the best for himself, so he chose the fertile valley of the Jordan River, where there were also cities to live in. Lot went to live in this valley near a city called Sodom. The people of Sodom were very wicked. They displeased God. Abram stayed in the land of Canaan and there God spoke to him again. God said, "All the land that you see, I will give to you and your offspring forever."

Picture 15: Abram Meets the King of Peace

While Lot lived in the valley, war broke out. Lot and his family were taken away captive with all the people of Sodom. So Abram and his armed men went to set Lot free. They defeated the enemy and saved all the captives and their possessions. As they returned from the battle, they met a king, as you can see in this picture. The king's name was Melchizedek. He was king of a city called Salem, which means 'peace'. Melchizedek worshipped God and pleased Him. Melchizedek brought bread and wine to Abram. Abram bowed down before Melchizedek and brought him gifts. The king of Sodom also wanted to give Abram gifts for saving his people, but Abram would receive nothing from this evil king.

Picture 16: Abram and the Stars (Genesis 15:1-21, 17:1-19)

After Abram had rescued Lot and all the people of Sodom, Lot returned to that wicked city. Then God spoke to Abram, "Do not be afraid, Abram. I am your shield, your very great reward." But Abram was sad because God had not given him a son as an heir. For this reason God took Abram outside and said, "Look, try to count the stars. You will have as many descendants as that." Abram believed the promise of God, even though he had no children yet.

Picture 17: The Baby Ishmael (Genesis 16 - 17)

Look! Abram has a baby son. This is how it came to be: Abram had lived in Canaan for ten years, still waiting to receive what God had promised, for Sarai, his wife, was still without a child. Sarai had a servant named Hagar. She wanted Hagar to have a child by Abram and then Sarai would take the child to be her own. That was the custom in those days. So Hagar became pregnant by Abram. Sarai became jealous of her. She treated her badly, so Hagar ran away. But the angel of God met Hagar and he confronted her. He told her, "... you will have a son. You shall name him Ishmael, for the Lord has heard of your misery."
Ishmael means 'God hears'. "He will be a wild donkey of a man; his hand will be against everyone and everyone's hand against him, and he will live in hostility towards all his brothers." Ishmael was born and later became the father of the Arab people. Thirteen years after Ishmael's birth, God came to Abram and said to him, "I am God Almighty; walk before me and be blameless. I will confirm my covenant between me and you and will greatly increase your numbers . . . As for me, this is my covenant with you: You will be the father of many nations. No longer will you be called Abram; your name will be Abraham, for I have made you a father of many nations." Because of the new name that God had given him, Abraham knew that God would give him many descendants. God also said to Abraham, "The whole land of Canaan, . . . I will give . . . to you and your descendants after you; and I will be their God." God also changed Sarai's name to Sarah. God promised to bless her and give Abraham a son by her.

Picture 18: Sarah Laughs

One day three messengers of God came to Abraham's tent. Abraham called Sarah to prepare food, and he served them himself. Then the men asked Abraham, "Where is your wife Sarah?" Abraham answered, "There, in the tent." One of the three men was the Lord God Himself and He said to Abraham, "I will surely return to you about this time next year, and Sarah your wife will have a son." Sarah heard Him and she laughed to herself because she was past her years of childbearing. She did not believe Him. Then God spoke to Abraham and said, "Why did Sarah laugh and say, 'Will I really have a child, now that I am old?' Is anything too hard for the Lord? I will return to you at the appointed time next year and Sarah will have a son."

Picture 19: Abraham Prays for Sodom (Genesis 18:16 - 19:29)

When the three visitors got up to leave, Abraham walked along with them to see them on their way. They looked down toward Sodom and God told Abraham that He was going to visit Sodom. He was planning to destroy the cities of Sodom and Gomorrah because the people there were doing what was wrong before God. They had abandoned God's plan for a holy relationship between a man and a woman. Abraham remembered Lot and said to the Lord God, "Will you sweep away the righteous with the wicked? . . . Will not the Judge of all the earth do right?" Abraham began to plead with God. Finally God said that He would not destroy Sodom if there were only ten people who served God. When the Lord had finished speaking to Abraham, He left and Abraham returned home. But there were not even ten servants of God in Sodom. When the two messengers of God came to Sodom, they saw the wickedness of the people. The messengers commanded Lot and his family to flee out of Sodom and told them not to look back, because God had shown mercy to Lot. Then God sent fire from heaven and Sodom and Gomorrah with all the wicked people who lived there were destroyed. Lot and his family fled just in time, but on their way his wife looked back and immediately turned into a pillar of salt. Abraham had pleaded with God to save all the righteous people. Only Lot and his two daughters lived, because God remembered Abraham and how he had prayed for Lot.

Picture 20: Abraham's Sacrifice (Genesis 21:1 - 22:19)

When Sarah was 90 years old she gave birth to a son, just as God had promised Abraham. Abraham was already 100 years old. They called the boy Isaac, which means 'he laughs'. When Isaac was a young boy, God tested Abraham. God wanted to see if Abraham would obey Him.
God said to him, "Take your only son Isaac, whom you love, and offer him as a sacrifice to me." Abraham obeyed God. Abraham placed the wood on Isaac and he himself carried the fire and the knife. As the two of them went on together, Isaac said, " . . . but where is the lamb for the burnt offering?" Abraham answered, "God himself will provide the lamb for the burnt offering, my son." Abraham built the altar, put the wood on it, then bound his son and laid him on the altar. Just as Abraham raised the knife to kill Isaac, God's angel spoke to him and said to Abraham, "Do not lay a hand on the boy . . . Do not do anything to him. Now I know that you fear God, because you have not withheld (kept) from me your son, your only son." Then Abraham saw a ram caught in a bush. He offered the ram as a sacrifice to God instead of his son. So God said to him, "I will surely bless you (do good to you) . . . and through your offspring (descendants) all the nations on earth will be blessed, because you have obeyed me."

Picture 21: Abraham and His Servant (Genesis 24:1 - 25:11)

When Abraham was very old, he called his servant and said, "Promise me that you will go back to my country and to my own relatives, and get a wife for my son, Isaac." Abraham did not want Isaac to leave the land God had promised, or take a wife from the Canaanite women. The servant obeyed Abraham and travelled to the city of Nahor. There he asked God to guide him to find the right woman for Isaac. A beautiful young woman named Rebekah came out of the city to draw water from a well. She was willing to draw water for Abraham's servant and his ten camels. The servant knew that God had guided him to this woman and that she was the woman he was looking for. He found out that she was the granddaughter of Abraham's brother. The servant worshipped God who had answered his prayer and who had led him to his master's relatives. So Rebekah became Isaac's wife. Abraham died when he was 175 years old. All his life he trusted God to keep his promises, so God accepted and blessed him. God still blesses all those who trust in Him like Abraham.

Picture 22: Jesus Is Born (Matthew 1:18-25; Galatians 4:4-5)

The Word of God is true. God never lies. God told Adam that He would send his chosen one to defeat Satan. He warned Noah that He would destroy those who are evil. God showed Job that He wants what is best for us. He showed Abraham that He always keeps his promises. In this picture you can see the baby called Jesus. He was born at the right time in the land of Palestine, later named Israel. His mother's name was Mary. She was a descendant of Abraham and Isaac. Mary was a virgin when she gave birth to Jesus. Jesus had no human father. God is his Father. Jesus is the one who brings blessings to the whole world. He came to fulfil God's promise that Satan would be defeated and to bring us back to God.

Picture 23: The Death of Jesus

When Jesus grew to be a man, He taught people the ways of God. Jesus did many miracles to show that He was from God. But men did not believe Him. They killed Jesus by nailing Him to a wooden cross. A soldier pierced his side with a spear to make sure that He was really dead. God told Adam that he would die if he ate from the tree. We have all disobeyed God and we also deserve everlasting death. You will remember that God provided a ram for Abraham to be killed as a sacrifice instead of Isaac.
In the same way God, instead of condemning all people to eternal death, gave his only son, Jesus, to die as a sacrifice for the sin of all people. Yes, our sin has separated us from God. Jesus is the way back to God.

Picture 24: Jesus Is Alive

Jesus' friends laid his dead body in a grave. He was in the grave for three days. Then He rose from the dead. Death could not hold Jesus, the son of God. Jesus showed himself to his disciples and stood in their midst. They were all amazed. One disciple, called Thomas, was not with the others when Jesus came to them. Thomas said to them, "If I don't see the nail marks in his hands and put my finger where the nails were, and put my hand into his side, I will not believe it." One week later Jesus appeared to them again. Look at the picture. Jesus showed Thomas the nail marks in his hands and He told Thomas to place his hands in his side. Then He said to Thomas, "Stop doubting and believe." Jesus also said, "... blessed are those who have not seen and yet have believed." Let us not be like Thomas who doubted God. Remember, God said to Adam that He would send someone to defeat Satan. When Jesus died on the cross and rose from the dead, He overcame Satan. Satan has no power over people who believe and trust in Jesus and obey Him. This is exactly what God promised Abraham too. God promised him that all the nations on earth would be blessed through him. This blessing is that we can be saved from our sin and live with God eternally. This blessing came to be through Jesus. We can receive this blessing if we believe in Jesus and obey his Word.
North Sea (n.): an arm of the North Atlantic between the British Isles and Scandinavia; oil was discovered under the North Sea in 1970.

|Primary sources||Forth, Ythan, Elbe, Weser, Ems, Rhine/Waal, Meuse, Scheldt, Spey, Tay, Thames, Humber, Tees, Tyne, Wear, Crouch|
|Basin countries||Norway, Denmark, Germany, Netherlands, Belgium, France and the United Kingdom (England, Scotland)|
|Max length||960 km (600 mi)|
|Max width||580 km (360 mi)|
|Surface area||750,000 km2 (290,000 sq mi)|
|Average depth||95 m (312 ft)|
|Max depth||700 m (2,300 ft)|
|Water volume||94,000 km3 (23,000 cu mi)|
|Salinity||3.4 to 3.5%|
|Max temperature||17 °C (63 °F)|
|Min temperature||6 °C (43 °F)|
|References||Safety at Sea and Royal Belgian Institute of Natural Sciences|

The North Sea is a marginal sea of the Atlantic Ocean located between Great Britain, Scandinavia, Germany, the Netherlands, and Belgium. An epeiric (or "shelf") sea on the European continental shelf, it connects to the ocean through the English Channel in the south and the Norwegian Sea in the north. It is more than 970 kilometres (600 mi) long and 580 kilometres (360 mi) wide, with an area of around 750,000 square kilometres (290,000 sq mi).

The North Sea has long been the site of important European shipping lanes as well as a major fishery. The sea is a popular destination for recreation and tourism in bordering countries and more recently has developed into a rich source of energy resources, including fossil fuels, wind, and early efforts in wave power.

Historically, the North Sea has featured prominently in geopolitical and military affairs, particularly in Northern Europe but also globally through the power northern European actors projected worldwide during much of the Middle Ages and modern era. The North Sea was the centre of the Vikings' rise, and subsequently the Hanseatic League, the Netherlands, and the British each sought to dominate the North Sea and through it to control access to the markets and resources of the world. As Germany's only outlet to the ocean, the North Sea continued to be strategically important through both World Wars.

The coast of the North Sea presents a diversity of geological and geographical features. In the north, deep fjords and sheer cliffs mark the Norwegian and Scottish coastlines, whereas the south consists primarily of sandy beaches and wide mudflats. Due to the dense population, heavy industrialization, and intense use of the sea and area surrounding it, there have been a number of environmental issues affecting the sea's ecosystems. Environmental concerns, commonly including overfishing, industrial and agricultural runoff, dredging, and dumping among others, have led to a number of efforts to prevent degradation of the sea while still making use of its economic potential.

The North Sea is bounded by the Orkney Islands and east coasts of England and Scotland to the west and the northern and central European mainland to the east and south, including Norway, Denmark, Germany, the Netherlands, Belgium, and France. In the southwest, beyond the Straits of Dover, the North Sea becomes the English Channel connecting to the Atlantic Ocean.
In the east, it connects to the Baltic Sea via the Skagerrak and Kattegat, narrow straits that separate Denmark from Norway and Sweden respectively. In the north it is bordered by the Shetland Islands, and connects with the Norwegian Sea, which lies in the very north-eastern part of the Atlantic. It is more than 970 kilometres (600 mi) long and 580 kilometres (360 mi) wide, with an area of 750,000 square kilometres (290,000 sq mi) and a volume of 94,000 cubic kilometres (23,000 cu mi). Around the edges of the North Sea are sizeable islands and archipelagos, including Shetland, Orkney, and the Frisian Islands.

The North Sea receives freshwater from a number of European continental watersheds, as well as the British Isles. A large part of the European drainage basin empties into the North Sea, including water from the Baltic Sea. The largest and most important watersheds affecting the North Sea are those of the Elbe and the Rhine-Meuse. Around 185 million people live in the catchment area of the rivers that flow into the North Sea, encompassing some highly industrialized areas.

For the most part, the sea lies on the European continental shelf with a mean depth of 90 metres (300 ft). The only exception is the Norwegian trench, which extends parallel to the Norwegian shoreline from Oslo to an area north of Bergen. It is between 20 and 30 kilometres (12 and 19 mi) wide and has a maximum depth of 725 metres (2,379 ft). The Dogger Bank, a vast moraine, or accumulation of unconsolidated glacial debris, rises to a mere 15 to 30 metres (50-100 ft) below the surface. This feature has produced the finest fishing location of the North Sea. The Long Forties and the Broad Fourteens are large areas with roughly uniform depth in fathoms (forty fathoms and fourteen fathoms, or 73 and 26 m deep, respectively). These great banks and others make the North Sea particularly hazardous to navigate, a hazard that has been alleviated by the implementation of satellite navigation systems. The Devil's Hole lies 200 miles (320 km) east of Dundee, Scotland. The feature is a series of asymmetrical trenches between 20 and 30 kilometres (12 and 19 mi) long, 1 and 2 kilometres (0.62 and 1.2 mi) wide and up to 230 metres (750 ft) deep.

The limits of the North Sea are formally defined as follows. On the Northwest: from Dunnet Head (3°22'W) in Scotland to Tor Ness (58°47'N) in the Island of Hoy, thence through this island to the Kame of Hoy (58°55'N), on to Breck Ness on Mainland (58°58'N), through this island to Costa Head (3°14'W) and to Inga Ness (59°17'N) in Westray, through Westray to Bow Head, across to Mull Head (North point of Papa Westray) and on to Seal Skerry (North point of North Ronaldsay), and thence to Horse Island (South point of the Shetland Islands). On the North: from the North point (Fethaland Point) of the Mainland of the Shetland Islands, across to Graveland Ness (60°39'N) in the Island of Yell, through Yell to Gloup Ness (1°04'W) and across to Spoo Ness (60°45'N) in Unst island, through Unst to Herma Ness (60°51'N), on to the SW point of the Rumblings and to Muckle Flugga, all these being included in the North Sea area; thence up the meridian of 0°53' West to the parallel of 61°00' North and eastward along this parallel to the coast of Norway, the whole of Viking Bank being thus included in the North Sea.

The average temperature in summer is 17 °C (63 °F) and 6 °C (43 °F) in the winter. The average temperatures have been trending higher since 1988, which has been attributed to climate change.
Air temperatures in January range on average from 0 to 4 °C (32 to 39 °F) and in July from 13 to 18 °C (55 to 64 °F). The winter months see frequent gales and storms. The salinity averages between 34 and 35 grams of salt per litre of water. The salinity has the highest variability where there is fresh water inflow, such as at the Rhine and Elbe estuaries, the Baltic Sea exit and along the coast of Norway.

The North Sea is an arm of the Atlantic Ocean receiving the majority of ocean current from the northwest opening, and a lesser portion of warm current from the smaller opening at the English Channel. These tidal currents leave along the Norwegian coast. Surface and deep water currents may move in different directions. Low salinity surface coastal waters move offshore, and deeper, denser high salinity waters move inshore.

The North Sea, located on the continental shelf, has different waves than those in deep ocean water. The wave speeds are diminished and the wave amplitudes are increased. In the North Sea there are two amphidromic systems and a third incomplete amphidromic system. The average tide difference in wave amplitude is between 0 and 8 metres (0 to 26 ft). The Kelvin tide of the Atlantic Ocean is a semidiurnal wave that travels northward. Some of the energy from this wave travels through the English Channel into the North Sea. The wave still travels northward in the Atlantic Ocean, and once past the northern tip of Great Britain, the Kelvin wave turns east and south and once again enters the North Sea.

The eastern and western coasts of the North Sea are jagged, formed by glaciers during the ice ages. The coastlines along the southernmost part are covered with the remains of deposited glacial sediment. The Norwegian mountains plunge into the sea, creating deep fjords and archipelagos. South of Stavanger, the coast softens and the islands become fewer. The eastern Scottish coast is similar, though less severe than Norway's. From the north-east of England, the cliffs become lower and are composed of less resistant moraine, which erodes more easily, so that the coasts have more rounded contours. In Holland, Belgium and in the east of England (East Anglia) the littoral is low and marshy. The east coast and south-east of the North Sea (Wadden Sea) have coastlines that are mainly sandy and straight owing to longshore drift, particularly along Belgium and Denmark.

The southern coastal areas were originally amphibious flood plains and swampy land. In areas especially vulnerable to storm tides, people settled behind elevated levees and on natural areas of high ground such as spits and Geestland. As early as 500 BC, people were constructing artificial dwelling hills higher than the prevailing flood levels. It was only around the beginning of the High Middle Ages, in 1200 AD, that inhabitants began to connect single ring dikes into a dike line along the entire coast, thereby turning amphibious regions between the land and the sea into permanent solid ground. The modern form of the dikes, supplemented by overflow and lateral diversion channels, began to appear in the 17th and 18th centuries, built in the Netherlands. The North Sea floods of 1953 and 1962 provided the impetus for further raising of the dikes as well as the shortening of the coast line so as to present as little surface area as possible to the punishment of the sea and the storms. Currently, 27% of the Netherlands is below sea level, protected by dikes, dunes, and beach flats.
Coastal management today consists of several levels. The dike slope reduces the energy of the incoming sea, so that the dike itself does not receive the full impact. Dikes that lie directly on the sea are especially reinforced. The dikes have, over the years, been repeatedly raised, sometimes up to 9 metres (30 ft), and have been made flatter to better reduce wave erosion. Where the dunes are sufficient to protect the land behind them from the sea, these dunes are planted with beach grass to protect them from erosion by wind, water, and foot traffic.

Storm tides threaten, in particular, the coasts of the Netherlands, Belgium, Germany, and Denmark and low-lying areas of eastern England, particularly around The Wash and Fens. Storm surges are caused by changes in barometric pressure combined with strong wind-driven wave action. The first recorded storm tide flood was the Julianenflut, on 17 February 1164. In its wake the Jadebusen (a bay on the coast of Germany) began to form. A storm tide in 1228 is recorded to have killed more than 100,000 people. In 1362, the Second Marcellus Flood, also known as the Grote Manndränke, hit the entire southern coast of the North Sea. Chronicles of the time again record more than 100,000 deaths as large parts of the coast were lost permanently to the sea, including the now legendary lost city of Rungholt. In the 20th century, the North Sea flood of 1953 flooded several nations' coasts and cost more than 2,000 lives. 315 citizens of Hamburg died in the North Sea flood of 1962.

Though rare, the North Sea has been the site of a number of historically documented tsunamis. The Storegga Slides were a series of underwater landslides in which a piece of the Norwegian continental shelf slid into the Norwegian Sea. The immense landslips occurred between 8150 BC and 6000 BC and caused a tsunami up to 20 metres (66 ft) high that swept through the North Sea, having the greatest effect on Scotland and the Faeroe Islands. The Dover Straits earthquake of 1580 is among the first recorded earthquakes in the North Sea, measuring between 5.6 and 5.9 on the Richter Scale. This event caused extensive damage in Calais through its tremors and may have triggered a tsunami, though this has never been confirmed. The theory is that a vast underwater landslide in the English Channel, triggered by the earthquake, in turn caused a tsunami. The tsunami triggered by the 1755 Lisbon Earthquake reached Holland, although the waves had lost their destructive power. The largest earthquake ever recorded in the United Kingdom was the 1931 Dogger Bank earthquake, which measured 6.1 on the Richter Scale and caused a small tsunami that flooded parts of the British coast.

Shallow epicontinental seas like the current North Sea have long existed on the European continental shelf. The rifting that formed the northern part of the Atlantic Ocean during the Jurassic and Cretaceous periods caused tectonic uplift in the British Isles. Since then, a shallow sea has almost continuously existed between the highs of the Fennoscandian Shield and the British Isles. This precursor of the current North Sea has grown and shrunk with the rise and fall of the eustatic sea level during geologic time. Sometimes it was connected with other shallow seas, such as the sea above the Paris Basin to the south-west, the Paratethys Sea to the south-east, or the Tethys Ocean to the south.
During the Late Cretaceous, all of modern mainland Europe except for Scandinavia was a scattering of islands. By the Early Oligocene, the emergence of Western and Central Europe had almost completely separated the North Sea from the Tethys Ocean, which gradually shrank to become the Mediterranean as Southern Europe and South West Asia became dry land. The North Sea was cut off from the English Channel by a narrow land bridge until that was breached by at least two catastrophic floods between 450,000 and 180,000 years ago. Since the start of the Quaternary period, the eustatic sea level has fallen during each glacial period and then risen again. Every time the ice sheet reached its greatest extent, the North Sea became almost completely dry. The present-day coastline formed after the Last Glacial Maximum, when the sea began to flood the European continental shelf.

In 2006 a bone fragment was found while drilling for oil in the North Sea. Analysis indicated that it was a Plateosaurus from 199 to 216 million years ago. This was the deepest dinosaur fossil ever found and the first find for Norway.

Copepods and other zooplankton are plentiful in the North Sea. These tiny organisms are crucial elements of the food chain supporting many species of fish. Over 230 species of fish live in the North Sea. Cod, haddock, whiting, saithe, plaice, sole, mackerel, herring, pouting, sprat, and sandeel are all very common and are fished commercially. Due to the various depths of the North Sea trenches and differences in salinity, temperature, and water movement, some fish such as blue-mouth redfish and rabbitfish reside only in small areas of the North Sea. Crustaceans are also commonly found throughout the sea. Norway lobster, deep-water prawns, and brown shrimp are all commercially fished, but other species of lobster, shrimp, oyster, mussels and clams all live in the North Sea. Recently non-indigenous species have become established, including the Pacific oyster and Atlantic jackknife clam.

The coasts of the North Sea are home to nature reserves including the Ythan Estuary, Fowlsheugh Nature Preserve, and Farne Islands in the UK and the Wadden Sea National Parks in Germany. These locations provide breeding habitat for dozens of bird species. Tens of millions of birds make use of the North Sea for breeding, feeding, or migratory stopovers every year. Populations of black-legged kittiwakes, Atlantic puffins, northern fulmars, and species of petrels, gannets, seaducks, loons (divers), cormorants, gulls, auks, terns, and many other seabirds make these coasts popular for birdwatching.

The North Sea is also home to marine mammals. Common seals and harbour porpoises can be found along the coasts, at marine installations, and on islands. The very northern North Sea islands such as the Shetland Islands are occasionally home to a larger variety of pinnipeds, including bearded, harp, hooded and ringed seals, and even walrus. North Sea cetaceans include various porpoise, dolphin and whale species.

Plant species in the North Sea include species of wrack, among them bladder wrack, knotted wrack, and serrated wrack. Algae, macroalgae, and kelp, such as oarweed and Laminaria hyperborea, and species of maerl are found as well. Eelgrass, formerly common in the entirety of the Wadden Sea, was nearly wiped out in the 20th century by a disease. Similarly, sea grass once coated huge tracts of ocean floor, but trawling and dredging have diminished its habitat and prevented its return.
Invasive Japanese seaweed has spread along the shores of the sea, clogging harbours and inlets, and has become a nuisance. Due to the heavy human populations and high level of industrialization along its shores, the wildlife of the North Sea has suffered from pollution, overhunting, and overfishing. Flamingos, pelicans, and the great auk were once found along the southern shores of the North Sea, but went extinct over the 2nd millennium. Gray whales also resided in the North Sea but were driven to extinction in the Atlantic in the 17th century. Other species have dramatically declined in population, though they are still found. Right whales, sturgeon, shad, rays, skates, salmon, and other species were common in the North Sea until the 20th century, when numbers declined due to overfishing. Other factors like the introduction of non-indigenous species, industrial and agricultural pollution, trawling and dredging, human-induced eutrophication, construction on coastal breeding and feeding grounds, sand and gravel extraction, offshore construction, and heavy shipping traffic have also contributed to the decline.

The OSPAR commission manages the OSPAR convention to counteract the harmful effects of human activity on wildlife in the North Sea, preserve endangered species, and provide environmental protection. All North Sea border states are signatories of the MARPOL 73/78 Accords, which preserve the marine environment by preventing pollution from ships. Germany, Denmark, and the Netherlands also have a trilateral agreement for the protection of the Wadden Sea, or mudflats, which run along the coasts of the three countries on the southern edge of the North Sea.

Through history various names have been used for the North Sea. One of the earliest recorded names was Septentrionalis Oceanus, or "Northern Ocean," which was cited by Pliny. The name "North Sea" probably came into English via the Dutch "Noordzee"; the Dutch named it thus either in contrast with the Zuiderzee ("South Sea"), located south of Frisia, or simply because the sea is generally to the north of the Netherlands. Prior to the adoption of "North Sea," the names used in English were "German Sea" or "German Ocean" (from the Latin "Mare Germanicum" and "Oceanus Germanicus"), and they persisted even into the late 19th century. In Danish, the term "Vesterhavet" (lit. "Western ocean") is used as frequently as Nordsøen (lit. "North lake") as a synonym for the North Sea.

The North Sea has provided waterway access for commerce and conquest. Many areas have access to the North Sea with its long coastline and European rivers that empty into it. The British Isles had been protected from invasion by the North Sea waters until the Roman conquest of Britain in 43 AD. The Romans established organised ports, shipping increased, and sustained trade began. When the Romans abandoned Britain in 410, the Germanic Angles, Saxons, and Jutes began the next great migration across the North Sea, invading England during the Migration Period. The Viking Age began in 793 with the attack on Lindisfarne, and for the next quarter-millennium the Vikings ruled the North Sea. In their superior longships, they raided, traded, and established colonies and outposts on the sea's coasts.

From the Middle Ages through the 15th century, the northern European coastal ports exported domestic goods, dyes, linen, salt, metal goods and wine. The Scandinavian and Baltic areas shipped grain, fish, naval necessities, and timber.
In turn the North Sea countries imported high-grade cloths, spices, and fruits from the Mediterranean region. Commerce during this era was mainly conducted by sea because of underdeveloped roadways. In the 13th century the Hanseatic League, though centred on the Baltic Sea, started to control most of the trade through important members and outposts on the North Sea. The League lost its dominance in the 16th century, as neighbouring states took control of former Hanseatic cities and outposts and internal conflict prevented effective cooperation and defence. Furthermore, as the League lost control of its maritime cities, new trade routes emerged that provided Europe with Asian, American, and African goods.

The 17th-century Dutch Golden Age, during which Dutch herring, cod and whale fisheries reached an all-time high, saw Dutch power at its zenith. Important overseas colonies, a vast merchant marine, a powerful navy and large profits made the Dutch the main challengers to an ambitious England. This rivalry led to the first three Anglo-Dutch Wars between 1652 and 1673, which ended with Dutch victories. After the Glorious Revolution, the Dutch prince William ascended to the English throne. With both countries united, commercial, military, and political power shifted from Amsterdam to London. The Great Northern War (1700–1721) and the War of the Spanish Succession (1701–1714) were fought concurrently. The British did not face a challenge to their dominance of the North Sea until the 20th century.

Tensions in the North Sea were again heightened in 1904 by the Dogger Bank incident, in which Russian naval vessels mistook British fishing boats for Japanese ships and fired on them, and then upon each other. During the First World War, Great Britain's Grand Fleet and Germany's Kaiserliche Marine faced each other on the North Sea, which became the main theatre of the war for surface action. Britain's larger fleet was able to establish an effective blockade that, for most of the war, restricted the Central Powers' access to many crucial resources. Major battles included the Battle of Heligoland Bight, the Battle of the Dogger Bank, and the Battle of Jutland. World War I also brought the first extensive use of submarine warfare, and a number of submarine actions occurred in the North Sea.

The Second World War also saw action in the North Sea, though it was largely restricted to aircraft reconnaissance, fighter-bombers, submarines, and smaller vessels such as minesweepers and torpedo boats. In the last years of the war and the first years thereafter, hundreds of thousands of tons of weapons were disposed of by being sunk in the North Sea.

After the war, the North Sea lost much of its military significance because it is bordered only by NATO member states. However, it gained significant economic importance in the 1960s as the states on the North Sea began full-scale exploitation of its oil and gas resources. The North Sea continues to be an active trade route.

Countries that border the North Sea all claim the 12 nautical miles (22 km; 14 mi) of territorial waters, within which they have exclusive fishing rights. The Common Fisheries Policy of the European Union (EU) exists to coordinate fishing rights and assist with disputes between EU states and the EU border state of Norway. After the discovery of mineral resources in the North Sea, the Convention on the Continental Shelf established country rights, largely divided along the median line.
The median line is defined as the line "every point of which is equidistant from the nearest points of the baselines from which the breadth of the territorial sea of each State is measured." The ocean floor border between Germany, the Netherlands, and Denmark was only reapportioned after protracted negotiations and a judgement of the International Court of Justice.

Test drilling began in 1966 and then, in 1969, Phillips Petroleum Company discovered the Ekofisk oil field, distinguished by valuable, low-sulphur oil. Commercial exploitation began in 1971 with tankers and, after 1975, by a pipeline, first to Teesside, England, and then, after 1977, also to Emden, Germany. The exploitation of the North Sea oil reserves began just before the 1973 oil crisis, and the climb of international oil prices made the large investments needed for extraction much more attractive. Although the production costs are relatively high, the quality of the oil, the political stability of the region, and the nearness of important markets in western Europe have made the North Sea an important oil-producing region. The largest single humanitarian catastrophe in the North Sea oil industry was the destruction of the offshore oil platform Piper Alpha in 1988, in which 167 people lost their lives. Besides the Ekofisk oil field, the Statfjord oil field is also notable, as it was the cause of the first pipeline to span the Norwegian trench. The largest natural gas field in the North Sea, the Troll gas field, lies in the Norwegian trench, which drops over 300 metres (980 ft), requiring the construction of the enormous Troll A platform to access it.

The price of Brent Crude, one of the first types of oil extracted from the North Sea, is used today as a standard price for comparison for crude oil from the rest of the world. The North Sea contains western Europe's largest oil and natural gas reserves and is one of the world's key non-OPEC producing regions.

The North Sea is Europe's main fishery, accounting for over 5% of the international commercial fish catch. Fishing in the North Sea is concentrated in the southern part of the coastal waters. The main method of fishing is trawling. In 1995, the total volume of fish and shellfish caught in the North Sea was approximately 3.5 million tonnes. Besides fish, it is estimated that one million tonnes of unmarketable by-catch is caught and discarded each year. In recent decades, overfishing has left many fisheries unproductive, disturbing marine food chain dynamics and costing jobs in the fishing industry. Herring, cod and plaice fisheries may soon face the same plight as mackerel fishing, which ceased in the 1970s due to overfishing. The objective of the European Union Common Fisheries Policy is to minimize the environmental impact associated with resource use by reducing fish discards, increasing the productivity of fisheries, stabilising markets for fisheries and fish processing, and supplying fish at reasonable prices for the consumer.

In addition to oil, gas, and fish, the states along the North Sea also take millions of cubic metres per year of sand and gravel from the ocean floor. These are used for beach nourishment, land reclamation and construction. Rolled pieces of amber may be picked up on the east coast of England.

Due to the strong prevailing winds, countries on the North Sea, particularly Germany and Denmark, have used the shore for wind power since the 1990s. The North Sea is the home of one of the first large-scale offshore wind farms in the world, Horns Rev 1, completed in 2002.
Since then many other wind farms have been commissioned in the North Sea (and elsewhere), including the two largest wind farms in the world as of September 2010: Thanet in the UK and Horns Rev 2 in Denmark. The expansion of offshore wind farms has met with some resistance. Concerns have included shipping collisions and environmental effects on ocean ecology and wildlife such as fish and migratory birds; however, these concerns were found to be negligible in a long-term study in Denmark released in 2006 and again in a UK government study in 2009. There are also concerns about reliability and the rising costs of constructing and maintaining offshore wind farms. Despite these concerns, development of North Sea wind power is continuing, with plans for additional wind farms off the coasts of Germany, the Netherlands, and the UK. There have also been proposals for a transnational power grid in the North Sea to connect new offshore wind farms.

Energy production from tidal power is still in a pre-commercial stage. The European Marine Energy Centre has installed a wave testing system at Billia Croo on the Orkney mainland and a tidal power testing station on the nearby island of Eday. Since 2003, a prototype Wave Dragon energy converter has been in operation in Nissum Bredning fjord in northern Denmark.

The beaches and coastal waters of the North Sea are popular destinations for tourists. The Belgian, Dutch, German and Danish coasts are especially developed for tourism. The North Sea Trail is a long-distance trail linking seven countries around the North Sea. Windsurfing and sailing are popular sports because of the strong winds. Mudflat hiking, recreational fishing and birdwatching are among other popular activities. The climatic conditions on the North Sea coast are often claimed to be especially healthful. As early as the 19th century, travellers used their stays on the North Sea coast as curative and restorative vacations. The sea air, temperature, wind, water, and sunshine are counted among the beneficial conditions that are said to activate the body's defences, improve circulation, strengthen the immune system, and have healing effects on the skin and the respiratory system.

The North Sea is important for marine transportation and its shipping lanes are among the busiest in the world. Major ports are located along its coasts: Rotterdam, the busiest port in Europe and the third busiest port in the world by tonnage as of 2008; Antwerp (16th) and Hamburg (27th); Bremen/Bremerhaven and Felixstowe, both in the top 30 busiest container seaports; as well as the Port of Bruges-Zeebrugge, Europe's leading RoRo port. Fishing boats, service boats for offshore industries, sport and pleasure craft, and merchant ships to and from North Sea ports and Baltic ports must share routes on the North Sea. The Dover Strait alone sees more than 400 commercial vessels a day. Because of this volume, navigation in the North Sea can be difficult in high-traffic zones, so ports have established elaborate vessel traffic services to monitor and direct ships into and out of port.

The North Sea coasts are home to numerous canals and canal systems to facilitate traffic between and among rivers, artificial harbours, and the sea. The Kiel Canal, connecting the North Sea with the Baltic Sea, is the most heavily used artificial seaway in the world, with an average of 89 ships per day in 2009, not including sporting boats and other small watercraft.
It saves an average of 250 nautical miles (460 km; 290 mi) compared with the voyage around the Jutland Peninsula. The North Sea Canal connects Amsterdam with the North Sea.
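To put that saving in perspective, here is a small back-of-the-envelope sketch. The 250 nautical miles come from the figure above; the ship speeds are assumptions chosen only for illustration, not values from the article.

```python
# Back-of-the-envelope check on the Kiel Canal figure quoted above: roughly
# 250 nautical miles are avoided compared with sailing around the Jutland
# Peninsula. The cruising speeds below are assumptions for illustration,
# not figures from the article.

NM_TO_KM = 1.852  # kilometres per nautical mile

def hours_saved(distance_saved_nm: float, speed_knots: float) -> float:
    """Time saved, in hours, for a given shortcut length and ship speed."""
    return distance_saved_nm / speed_knots

if __name__ == "__main__":
    saved_nm = 250  # from the article
    for speed in (10, 14, 18):  # assumed typical merchant-ship speeds, in knots
        print(f"{speed:>2} kn: ~{hours_saved(saved_nm, speed):.0f} h saved "
              f"({saved_nm * NM_TO_KM:.0f} km avoided)")
```

At those assumed speeds, the shortcut works out to roughly 14 to 25 hours of sailing time.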
In late 2016, on a conference stage in Palm Springs, California, decision scientist Hannah Bayer made a bold declaration: "We're going to measure everything we can possibly measure about 10,000 people over the course of the next 20 years or more. We're going to sequence their genomes; track everywhere they go, everything they eat, everything they buy, everyone they interact with, every time they exercise." 1

"We" is the Human Project, born as a collaboration between two research labs at New York University — the Institute for the Interdisciplinary Study of Decision Making (a world leader in neuroeconomics) and the Center for Urban Science and Progress (ditto for urban informatics) — with startup funding from the Kavli Foundation. As you might suspect from those origins, the partners are less interested in defining the essential qualities of our species than in understanding how those qualities are operationalized. "Human," here, is an acronym: Human Understanding through Measurement and Analytics.

Any Quantified Self enthusiasts in that audience who might have relished the chance to be so intimately measured were out of luck. As the Human Project is a scientific study, it needs a representative sample. Researchers started by crunching datasets to identify 100 "micro-neighborhoods" that embody New York City's diversity, and next they will contact randomly targeted households in those areas, inviting people to join the study, "not just as volunteers, but as representatives of their communities." With promises of payment and self-enlightenment, recruiters will try to turn 10,000 human subjects into HUMANs. 2

Let's say your family volunteers. To start, you might submit blood, saliva, and stool samples, so that researchers can sequence your genome and microbiome. You could undergo IQ, mental health, personality, and memory testing; and agree to a schedule of regular physical exams, where the researchers collect more biological samples so they can track epigenetic changes. They might compile your education and employment histories, and conduct "socio-political" assessments of your voting, religious, and philanthropic activity. (As the project leaders did not respond to interview requests, I pieced together this speculative protocol from their promotional materials, academic papers, and public statements.) 3

If you don't have a smartphone, they may give you one, so they can track your location, activity, and sleep; monitor your socialization and communication behaviors; and push "gamified" tests assessing your cognitive condition and well-being. They may "instrument" your home with sensors to detect environmental conditions and track the locations of family members, so they can see who's interacting with whom, when, and where (those without a home are presumably ineligible). You may be asked to keep a food diary and wear a silicon wristband to monitor your exposure to chemicals. Audits of your tax and financial records could reveal your socioeconomic position and consumer behavior, and could be cross-referenced with your location data, to make sure you were shopping when and where you said you were.

With your permission, researchers could access new city and state medical records databases, and they could tap public records of your interaction with schools, courts, police, and government assistance programs.
They could assess your neighborhood: how safe is it, how noisy is it, how many trees are there? Finally, they could pull city data — some of it compiled and filtered by the Center for Urban Science and Progress — to monitor air quality, toxins, school ratings, crime, water and energy use, and other environmental factors.

What does all this measuring add up to? The researchers assert, "For the first time ever we are now able to quantify the human condition." By investigating "the feedback mechanisms between biology, behavior, and our environment in the bio-behavioral complex," they aim to comprehend "all of the factors that make humans … human." 4

Of course, that requires a huge leap of faith. As Steven Koonin, the theoretical physicist who founded the Center for Urban Science and Progress, observes: "What did Galileo think he was going to see when he turned his telescope on the heavens? He didn't know." 5 Now the telescope is turned inward, on the human body in the urban environment. This terrestrial cosmos of data will merge investigations that have been siloed: neuroscience, psychology, sociology, biology, biochemistry, nutrition, epidemiology, economics, data science, urban science. A promotional video boasts that the Human Project has brought together technologists, lawyers, ethicists, and "anthropologists, even!" to ask big questions. Even anthropologists! (It's notable that several relevant fields — social work, geography, and most of the humanities — don't make the list.) 6

This is the promise of big data and artificial intelligence. With a sufficiently large dataset we can find meaning even without a theoretical framework or scientific method. As Wired-editor-turned-drone-entrepreneur Chris Anderson famously declared, "Petabytes allow us to say: 'Correlation is enough.' We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot." 7 Human Project director Paul Glimcher says that collecting data on "everything we can think of" — at least everything related to biology, behavior, and environment — will allow researchers to model every imaginable "phenotype," or set of observable characteristics, both for people and the cities they inhabit. 8

Medical researchers have long harbored similar ambitions. The Framingham Heart Study (which began in 1948) and Seven Countries Study (1956) investigated the impact of diet, weight, exercise, genetics, and smoking on cardiovascular health. The Nurses' Health Study (1976) collected biospecimens and questionnaires from hundreds of thousands of nurses, to better understand how nutrition, weight, physical activity, hormones, alcohol, and smoking affect disease. The English Longitudinal Study on Aging (2002) periodically interviewed and examined participants over the age of 50, looking for correlations among economic position, physical health, disability, cognition, mental health, retirement status, household structure, social networks, civic and cultural participation, and life expectancy. Some of these studies also considered environmental aspects of public health, although they didn't have access to today's rich geospatial data.

Fast-forward to the age of smartphones and neural nets. Apple recently announced that its Health app will allow users to access personal medical records.
The company is also developing apps to aid studies and even sponsoring clinical trials. 9 Seemingly everyone is trying to break into the risky but lucrative health tech market, which offers ample opportunities for data harvesting. And many medical providers are happy to cooperate. A few years ago, Google's AI subsidiary DeepMind and London's Royal Free Hospital partnered to develop new clinical technologies, but they didn't adequately inform patients about the use of their data, and were rebuked by the British government. 10 More recently, Facebook has approached hospitals about matching anonymized patient data with social media profiles to find patterns that might inform medical treatment. Plans were "paused" last month, as the Cambridge Analytica scandal came to light. 11 When I brought up this trend in a recent lecture, one of the attendees, a health informatics researcher at a Philadelphia hospital, emphatically declared, "All of us want to work with Google." It's easy to see why. More data can lead to better care, and the potential benefits of so-called "precision medicine" are enormous.

To its credit, the Human Project is advised by privacy and security experts and has announced strategies for keeping data safe. Recruiters use videos to secure consent from subjects (some as young as seven years old) who may not understand legalese, and the FAQs state that data will be anonymized, aggregated, and protected from subpoena. According to reports, the data will be compartmentalized so that researchers have access only to the particular slice (or "data mart") relevant to a given study. These "heavily partitioned data silos" will reside in sealed zones at the project's data center in Brooklyn: a monitored green zone with limited data; a yellow zone, accessible via thumbprint and ID card, where researchers consult the anonymized data marts; and a high-security red zone, where the "crown jewels" are held. 12 It seems fitting that researchers will have to offer up their own biometrics to access their subjects' data.

Yet even if personal data are secure, methodological and ethical risks are exacerbated when university research programs are spun off into private companies. The Human Project is run through a partnership with Data Cubed, Inc., a health tech startup founded by Glimcher that aims to monetize the project tools (particularly the Phenome-X phenotyping platform) and ensure that "participants and the study benefit when for-profit companies use insights from [project] data for profitable, socially responsible work." 13 Given the stakes here, that relationship needs close scrutiny.

What's more, the blind faith that ubiquitous data collection will lead to "discoveries that benefit everyone" deserves skepticism. Large-scale empirical studies can reinforce health disparities, especially when demographic analyses are not grounded in specific hypotheses or theoretical frameworks. Ethicist Celia Fisher argues that studies like the Human Project need to clearly define "what class, race, and culture mean, taking into account how these definitions are continuously shaped and redefined by social and political forces," and how certain groups have been marginalized, even pathologized, in medical discourse and practice.
Researchers who draw conclusions based on observed correlations — untheorized and not historicized — run the risk, she says, of "attributing health problems to genetic or cultural dispositions in marginalized groups rather than to policies that sustain systemic political and institutional health inequities." 14 A recent report by Kadija Ferryman and Mikaela Pitcan at the Data & Society Research Institute shows how biases in precision medicine could threaten lives. 15 And history offers many examples of ethical problems that arise when health data circulate beyond the context of their collection. 16

We've seen such biases realized in other data-driven models, notably in law enforcement. Contemporary models of "actuarial justice" and "predictive policing" draw correlations between specific risk factors and the probability of future criminal action. Courts and police make decisions based on proprietary technologies with severe vulnerabilities: incomplete datasets, high error rates, demographic bias, opaque algorithms, and discrepancies in administration. 17 "Criminal justice management" software packages like Northpointe's dramatically overestimate the likelihood of recidivism among black defendants. 18 Even the instruments used to collect data can misfire. Biometric technologies like facial recognition software and fingerprint and retina scanners can misread people of color, women, and disabled bodies. 19 As has always been the case, race and gender determine how "identities, rather than persons, interact with the public sphere." 20

These problems are compounded as datasets are combined. Palantir software now used by some local governments merges data from disparate city agencies and external organizations, enabling police to collate information about suspects, targets, and locations. 21 In New York, for example, Palantir worked with the Mayor's Office of Data Analytics and the Office of Special Enforcement to develop a tablet application "that allows inspectors in the field to easily see everything that the City knows about a given location." 22 Key analyses, even decisions about where to deploy resources, are automated, which means that "no human need ever look at the actual raw data." 23 Biology, behavior, culture, history, and environment are thus reduced to dots on a map. End users don't know which agencies supplied the underlying intelligence and how their interests might have shaped data collection. They can't ask questions about how social and environmental categories are operationalized in the different data sets. They can't determine whether the data reinscribe historical biases and injustices.

All of this is to say that past efforts to combine vast troves of personal and environmental data should make us wary of new initiatives. As Virginia Eubanks demonstrates in Automating Inequality, "Marginalized groups face higher levels of data collection when they access public benefits, walk through highly policed neighborhoods, enter the healthcare system, or cross national borders. That data acts to reinforce their marginality when it is used to target them for suspicion and extra scrutiny. Those groups seen as undeserving are singled out for punitive public policy and more intense surveillance, and the cycle begins again.
It is a kind of collective red-flagging, a feedback loop of injustice.” 24 While the neuroeconomists on Glimcher’s project gather data on everything “that makes humans … human,” their partners in urban informatics control a voluminous flow of information on what makes New York … New York. With special access to municipal data held by many offices and agencies, researchers at the Center for Urban Science and Progress have built “one of the most ambitious Geographic Information Systems ever aggregated: a block-by-block, moment-by-moment, searchable record of nearly every aspect of the New York City Landscape.” 25 In a video promoting the Human Project, every urban scene is overlaid with a bullseye, a calibration marker, or a cascade of 0’s and 1’s, signaling an aggressive intent to render the environment as data. The partnership with CUSP may give the Human Project an advantage in the race to quantify health outcomes, but it is not the only such effort. The National Institutes of Health is building All of Us, a research cohort of one million volunteers with “the statistical power to detect associations between environment and/or biological exposures and a wide variety of health outcomes.” 26 The NIH receives data and research support from Verily Life Sciences, an Alphabet company that, in turn, runs Project Baseline, a partnership with Duke, Stanford, and Google that aims to recruit 10,000 volunteers to “share [their] personal health story” — as well as clinical, molecular, imaging, sensor, self-reported, behavioral, psychological, and environmental data — to help “map human health.” 27 Ferryman and Pitcan have diagrammed the complex topology of these projects in their Precision Medicine National Actor Map. Sidewalk Labs, another Alphabet company, recently announced Cityblock Health, which seeks to connect low-income urban residents with community-based health services, including clinics, coaches, tech tools, and “nudges” for self-care. 28 Again, the precise targeting of individual patients and neighborhoods depends on a vast dataset, including in this case Google’s urban data. All of these initiatives see public health through the lens of geography. The Human Project even refers to its emerging databank as an “atlas.” Programs like Cityblock Health conceive the urban environment not just as a background source of “exposure” or risk, but as a habitat in which biology and behavior inform one another. The qualities of this habitat affect how people make choices about diet and exercise, and how bodies respond to stress or industrial hazards. What seems to set the Human Project apart is that its researchers regard that habitat not as a given, but as something that can be rehabilitated or reengineered. Once researchers have identified relations between the city or neighborhood and the “human condition,” they can tweak or transform the habitat through urban planning, design, and policy. Their insights can also guide “the construction of future cities.” 29 Individual phenotypes are mapped to urban phenotypes, databodies to codespaces. Constantine Kontokosta, the head of CUSP’s Quantified Community project, is one of the most prominent advocates for this worldview.
He wants to “instrument” neighborhoods with sensors and engage citizens in local data collection, so that the urban environment becomes a “test bed for new technologies”; “a real-world experimental site” for evaluating policy and business plans; a 3D model for analyzing “the economic effects of data-driven optimization of operations, resource flows, and quality-of-life indicators.” Machine-learning algorithms will find patterns among data from environmental sensors and residents’ smartphones in order to define each neighborhood’s “pulse,” to determine the community’s “normal” heartbeat. 30 Here, again, we see the resurgence of biomedical metaphors in urban planning. Meanwhile, a group of Human Project-affiliated researchers at Harvard and MIT are using computer vision to assign “Streetscores,” or measurements of perceived safety, to Google Street View images of particular neighborhoods. They then combine those metrics with demographic and economic data to determine how social and economic changes relate to changes in a neighborhood’s physical appearance — its phenotype. 31 This work builds on the PlacePulse project at the MIT Media Lab, which invites participants to vote on which of two paired Street View scenes appears “livelier,” “safer,” “wealthier,” “more boring,” “more depressing,” or “more beautiful.” In such endeavors, Aaron Shapiro argues, “computer-aided, data-mined correlations between visible features, geographic information, and social character of place are framed as objective, if ‘ambient,’ social facts.” 32 The algorithmicization of environmental metrics marks the rise of what Federico Caprotti and colleagues call a new “epidemiology of the urban.” 33 The new epidemiologists echo the “smart city” rhetoric I’ve critiqued often in these pages, but now the discourse is shaded toward the dual bioengineering of cities and inhabitants. Cities have long been regarded as biophysical bodies, with their own circulatory, respiratory, and nervous systems — and waste streams. In the mid-19th century, as industrialization transformed cities and spurred their growth, physicians were developing new theories of infectious disease (e.g., miasma, filth), complete with scientific models and maps that depicted cities as unhealthy. City planners and health officials joined forces to advocate for sanitation reform, zoning, new infrastructures, street improvements, and urban parks. 34 Healthy buildings and cities were associated with certain phenotypical expressions, although designers did not always agree on the ideal form. Frederick Law Olmsted’s parks, Daniel Burnham’s City Beautiful movement, Ebenezer Howard’s Garden Cities, 1920s zoning ordinances, Modernist social housing projects and sanatoria: all promised reform, yet produced distinct morphologies. 35 As the 20th century proceeded, epidemiologists focused on germs and the biological causes of disease, while modernist architects turned toward formal concerns and rational master plans. Public health and urban planning drifted apart until the 1960s, when the environmental justice and community health center movements brought them together again. Today, initiatives like the World Health Organization’s European Healthy Cities program and New York City’s Active Design Guidelines encourage the integration of health and planning.
Now the focus is on designing cities that promote exercise and social cohesion, and that provide access to healthy food and quality housing. 36 Given the rise of artificial intelligence in both health and urban planning, we might imagine a Streetscore or “pulse” for healthy neighborhoods, which could be used to generate an algorithmic pattern language for urban design: every healthy neighborhood has one playground, two clinics, lots of fresh produce, and a bicycle path. Where do quantified humans fit in this new planning regime? Consider the fact that China is preparing to use Citizen Scores to rate residents’ trustworthiness and determine their eligibility for mortgages, jobs, and dates; their right to freely travel abroad; and their access to high-speed internet and top schools. “It will forge a public opinion environment where keeping trust is glorious,” the Chinese government proclaims. 37 This is the worst-case scenario: obedience gamified, as Rachel Botsman puts it. Humanity instrumentalized. Will the new data-driven urbanism — with its high-security data centers and black-boxed algorithms and proprietary software — usher in another era of top-down master planning in North America? Perhaps. But at least for now, most urbanists recognize that a city is more than a mere aggregation of spatial features that an AI has correlated with “wellness.” As Jane Jacobs argues, a healthy city is built on social inclusion and communication, and a shot of serendipity. Researchers affiliated with the Human Project are investigating Jacobsian questions like how economic changes affect housing and, in turn, residents’ social networks and health. 38 Others are asking how cities “encourage the free flow of information” and “how geography interacts with … knowledge” — you might say, how a city can be designed to provide the spatial conditions for a public sphere. 39 So in their rhetoric, at least, the project investigators recognize the political importance of involving communities in the research process and in the urban environments that may be reshaped by it. Kontokosta says his Quantified Community initiatives focus on the neighborhood scale in order to “connect and engage local residents” not only in data collection, but also in “problem identification, data interpretation, and problem-solving.” 40 Locals assume the role of “participatory sensors,” using their own smartphones to collect data and helping build and install ambient sensing devices. They also act as ground-truthers who verify harvested data through direct observations and experiences. On a more fundamental level, Kontokosta says he wants community members involved as research designers who help project leaders understand areas of curiosity and concern. Locals can identify the pressing problems in their neighborhoods and the sources of data that can provide insight. CUSP aims to bring communities typically excluded from “smart city” discussions into the planning process. One might hope that this would lead to a long-term personal investment in neighborhoods and interest in local planning and politics.
Self-Datafication as Civic Duty
The Human Project study design envisions that participants will be motivated by payment and by the promise of insight into their own health and their families’ medical histories. Data are currency.
41 But there’s a civic vision — and a civic aesthetic — behind this work, too. As the researchers gear up to collect data, they have rebranded the website with stock photos representing “diversity” and urban vitality, washed in New York University’s signature violet. The new logo, which evokes a circular genome map, is rendered in watercolor, humanizing all the hard science. 42 Framing the project as a “public service” may help convince New Yorkers to share their most personal data. 43 Contributors are assured that they will be more than mere research subjects; they will also be “partners” in governing the study, responsible for vetting proposals from researchers who want to use the databank. 44 They’ll receive newsletters and updates on research discoveries that their data has made possible, and they’ll have access to visualization tools that allow them to filter and interpret their own data and aggregate data for the study population. Apparently, handing over bank statements and biometrics is a form of activism, too: “instead of giving [their] data for free, to corporations,” they can “take [it] back,” “bring [it] together as a community… to make a better world.” 45 Glimcher maintains that New Yorkers will see the potential to generate new knowledge, therapeutics, and urban policy and will understand “that this is a civic project of enormous importance.” 46 Offering oneself up as data, or as a data-collector, is often framed as an act of civic duty. Participation in U.S. census and government surveys, for instance, has historically been regarded as part of the “social contract”: citizens yield their personal information, and the government uses it for the public good. 47 In the 19th century, philanthropists, researchers, and activists garnered support for social and industrial reforms by generating an “avalanche of numbers.” 48 And in the early 20th century, as the social sciences popularized new sampling methods, a swarm of surveyors and pollsters began collecting data for other purposes. According to historian Sarah Igo, these modern researchers “billed their methods as democratically useful, instruments of national self-understanding rather than bureaucratic control.” Because they had to rely on voluntary participation, they manufactured consent by emphasizing “the virtues of contributing information for the good of the whole,” for the “public sphere.” Divulging one’s opinions and behaviors to Gallup or Roper pollsters was a means of democratic participation — an opportunity to make one’s voice heard. Pollsters, unlike newspaper editors and political commentators, were “of the people,” and they argued that their methods were “even more representative and inclusive than elections.” 49 Around the same time, A.C. Nielsen, which started off in manufacturing, marketing, and sales research, began acquiring and developing technology that allowed it to monitor people’s radio-listening (and, later, TV-watching and web-surfing) behaviors. Nielsen ratings drove advertising placement and programming decisions. Commercial broadcasters, meanwhile, began funding academic studies and incorporating social-scientific research into their operations, furthering the integration of academic and industry agendas. As Willard D.
Rowland shows, the “image of certainty and irrefutability” cultivated by social scientists allowed them to “mesh neatly into the interaction of industrial, political, communications, and academic interests.” 50 Modern survey methods, Igo says, “helped to forge a mass public” and determined how that public saw itself in mediated representations. Surveys shaped beliefs about normalcy and nationality and individuality. 51 But like all methods of data-collection and analysis, those social surveys reflected and reinscribed biases. Consider the canonical Middletown Studies, sociological case studies conducted by Robert and Helen Lynd in the “typical” American city of Muncie, Indiana, in the 1920s and ’30s. Igo shows how the researchers were compelled to paint a picture of cultural wholeness and cohesion, and how they excised non-white, non-native and non-Protestant Americans from their portrait of this “representative” community. 52 We can trace these histories forward to the cutting-edge work being conducted at the Institute for the Interdisciplinary Study of Decision Making. The researchers’ overarching goal, to link decision-making to social policy, is reflected in their motto: “from neurons to nations.” Yet the extraction of neurons will never fully describe the individual subject, let alone the nation in aggregate. Even the myriad data sources collated by the Human Project cannot capture “the human condition.” As Hannah Arendt observes, the disclosure of who one is “can almost never be achieved as a willful purpose, as though one possessed and could dispose of this ‘who’ in the same manner he has and can dispose of his qualities.” Who one is, rather than what one is, is revealed to others through speech and action and physical identity. 53 Quantifying humans and habitats turns them into “whats”: into biometric entities and Streetscores. This ontological reduction inevitably leads to impoverished notions of city planning, citizenship, and civic action. Shapiro argues that because planning algorithms like Streetscore embed “indicators of deviance and normativity, worth and risk,” they promote “normative and essentialist … aesthetics.” 54 The computationally-engineered city produces the urban citizen by measuring her. Then, Caprotti argues, “you’re actually producing a subject for governance.” 55 When civic action is reduced to data provision, the citizen can perform her public duties from the privacy of a car or bedroom. If her convictions and preferences can be gleaned through an automated survey of her browser history, network analysis of her social media contacts, and sentiment analysis of her texts and emails, she needn’t even go to the trouble of answering a survey or filling out a ballot. Yet she has no idea how an artificially intelligent agent discerns “what” kind of subject she is, how it calculates her risk of heart attack or recidivism, or how those scores impact her insurance premiums and children’s school assignments. Likewise, the researchers who deploy that agent, like those now working with Palantir and Northpointe, have no need to look at the raw data, let alone develop hypotheses that might inform their methods of collection and analysis. In this emerging paradigm, neither subjects nor researchers are motivated, nor equipped, to challenge the algorithmic agenda.
Decision-making is the generation of patterns, a “pulse,” a “score” that will translate into policy or planning initiatives and social service provision. This is a vision of the city — society — as algorithmic assemblage. And this is the world where we now live. 56 All our bodies and environments are already data — both public and proprietary. 57 So how can we marshal whatever remains of our public sphere to take up these critical issues? How can we respond individually and collectively to the regime of quantitative rationalization? How might we avert its risks, even as we recognize its benefits? We can start by intervening in those venues where pattern recognition is translated to policy and planning. Wouldn’t it be better to use algorithms to identify areas and issues of concern, and then to investigate with more diverse, localized qualitative methods? After the scores are assigned and hotspots are plotted on a map, we could reverse-engineer those urban pulses, dissect the databodies, recontextualize and rehistoricize the datasets that brought them into being. To prepare for this work, the ethicists and social scientists — even anthropologists! — should call in the humanists at every stage of research: from the constitution of the study population; through the collection, analysis, and circulation of data; and finally as those datasets are marshaled to transform the world around us. Projects like NYU’s and Alphabet’s and the NIH’s could yield tremendous improvements in public health. And even in their methodological and ethical limitations, they can teach us a few things about measuring a public and the spheres in which it is constituted. The methods by which publics and public spheres become visible — to one another and to the sensors that read them — reflect the interests and ideologies of their sponsors. At the same time, these databody projects remind us that public health is a critical precondition for, and should be a regular subject of debate within, the public sphere. 58 They signal that the liberal subject has a physical body, one whose health and illness, pleasure and pain, affect and cognition, race and gender, class and sexual orientation, affect its ability to safely navigate and make itself seen and heard amidst the myriad publics that emerge across our digital and physical worlds.
<urn:uuid:817874ff-6ec5-46d6-b1cc-ec10600b324b>
CC-MAIN-2022-33
https://placesjournal.org/article/databodies-in-codespace/?utm_source=citylab-daily&silverid=NDM1MDQwNDEyMjYxS0&utm_medium=email&utm_campaign=Issue:%202018-04-19%20Smart%20Cities%20Dive%20Newsletter%20%5Bissue:14974%5D&utm_term=Smart%20Cities%20Dive
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573623.4/warc/CC-MAIN-20220819035957-20220819065957-00699.warc.gz
en
0.940963
6,926
2.765625
3
|HMS Foudroyant (1798)|
Capture of the William Tell, by Robert Dodd. Foudroyant is seen in the background.
|Ordered:||17 January 1788|
|Builder:||the dockyard at Plymouth Dock|
|Laid down:||May 1789|
|Launched:||31 March 1798|
|Fate:||Sold 1890. Foundered on Blackpool Sands, 16 June 1897.|
|General characteristics |
|Class & type:||80-gun third rate|
|Tons burthen:||2054 65⁄94 (bm)|
|Length:||184 ft 8 1⁄2 in (56.299 m)(gundeck)|
|Beam:||50 ft 6 in (15.39 m)|
|Draught:||23 ft (7.0 m)|
|Depth of hold:||22 ft 6 in (6.86 m)|
|Sail plan:||Full rigged ship|
|Complement:||650 officers and men|
HMS Foudroyant was an 80-gun third rate of the Royal Navy, one of only two British-built 80-gun ships of the period (the other was HMS Caesar (1793)). Foudroyant was built in the dockyard at Plymouth Dock (aka Devonport) and launched on 31 March 1798.[Note 2] Foudroyant served Nelson as his flagship from 6 June 1799 until the end of June 1801. Foudroyant had a long and successful career, and although she was not involved in any major fleet action, she did provide invaluable service to numerous admirals throughout her 17 years on active service. In her last years she became a training vessel for boys. Her designer was Sir John Henslow. She was named after the 80-gun Foudroyant, which Swiftsure and Monmouth, both 70-gun ships, and Hampton Court (64 guns), had captured from the French on 28 February 1758. Foudroyant was a one-off design. She followed French practice of favouring large two-decked third rates mounting 80 guns rather than the typical British preference for building three-decked second-rate ships mounting 98 guns. The two ship types, despite the difference in absolute gun numbers, had similar gun power, but the British thought the second rate had a more imposing appearance and some advantages in battle, while they considered the 80-gun ship as usually faster and less 'leewardly'.
French Revolutionary War
Foudroyant was first commissioned on 25 May 1798, under the command of Captain Thomas Byard. On 12 October Foudroyant was with the squadron under Captain Sir John Borlase Warren in Canada when it engaged a French squadron under Commodore Jean-Baptiste-François Bompart in the Battle of Tory Island. The British captured the French ship of the line Hoche and four of the eight French frigates. Foudroyant was only minimally engaged, though she did suffer nine men wounded, and went off in unsuccessful pursuit of the French frigates that had escaped. (Other British warships captured two of these frigates; two frigates and a schooner escaped completely). In 1847 the Admiralty awarded the Naval General Service Medal with clasp "12th October 1798" to all surviving claimants from the action. Byard's command lasted only until 31 October when, after bringing the ship back to Plymouth, he died. Commander William Butterfield took temporary command of the ship until he transferred to Hazard just twelve days later. Captain John Elphinstone took up command of the ship on 26 November 1798, in Cawsand Bay. Lord Keith hoisted his flag in Foudroyant on 28 November, and she departed to join the Mediterranean Squadron on 5 December. After arriving at Gibraltar, Keith shifted his flag to Barfleur on 31 December, and Captain Elphinstone left the ship the following day. His replacement was Captain James Richard Dacres. Dacres' command lasted for four months, before Captain William Brown replaced him on 22 March 1799.
On 30 March Foudroyant was among the several British warships in sight, and so entitled to share in the prize money, when Alcmene captured the Saint Joseph or Hermosa Andalusia, off Cadiz. Foudroyant sailed from Gibraltar on 11 May, calling at Port Mahon before arriving at Palermo on 7 June. At this time, Brown transferred to Vanguard, and Captain Thomas Hardy took over the command. The following day, Lord Nelson hoisted his flag in Foudroyant. Over the following months, Foudroyant was involved in the efforts to return the Neapolitan royal family to Naples. Nelson's fleet arrived in Naples on 24 June. The fleet consisted of a total of 18 ships of the line, 1 frigate and 2 fire ships.[Note 3] The British landed 500 British and Portuguese marines in support of the Neapolitans on 27 June, all under the command of Captain Sir Thomas Troubridge, of Culloden. The next day they captured the castles Ovo and Nuovo. On 29 June they commenced the siege of Fort St. Elmo. The first batteries were in place by 3 July, with the last still being constructed on 11 July. The British, Portuguese and Russian forces commenced the bombardment on 3 July and the French capitulated on 11 July, forestalling the need for an assault. On 10 July His Sicilian Majesty arrived in the Bay of Naples and immediately hoisted his standard on board the Foudroyant. There the king and his ministers remained until after the capitulation of Fort St. Elmo. A series of reprisals against known insurgents followed. The Neapolitans conducted several courts martial, some of which resulted in hangings. Whilst Foudroyant was in Naples harbour, Nelson began his affair with Emma, Lady Hamilton. Foudroyant departed Naples on 6 August, in company with the frigate Syren, and the Portuguese ship Principe Real. Foudroyant also transported the Sardinian royal family to Leghorn on 22 September. On 13 October, Foudroyant entered Port Mahon harbour, and Captain Sir Edward Berry replaced Captain Hardy as acting captain. Foudroyant was back in Palermo by 22 October. Nelson remained ashore when Foudroyant departed for Gozo on 29 October, together with Minotaur. In November, after weathering a storm in Palermo harbour, Foudroyant departed once more, this time with Culloden, and ran aground in the Straits of Messina. With Culloden's assistance, it was possible to haul the ship off and into deep water. On 6 December a large part of the 89th Regiment embarked on Foudroyant.[Note 4] The soldiers landed at St. Paul's Bay, on Malta on the 10th. Foudroyant was back at Palermo on 15 January 1800, when Lord Nelson hoisted his flag in her once again, and she sailed on to Livorno, arriving on the 21st. There Foudroyant received salutes from Danish and Neapolitan frigates, and two Russian ships of the line. Sicilian soldiers embarked on 11 February, and Foudroyant sailed the next day for Malta, in company with Alexander, Northumberland (both 74s), and Success (32). Audacious (74), and Corso (16) joined them later. On 18 February, the British squadron began a chase of a squadron of four French ships — Généreux (74), Badine (24), Fauvette (20), another corvette of 20 guns, and a fluyt. 
Alexander forced the fluyt to surrender, whilst Success engaged Généreux, and the two ships exchanged a couple of broadsides before Foudroyant came up and fired into Généreux, which struck her colours.[Note 5] It turned out that Rear-Admiral Jean-Baptiste Perrée, the commander-in-chief of the French navy in the Mediterranean, had been aboard Généreux and had been killed at the start of the action. His ships had been carrying some 4,000 troops intended to relieve Malta. Their failure to arrive significantly harmed the French hold on Malta and was a testament to the success of the British blockade of the island. British casualties amounted to one man killed and eight wounded, all on Success. At the beginning of March, Nelson remained at Palermo due to illness when on 25 March Foudroyant sailed for Malta once more with Rear-Admiral Decres on board. On 29 March, she encountered the sloop Bonne Citoyenne, and from her Berry learned that French ships were expected to leave Valletta that evening. Guillaume Tell put to sea on the evening of the 30th, where she encountered Lion and Penelope. As day broke and the scene became apparent, Foudroyant maneuvered to pistol range of the French ship — the last French survivor of Aboukir, Généreux being the only other — and joined the battle. Foudroyant's log for the Action of 31 March 1800 notes that at one point during the battle the French had nailed their colours to the stump of Guillaume Tell's mizzen mast. Still, Guillaume Tell eventually struck, but not before Foudroyant had lost her fore topmast and main topsail yard. The initial estimates put the number of dead and wounded on Lion and Foudroyant at 40 per vessel. Later in the day, Foudroyant's mizzen mast fell, having been damaged during the battle. Lion took Foudroyant in tow for a time, whilst a jury rig was set up. She entered Syracuse on 3 April. Amongst the British vessels, Foudroyant had borne the heaviest casualties with eight men killed and 61 wounded, including Berry, who was only slightly wounded and did not leave the deck during the fight. The British estimated that the French had had over two hundred casualties. On 3 June, the Neapolitan king and queen boarded Foudroyant, accompanied by Sir William Hamilton and his wife Emma. The royal family departed the ship after their arrival in Livorno on 15 June, and just two weeks later Nelson hauled down his flag and began the journey home to England overland together with the Hamiltons. Lord Keith raised his flag in Foudroyant for the second time on 15 August, returning the ship to Gibraltar on 13 September. Captain Berry transferred out of the ship on 2 November for the 38-gun frigate Princess Charlotte. Captain Philip Beaver took over the command on 17 November and sailed into the Eastern Mediterranean with a fleet of 51 vessels, many armed en flûte and carrying the 16,150 men of General Sir Ralph Abercromby's force, which was intended to drive the French out of Egypt. Still, on 22 December Foudroyant captured the French brig Hyppolite, which was carrying rice from Alexandria to Marseilles. Keith sailed from Marmarice on 22 February, arriving off Abukir Bay on 2 March. Sea conditions meant that the British were unable to land until 8 March. They met resistance from the French but by evening all the troops had landed and driven the French from the beach. The landing cost Foudroyant one man killed and one wounded. In all, the landings cost the British 22 men killed, 72 men wounded, and three missing. 
On the 13th, the landing party of seamen and marines, under the command of Captain Sir William Sidney Smith, were again in action at Mandora as the British moved towards Alexandria. Foudroyant had one man wounded. In all, the British navy lost six seamen killed and 19 wounded, and 24 marines killed and 35 wounded. Keith then used his ships to reduce the castle at the entrance of Abukir Bay, which eventually fell to the British on 18 March 1801. A French counter-attack on 21 March by some 20,000 men, although ending in defeat, caused General Abercromby a severe injury; he died aboard Foudroyant a week after the battle. In addition to the army losses, the Royal Navy lost four men killed and 20 wounded, though none were from Foudroyant. Foudroyant lay off Alexandria until June, and on 17 June Captain Beaver transferred to Determinée. His replacement was Captain William Young, who in turn was replaced by Captain T. Stephenson. Captain John Clarke Searle took command in June 1801, before handing over to Captain John Elphinstone, again, in September. In mid-August, the fleet transported the British army to Alexandria. On 26 September the French proposed a three-day armistice to discuss terms of capitulation. Because Foudroyant had served in the navy's Egyptian campaign between 8 March 1801 and 2 September, her officers and crew qualified for the clasp "Egypt" to the Naval General Service Medal that the Admiralty authorised in 1850 for all surviving claimants.[Note 6] In January 1803, Foudroyant was docked in Plymouth Dock for a somewhat major repair. The ship was recommissioned under the command of Captain Peter Spicer on 11 June. Her former captain, now Rear Admiral Sir James Richard Dacres, hoisted his flag on the same day, and remained aboard until 28 October. Two days later, Rear Admiral of the White, Sir Thomas Graves hoisted his flag. Captain Peter Puget took over the command on 27 February 1804; however, owing to a serious injury while Foudroyant served with the Channel Fleet, he was returned to England (leaving Christopher Nesham in acting command) and officially left the ship on 31 May 1805. Foudroyant returned to dock on 26 March 1804 for repairs. 24 February 1805 saw Captain Edward Kendall take over the command, and in June Foudroyant was flagship of Graves's fleet, consisting of Barfleur, Raisonnable, Repulse, Triumph, Warrior, Windsor Castle, and Egyptienne blockading the French port of Rochefort. Command of the ship passed to Captain John Erskine Douglas on 9 December temporarily, before Captain John Chambers White assumed command on the 13th. On 13 March 1806, Foudroyant was involved in an action between some ships of the fleet and two French vessels - Marengo of 80 guns, and Belle Poule of 40. Both ships were captured and taken into the navy. On 24 November Captain Richard Peacock took command of the ship, and Admiral Sir John Borlase Warren hoisted his flag in Foudroyant on 19 December. Rear Admiral Sir Albemarle Bertie raised his flag in Foudroyant on 20 May 1807, and remained in the ship until 17 November. Peacock's command passed to Captain Norborne Thompson on 31 May. Foudroyant joined with Admiral Sir Sidney Smith's squadron blockading Lisbon.[Note 7] Smith hoisted his flag in Foudroyant on 24 January 1808. Captain Charles Marsh Schomberg took command of the ship on 6 June.[Note 8] On 12 March Foudroyant parted company for South America, arriving in Rio de Janeiro in August. Captain John Davie took command on 25 January 1809, and then Captain Richard Hancock on 17 May.
Smith transferred his flag to Diana on the same day. From 25 May, Foudroyant was in company with Agamemnon, Elizabeth, Bedford, Mutine, Mistletoe and Brilliant, escorting a convoy. On 8 June they entered Maldonado Bay at the mouth of the Río de la Plata where Agamemnon struck rocks and was wrecked. Foudroyant assisted in taking off men and stores from the stricken ship and no lives were lost. Foudroyant remained in the Río area until August 1812, when she returned to England, entering Cawsand Bay on 21 October, and entering Plymouth Dock on 6 November. Hancock departed the ship on 30 November, and then Foudroyant lay at her anchor until 26 January 1815, when she was taken into dock for a large repair that lasted 4 years. When Foudroyant came out of dock in 1819, she took up her role as guard ship in Plymouth Dock (renamed Devonport 1824) until about 1860. Throughout this period she was in and out of dock on several occasions for repairs. In 1862 she was converted into a gunnery training vessel, a role she fulfilled until 1884. She was thereafter stationed at Devonport on dockyard duties, and was attached as a tender to the gunnery schoolship HMS Cambridge. She was finally placed on the Sales List in 1891 and sold out of the service the following January for £2,350. Bought by J. Read of Portsmouth, she was promptly resold to German shipbreakers. This prompted a storm of public protest. Wheatley Cobb then bought her and used the ship as a boys' training vessel. To offset the restoration cost of £20,000, it was then decided to exhibit her at various seaside resorts. In June 1897 she was towed to Blackpool. On 16 June 1897, during a violent storm, she parted a cable and, dragging the remaining anchor, went ashore on Blackpool Sands, damaging Blackpool North Pier in the process. The Blackpool lifeboat was able to rescue all 27 of her crew. After vain attempts to refloat her, her guns were removed and she was sold for £200. She finally broke up in the December gales. Craftsmen used flotsam from the wreck to make furniture, and, between 1929 and 2003, the wall panelling of the boardroom of Blackpool F.C.'s Bloomfield Road ground. The ship's bell now resides in Blackpool Town Hall. As a replacement, Cobb purchased the 38-gun frigate Trincomalee, and renamed her Foudroyant in the previous ship's honour. This Foudroyant remained in service until 1991, when she was taken to Hartlepool and renamed back to Trincomalee. - As given by Goodwin. Lavery quotes similarly, though the carronades are absent. - Goodwin (p.179) gives the launch date for Foudroyant as 31 March, 25 May, and 31 August. The text highlights this discrepancy and attributes the August date to Lyon's Sailing Navy List, published in 1993. Dates given for commissioning and other movements, which are taken from the ship's logs, indicate that the March date is correct. A painting depicting the launch, dated 25 May 1798, adds further confusion, though it is not clear from the text if the date represents the launching or the date the painting was finished. - Mutine, a brig-sloop of 16 guns should also be included in this tally. Ships known to have comprised this fleet are: Alexander, Bellerophon, Bellona, Culloden, Goliath, Leviathan, Majestic, Northumberland, Powerful, Swiftsure, Vanguard, Zealous (ships of the line); Syren (frigate); Mutine (brig-sloop). - The majority of the 89th came aboard at 0900 on 6 December, together with their women and children — 523 people in total. - The victory was of particular significance to Berry.
In 1798, after the Battle of the Nile, he was returning to England in command of Leander when he encountered Généreux. After a lopsided and courageous battle with Généreux Berry had had to surrender. Subsequently, his captors maltreated Berry and his crew. - A first-class share of the prize money awarded in April 1823 was worth £34 2s 4d; a fifth-class share, that of an able seaman, was worth 3s 11½d. The amount was small as the total had to be shared between 79 vessels and the entire army contingent. - Sidney Smith's squadron was composed of Hibernia, London, Conqueror, Elizabeth, Marlborough, Monarch, and Plantagenet. - The ship's records indicate that Captain Thompson left the ship on 3 February. The gap between him leaving the ship and Schomberg joining is not explained. - "No. 20939". 26 January 1849. https://www.thegazette.co.uk/London/issue/20939/page/ - "No. 21077". 15 March 1850. https://www.thegazette.co.uk/London/issue/21077/page/ - Lavery, Ships of the Line vol.1, p183. - "No. 15072". 21 October 1798. https://www.thegazette.co.uk/London/issue/15072/page/ - "No. 15081". 17 November 1798. https://www.thegazette.co.uk/London/issue/15081/page/ - Winfield (2008), pp. 301. - "No. 15306". 28 October 1800. https://www.thegazette.co.uk/London/issue/15306/page/ - Goodwin, p.182. - "No. 15169". 13 August 1799. https://www.thegazette.co.uk/London/issue/15169/page/ - Goodwin, p.184. - "No. 15545". 28 December 1802. https://www.thegazette.co.uk/London/issue/15545/page/ - "No. 15255". 6 May 1800. https://www.thegazette.co.uk/London/issue/15255/page/ - "No. 15242". 25 March 1800. https://www.thegazette.co.uk/London/issue/15242/page/ - "No. 15255". 6 May 1800. https://www.thegazette.co.uk/London/issue/15255/page/ - "No. 15263". 31 May 1800. https://www.thegazette.co.uk/London/issue/15263/page/ - "No. 15358". 25 April 1801. https://www.thegazette.co.uk/London/issue/15358/page/ - "No. 15362". 5 May 1801. https://www.thegazette.co.uk/London/issue/15362/page/ - "No. 15364". 15 May 1801. https://www.thegazette.co.uk/London/issue/15364/page/ - "No. 15427". 14 November 1801. https://www.thegazette.co.uk/London/issue/15427/page/ - "No. 17915". 3 April 1823. https://www.thegazette.co.uk/London/issue/17915/page/ - Goodwin, p189. - Hepper (1994), p.129. - Gossett (1986), p.125. - Gillatt, Peter (30 November 2009). Blackpool FC on This Day: History, Facts and Figures from Every Day of the Year. Pitch Publishing Ltd. ISBN 1-905411-50-2. - One such piece was featured on the BBC Antiques Roadshow, 2005, in the Portsmouth, UK episode focusing on Lord Nelson. AC200607 Lot:120-149 - Goodwin, Peter (2002) Nelson's Ships - A History of the Vessels in which he Served, 1771-1805. Conway Maritime Press. ISBN 0-85177-742-2 - Gossett, William Patrick (1986). The lost ships of the Royal Navy, 1793-1900. Mansell. ISBN 0-7201-1816-6. - Hepper, David J. (1994). British Warship Losses in the Age of Sail, 1650-1859. Rotherfield: Jean Boudriot. ISBN 0-948864-30-3. - The Capture of the Foudroyant by HMS Monmouth, 28 February 1758. National Maritime Museum, Greenwich. Retrieved 25 October 2006. - Lavery, Brian (2003) The Ship of the Line - Volume 1: The development of the battlefleet 1650-1850. Conway Maritime Press. ISBN 0-85177-252-8. - Winfield, Rif (2008). British Warships in the Age of Sail 1793–1817: Design, Construction, Careers and Fates. Seaforth. ISBN 1-86176-246-1. - Gallery of pictures, and history of HMS Foudroyant. - Phillips, Michael. Ships of the Old Navy, A History of Ships of the 18th Century Royal Navy. 
Ships of the Old Navy. Retrieved 25 October 2006. |This page uses Creative Commons Licensed content from Wikipedia.|
<urn:uuid:7beb5a0b-dc31-435c-8135-ac7d94c611c3>
CC-MAIN-2022-33
https://military-history.fandom.com/wiki/HMS_Foudroyant_(1798)
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.78/warc/CC-MAIN-20220817001643-20220817031643-00299.warc.gz
en
0.952811
5,443
2.71875
3
Remembering Howard Zinn (1922 – 2010) – by Stephen Lendman Distinguished scholar, author, political scientist, people’s historian, activist, and son of blue-collar immigrant parents, Zinn was born on August 24, 1922 in Brooklyn, New York and died in Santa Monica, CA of a reported heart attack while swimming on January 27. He’s survived by two children, Myla Kabat-Zinn and Jeff Zinn, and five grandchildren. He was 87, and a valued guest several times on The Lendman News Hour and Progressive Radio News Hour. He’ll be sorely missed. Writing in CounterPunch on January 28, journalist, author and activist Harvey Wasserman called him “above all a gentleman of unflagging grace, humility and compassion.” Interviewed on Democracy Now, his former student, author Alice Walker, said “he had such a wonderful impact on my life and on the lives of the students of Spelman and of millions of people….he loved his students.” On the same program, Noam Chomsky spoke about Zinn during the Vietnam war period saying: His book, The Logic of Withdrawal “really broke through. He was the first person to say – loudly, publicly, very persuasively – that this simply has to stop; we should get out, period, no conditions; we have no right to be there; it’s an act of aggression; pull out.” He “not only wrote about (it) eloquently, but he participated in” anti-war efforts to end the war, for civil and worker rights, and “any significant action for peace and justice. Howard was there. People saw him as a leader, but he was really a participant. His remarkable character made him a leader….” Also interviewed, author/activist Anthony Arnove said: “Howard never rested. He had such energy. And over the last few years, he continued to write, continued to speak….He wanted to bring a new generation of people into contact with the voices of dissent, the voices of protest, that they don’t get in their school textbooks, that we don’t get in our establishment media, and to remind them of the power of their own voice, remind them of the power of dissent, the power of protest….it’s incumbent upon all of us to extend and keep (his legacy) alive and vibrant.” In his book, “You Can’t Be Neutral on a Moving Train: A Personal History of Our Times,” Zinn recounted how he went “to work in a shipyard at the age of eighteen and (spent) three years working on the docks, in the cold and heat, amid deafening noise and poisonous fumes, building battleships and landing ships in the early years of the Second World War.” At age 21, he “enlist(ed) in the Air Force, (was) trained as a bombardier, fl(ew) combat missions in Europe, and later ask(ed himself) troubling questions about what (he) had done in the war.” When it ended, he “married, becam(e) a father, (went) to college under the GI Bill while loading trucks in a warehouse, with (his) wife (Roslyn: 1944 – 2008) working and (his) two children in a charity day-care center, and all of (them) living in a low-income housing project on (Manhattan’s) Lower East Side.” He got his BA from New York University, then his MA and Ph.D. 
in history and political science from Columbia University, after which he got his “first real teaching job, going to live and teach (at Spelman College) in a black community in the Deep South for seven years.” He then “move(d)….north to teach (at Boston University), and join(ed) the protests against the war in Vietnam, and (got) arrested a half-dozen times,” officially charged with “sauntering and loitering, disorderly conduct, (and) failure to quit.” He recalled speaking at “hundreds of meetings and rallies….helping a Catholic priest stay underground in defiance of the law, (and testifying) in a dozen courtrooms….in the 1970s and 1980s.” He wrote about “the prisoners (he knew), short-timers and lifers, and how (they) affected (his) view of imprisonment.” When he began teaching, he “could not possibly keep out of the classroom (his) own experiences. (In his) teaching, (he) never concealed (his) political views: (his) detestation of war and militarism, (his) anger at racial inequality, (his) belief in a democratic socialism, in a rational and just distribution of the world’s wealth. (He) made clear (his) abhorrence of any kind of bullying, whether by powerful nations over weaker ones, governments over their citizens, employers over employees, or by anyone, on the Right or the Left, who thinks they have a monopoly on the truth.” He explained mixing activism with teaching, insisting education “cannot be neutral on the crucial issues of our time, (but it) always frightened the guardians of traditional education. They prefer (it to) simply prepare the new generation to take its proper place in the old order, not to question that order.” He began every course telling students “they would be getting (his) point of view, but (he) would” encourage them to disagree. He “didn’t pretend to an objectivity that was neither possible nor desirable,” saying: “You can’t be neutral on a moving train,” explaining that “events are already moving in certain deadly directions, and to be neutral means to accept that.” For many years, he taught thousands of students. They gave him hope for the future, even though their activism was small in scale. He obsessed over “the bad news we are constantly confronted with. It surround(ed him), inundate(d him), depress(ed him) intermittently, anger(ed him).” He spoke of the poor, “so many of them in the ghettos of the nonwhite, often living a few blocks away from fabulous wealth.” He noted “the hypocrisy of political leaders, of the control of information through deception, through omission. And (that) all over the world, governments play on national and ethnic hatred.” He expressed awareness “of the violence of everyday life for most of the human race. All represented by the images of children. Children hungry. Children with missing limbs. The bombing of children officially reported as ‘collateral damage.’ “ He was frustrated that new leadership in American is no different from the old. It lacks vision, boldness and will to break from the past. They “maintain a huge military budget which distorts the economy and makes possible no more than puny efforts to redress the huge gap between rich and poor. (The result is communities) riddled with violence and despair.” And there’s no national movement to change this. People want change “but feel powerless, alone (waiting for others to) make the first move, or the second.” But historically, courageous people acted and got others to follow. 
“And if we understand this, we might make the first move.” He said he got a gift, “undeserved, just luck,” the fact that he survived the war while close buddies perished. He felt “no right to despair. (He) insist(ed) on hope,” and devoted his life to inspiring others. He explained how John Hersey’s Hiroshima report made him aware of war’s true horrors, to civilians, children, the elderly, to “see the Japanese as human beings, not simply a nation of ferocious, cruel warriors.” On a 1966 trip to the rebuilt city, he visited a House of Friendship for survivors. He saw men and women, “some without legs, others without arms, some with sockets for eyes, or with horrible burns on their faces and bodies.” He recalled his days as a bombardier, choked up, and couldn’t speak. The next year he visited the rebuilt town of Royan, France, spoke to survivors and examined documents. These two cities “were crucial in (his) gradual rethinking of what (he) had once accepted without question – the absolute morality of the war against fascism.” He began to realize that no war is just, all of them mostly harm civilians, and one side becomes indistinguishable from the other. Interviewed on Democracy Now in 2005, he reflected on participating in the Royan bombing, saying: His mission was ordered a few weeks before the war’s end….”everybody knew it was going to be over, and our armies were past France into Germany, but there was a little pocket of German soldiers hanging around this little town of Royan on the Atlantic coast of France, and the Air Force decided to bomb them.” He was on one of 1,200 heavy bombers dropping napalm on the town, its first use in Europe. “And we don’t know how many people were killed or how many people were terribly burned as a result of what we did. But I did it like most soldiers do, unthinkingly, mechanically, thinking we’re on the right side, they’re on the wrong side, and therefore we can do whatever we want, and it’s OK.” Only afterward did he learn the human effects of bombing, mostly harming civilians – including children, women, and the elderly. He flew at “30,000 feet, six miles high, couldn’t hear screams, couldn’t see blood. And this is modern warfare….soldiers fire, they drop bombs, and they have no notion, really, of what is happening to the human beings that they’re firing on. Everything is done at a distance. This enables terrible atrocities to take place.” And it’s happening now in Iraq and Afghanistan. In WW II, the German, Japanese and Italian atrocities were appalling, but allied nations did the same things – Hiroshima, Nagasaki, Royan, the fire-bombings of Dresden and Tokyo, the slaughter of civilians to break the will of the axis. The more he read, the more convinced he became that “war brutalizes everyone involved, begets a fanaticism in which the original moral factor (like fighting fascism) is buried at the bottom of a heap of atrocities committed by all sides.” By the 1960s, his former belief in “just war was falling apart.” He concluded that “while there are certainly vicious enemies of liberty and human rights in the world, war itself is the most vicious of” all. “And that while some societies can rightly claim to be more liberal, more democratic, more humane than others, the difference is not great enough to justify the massive, indiscriminate slaughter of modern warfare.” He asked shouldn’t the real motivations for war be examined. Shouldn’t the claim of fighting for democracy, liberty, a just cause, and human rights be questioned. 
Wouldn’t it be clear that all nations fight for power, privilege, wealth, territory, supremacy, national pride, and dominance of one side over others, the notions of freedom, righteousness, and innocent victims never considered. Tyranny is in the eye of the beholder when one side is as bad as the other. War isn’t inevitable, said Zinn. It doesn’t arise from an instinctive human need. Political leaders manufacture it, then use propaganda to justify it to the public and mobilize them to fight. Zinn’s “growing abhorrence of war, (his) rethinking of the justness of even ‘the best of wars, led (him) to oppose, from the start, the American war in Vietnam,” and all of them thereafter. War for him was the moral equivalent of the worst kind of terrorism. Toward the end of his life he wrote: “Wherever any kind of injustice has been overturned, it’s been because people acted as citizens, and not as politicians. They didn’t just moan. They worked, they acted, they organized, they rioted if necessary to bring their situation to the attention of people in power. And that’s what we have to do today.” His numerous books include: — LaGuardia in Congress, a book version of his doctoral dissertation; — You Can’t Be Neutral on a Moving Train: A Personal History of Our Time; — The Politics of History; — Disobedience and Democracy: Nine Fallacies on Law and Order; — Terrorism and War; — Passionate Declarations: Essays on War and Justice; — Vietnam: The Logic of Withdrawal; — The Power of Nonviolence: Writings by Advocates of Peace; — A People’s History of the United States; — Voices of a People’s History of the United States; and — A People’s History of American Empire, a pictorial, comics version of his notable book’s most relevant chapter, the centuries-long story of America’s global expansionism. The Media on Zinn’s Death The New York Times ran the AP’s report headlined, “Howard Zinn, Historian, Dies at 87,” calling him a “historian and shipyard worker, civil rights activist, World War II bombardier, and author of A People’s History of the United States, a best seller that inspired a generation of high school and college students to rethink American history….” AP also referred to Zinn’s left-wing writing, saying that even “liberal historians were uneasy with (him). Arthur M. Schlesinger Jr. once said: ‘I know he regards me as a dangerous reactionary. And I don’t take him very seriously. He’s a polemicist, not a historian.” On January 29, Times columnist Bob Herbert called him “A Radical Treasure,” what Zinn called himself. “He was an unbelievably decent man who felt obliged to challenge injustice and unfairness wherever he found it. (He) protest(ed) peacefully for important issues he believed in – against racial segregation (or) the war in Vietnam (and) at times he was beaten and arrested for doing so….He was a treasure and an inspiration. That he was considered radical says way more about this society than it does about him.” True to form, National Public Radio’s (NPR) Allison Keyes interviewed right-wing ideologue David Horowitz, a notorious bigot and progressive left opponent. As expected, he said: “There is absolutely nothing in Howard Zinn’s intellectual output that is worthy of any kind of respect. Zinn represents a fringe mentality which has unfortunately seduced millions of people at this point in time. 
So he did certainly alter the consciousness of millions of younger people for the worse.” Horowitz earlier called Zinn one of the “most dangerous academics in America.” The Washington Post’s Patricia Sullivan expressed other views, quoting him saying he focused “not on the achievements of the heroes of traditional history, but on all those people who were the victims of those achievements, who suffered silently or fought back magnificently.” She cited Noam Chomsky, a rarity in the corporate media, saying “His writings have changed the consciousness of a generation, and helped open new paths to understanding and its crucial meaning for our lives.” In her lengthy tribute, she explained that after WW II, he “gathered his Air Medal, other awards and documents and put them in a folder he labeled ‘Never again.’ ” In 2008, he said he “want(ed) to be remembered as somebody who gave people a feeling of hope and power that they didn’t have before.” CNN.com called him a “Noted author and social activist,” recounted his early years and education, then quoted his daughter, Myla Kabat-Zinn, saying her father lived a “very full and exciting life,” pursuing many social issues important to him. Above all, that he “believed that there is no ‘just war.’ ” Zinn’s contribution to a Nation magazine special on “Obama at One” said: “I’ve been searching hard for a highlight. The only thing that comes close is some of Obama’s rhetoric; I don’t see any kind of a highlight in his actions and policies.” He added that he didn’t expect much as “a traditional Democrat president (on foreign policy is) hardly any different from a Republican.” He concluded that “Obama is going to be a mediocre president – which means, in our time, a dangerous president – unless there is some national movement to push him in a better direction.” Boston Globe writers Mark Feeney and Bryan Marquard headlined, “Howard Zinn, historian who challenged status quo, dies at 87,” saying his “activism was a natural extension of the revisionist brand of history he taught.” It was “a recipe for rancor between Dr. Zinn and John Silber, former (Boston University) president. Dr. Zinn, a leading critic of Silber, twice helped lead faculty votes to oust” (him), who in turn once accused Dr. Zinn of arson (a charge he quickly retracted) and called him a “prime example of teachers ‘who poison the well of academe.’ ” The writers quoted Boston Globe columnist James Carroll, a good friend of Zinn’s for many years, calling him “simply one of the greatest Americans of our time. He will not be replaced – or soon forgotten. How we loved him back.” The London Guardian’s writer Godfrey Hodgson called him a “Radical US historian and leftwing activist who fought for peace and human rights. (As a) much-loved and much-vituperated icon of the American left, (he was) always a courageous and articulate campaigner for his vision of a just and peaceful America.” Few could deny his commitment to his core belief – “that people should stand for their rights and their vision of the good society.” For decades, Zinn did that and more with the best of the most committed.
“A People’s History of the United States”
First published in 1980, it became an extraordinary non-fiction best seller at over two million copies and counting. Its first edition was runner-up for the National Book Award. Enlightened teachers made it required high school and college reading throughout the country.
It became an acclaimed play, and, in 2003, won the Prix des Amis du Monde Diplomatique for the book’s French edition. AK Press also produced a video of readings, and the History Channel aired Zinn narrating The People Speak, a film version of noted passages of “Voices of a People’s History of the United States,” presenting the words of labor and anti-war activists, anti-racists, feminists, socialists, and others rarely heard.

In 2004, Zinn and Anthony Arnove published the above-mentioned companion volume, “Voices of a People’s History of the United States,” collecting the writings, speeches, poems, songs, and other material produced by notable figures, including Frederick Douglass, Henry David Thoreau, Upton Sinclair, Emma Goldman, Eugene Debs, Malcolm X, Martin Luther King, Jr., Leonard Peltier, Noam Chomsky, Mumia Abu-Jamal, and many others.

In its newest edition, A People’s History covers the period from 1492 to the new millennium under George W. Bush from the point of view of ordinary people, workers, minorities, the poor and disadvantaged, persecuted and oppressed, victimized, forgotten and ignored. Zinn himself wrote: It’s “a biased account, one that leans in a certain direction. I am not troubled by that, because the mountain of history books under which we all stand leans so heavily in the other direction – so tremblingly respectful of state and statesmen and so disrespectful, by inattention, to people’s movements – that we need some counterforce to avoid being crushed into submission.”

His account is exhaustive, informative, and gloriously original. Though not the first revisionist text, it’s far and away the most important given its influence on so many readers. It’s also an easy read and an important reference for checking events and facts. It explains the extermination of Native Americans, the unpopularity of the Revolutionary War, and the audacity of top leaders, including the Founders – a group of duplicitous rich white men, not populists or civil libertarians. They were politicians, lawyers, merchants, and land owners. Today, we’d call them a Wall Street crowd. Many, in fact, were slave owners, including Washington and Jefferson, who was in France at the time as ambassador.

The 55 delegates drafted a Constitution for themselves alone. Popular democracy wasn’t considered, nor was it in the Bill of Rights four years later. Property owners wanted its protections for themselves: against unreasonable searches and seizures; the right to bear arms; free expression, the press, religion, assembly and petition; due process in speedy trials; and other provisions, including the right to vote, with the other 85% of the population excluded. Women, Indians, non-property owners, and children couldn’t vote. Blacks were commodities, not people.

Stripped of its romanticism and misconceptions, the Constitution was no masterpiece of political architecture. It was the conservative document the Founders intended, so they could govern the way Michael Parenti explained: to “resist the pressure of popular tides (and protect) a rising bourgeoisie(‘s freedom to) invest, speculate, trade and accumulate wealth,” the same as today. It let the nation be governed the way politician, jurist, and first Chief Justice of the Supreme Court John Jay wanted – by “The people who own the country,” for them alone – and in times of war it lets presidents be virtual dictators. A single sentence, easily passed over or misunderstood, constitutes the essence of presidential power.
It’s from Article II, Section 1, saying: “The executive power shall be vested in a President of the United States of America.” Article II, Section 3 nonchalantly adds: “The President shall take care that the laws be faithfully executed,” omitting that presidents can also make laws through Executive Orders, Presidential Directives and other means, despite no constitutional authority to do so.

Lincoln took full advantage and did what he pleased. He provoked the Fort Sumter attack and began the Civil War for economic reasons, not to end slavery. William McKinley created a pretext for war with Spain, annexed Hawaii, colonized Puerto Rico, established a protectorate over Cuba, forced the Spanish government to cede the Philippines, occupied the country, fought a dirty war, and slaughtered hundreds of thousands of Filipinos. Theodore Roosevelt succeeded him, continued the carnage, and won a Nobel Peace Prize. In 1916, Woodrow Wilson was reelected on a pledge to “keep us out of war,” then in 1917 established the Committee on Public Information that turned a pacifist nation into raging German-haters for the war he planned to enter all along. FDR waged illegal naval warfare against Germany before Pearl Harbor and, after it, governed as a dictator. Truman atom-bombed Japan twice gratuitously when its leaders were negotiating surrender. He attacked North Korea illegally. So did Johnson and Nixon against Vietnam; Ronald Reagan against Grenada and through proxies in Central America and elsewhere; GHW Bush against Panama and Iraq; Clinton against Yugoslavia, plus eight years of genocidal sanctions against Iraq; and GW Bush against Afghanistan and Iraq, continued under Obama, expanded against Pakistan, and now in occupied Haiti for resources and other exploitative reasons.

In theory, presidents can’t violate the law, but they can interpret it as freely as they wish. Allied with, representing, chosen and controlled by powerful interests, they can operate largely unconstrained, except when one party seeks political advantage over the other.

Historians call FDR one of the nation’s greatest presidents, a widely admired democrat, a leader who freed the world from fascism. In fact, he was a conservative who partly yielded to necessity after first bailing out Wall Street. Yet he failed to end the Great Depression; did little for blacks, women, immigrants, small farmers, agricultural workers, and the poor; let blacks be persecuted, discriminated against, denied their voting rights and lynched in the South; interned Japanese, German and Italian Americans during WW II; and gave the public airwaves to private interests. He tried to save capitalism, not change America into a social democracy, and literally forced the Japanese to attack Pearl Harbor to get into the war 80% of the public opposed.

Zinn wrote this about Andrew Jackson: “If you look through high school textbooks and elementary textbooks in American history, you will find Jackson the frontiersman, soldier, democrat, man of the people – not Jackson the slaveholder, land speculator, executioner of dissident soldiers, exterminator of Indians.” Others were the same, including George Washington. He envisioned empire, called Native Americans “red savages (and) beasts of prey,” dispatched generals to slaughter them, destroy their villages, fields, food supplies, cattle herds, and orchards, seize their land, and take more of it. American imperialism today is global, for much bigger stakes, and nothing deters presidential actions.
From the start, the notion of checks and balances was largely myth. In fact, governments, especially presidents, can and repeatedly have done whatever they wished, with or without popular, congressional, or judicial approval, within or outside the law, and it’s no different today. When once asked to name a single admirable president, Zinn said there were none, given their allegiance to privilege, wealth and wars, not ordinary people and real democracy, which he called “rotten at the root, requir(ing) not just a new president or new laws, but an uprooting of the old order, the introduction of a new kind of society – cooperative, peaceful, egalitarian.”

Zinn was a people’s historian. His book pays homage to the ones history forgot and ignored. His life’s work was dedicated to inspiring new generations to work for the society he envisioned – moral, righteous, free, just, egalitarian, at peace.

Stephen Lendman is a Research Associate of the Centre for Research on Globalization. He lives in Chicago and can be reached at firstname.lastname@example.org. Also visit his blog site at sjlendman.blogspot.com and listen to the Lendman News Hour on RepublicBroadcasting.org Monday – Friday at 10AM US Central time for cutting-edge discussions with distinguished guests on world and national issues. All programs are archived for easy listening.
|Distribution map of horned lark | The horned lark or shore lark (Eremophila alpestris) is a species of lark in the family Alaudidae found across the northern hemisphere. It is known as "horned lark" in North America and "shore lark" in Europe. Taxonomy, evolution and systematics The horned lark was originally classified in the genus Alauda. The horned lark is suggested to have diverged from Temminck's lark (E. bilopha) around the Early-Middle Pleistocene, according to genomic divergence estimates. The Horned lark is known from around a dozen localities of Late Pleistocene age, including those in Italy, Russia, The United Kingdom and the United States. The earliest known fossil is from the Calabrian of Spain, around 1–0.8 million years old. In 2020, a 46,000 year old frozen specimen was described from the Russian Far East. Recent genetic analysis has suggested that the species consists of six clades that in the future may warrant recognition as separate species. A 2020 study also suggested splitting of the species, but into 4 species instead, the Himalayan Horned Lark E. longirostris, Mountain Horned Lark E. penicillata, Common Horned Lark E. alpestris (sensu stricto), alongside Temminck's Lark. - Pallid horned lark (E. a. arcticola) – (Oberholser, 1902): Found from northern Alaska to British Columbia (western Canada) - Hoyt's horned lark (E. a. hoyti) – (Bishop, 1896): Found in northern Canada - Northern American horned lark (E. a. alpestris) – (Linnaeus, 1758): Found in eastern Canada - Dusky horned lark (E. a. merrilli) – (Dwight, 1890): Found on western coast of Canada and USA - Streaked horned lark (E. a. strigata) – (Henshaw, 1884): Found on coastal southern British Columbia (western Canada) to coastal Oregon (western USA) - St. Helens horned lark (E. a. alpina) – (Jewett, 1943): Found on mountains of western Washington (northwestern USA) - Oregon horned lark (E. a. lamprochroma) – (Oberholser, 1932): Found on inland mountains of western USA - Desert horned lark (E. a. leucolaema) – Coues, 1874: Also known as the pallid horned lark. Found from southern Alberta (southwestern Canada) through north-central and central USA - Saskatchewan horned lark (E. a. enthymia) – (Oberholser, 1902): Found from south-central Canada to Oklahoma and Texas (central USA) - Prairie horned lark (E. a. praticola) – (Henshaw, 1884): Found in southeastern Canada, northeastern and east-central USA - Sierra horned lark (E. a. sierrae) – (Oberholser, 1920): Also known as the Sierra Nevada horned lark. Found on mountains of northeastern California (western USA) - Ruddy horned lark (E. a. rubea) – (Henshaw, 1884): Found in central California (western USA) - Utah horned lark (E. a. utahensis) – (Behle, 1938): Found on mountains of west-central USA - Island horned lark (E. a. insularis) – (Dwight, 1890): Found on islands off southern California (western USA) - California horned lark (E. a. actia) – (Oberholser, 1902): Found on coastal mountains of southern California (western USA) and northern Baja California (northwestern Mexico) - Mohave horned lark (E. a. ammophila) – (Oberholser, 1902): Found in deserts of southeastern California and southwestern Nevada (southwestern USA) - Sonora horned lark (E. a. leucansiptila) – (Oberholser, 1902): Found in deserts of southern Nevada, western Arizona (southwestern USA) and northwestern Mexico - Montezuma horned lark (E. a. occidentalis) – (McCall, 1851): Originally described as a separate species. 
Found in northern Arizona to central New Mexico (southwestern USA) - Scorched horned lark (E. a. adusta) – (Dwight, 1890): Found in southern Arizona and southern New Mexico (southwestern USA), possibly north-central Mexico - Magdalena horned lark (E. a. enertera) – (Oberholser, 1907): Found in central Baja California (northwestern Mexico) - Texas horned lark (E. a. giraudi) – (Henshaw, 1884): Found in coastal south-central USA and northeastern Mexico - E. a. aphrasta – (Oberholser, 1902): Found in Chihuahua and Durango (northwestern Mexico) - E. a. lactea – Phillips, AR, 1970: Found in Coahuila (north-central Mexico) - E. a. diaphora – (Oberholser, 1902): Found in southern Coahuila to northeastern Puebla (north-central and eastern Mexico) - Mexican horned lark (E. a. chrysolaema) – (Wagler, 1831): Originally described as a separate species in the genus Alauda. Found from west-central to east-central Mexico - E. a. oaxacae – (Nelson, 1897): Found in southern Mexico - Colombian horned lark (E. a. peregrina) – (Sclater, PL, 1855): Originally described as a separate species. Found in Colombia - Shore lark (E. a. flava) – (Gmelin, JF, 1789): Originally described as a separate species in the genus Alauda. Found in northern Europe and northern Asia - Steppe horned lark (E. a. brandti) – (Dresser, 1874): Also known as Brandt's horned lark. Originally described as a separate species. Found from southeastern European Russia to western Mongolia and northern China - Moroccan horned lark (E. a. atlas) – (Whitaker, 1898): This subspecies is also called "shore lark". Originally described as a separate species. Found in Morocco - Balkan horned lark (E. a. balcanica) – (Reichenow, 1895): This subspecies is also called "shore lark". Found in southern Balkans and Greece - E. a. kumerloevei – Roselaar, 1995: Found in western and central Asia Minor - Southern horned lark (E. a. penicillata) – (Gould, 1838): This subspecies is also called "shore lark". Originally described as a separate species in the genus Alauda. Found from eastern Turkey and the Caucasus to Iran - Lebanon horned lark (E. a. bicornis) – (Brehm, CL, 1842): This subspecies is also called "shore lark". Originally described as a separate species. Found from Lebanon to Israel/Syria border - Pamir horned lark (E. a. albigula) – (Bonaparte, 1850): This subspecies is also called "shore lark". Originally described as a separate species. Found from northeastern Iran and Turkmenistan to northwestern Pakistan - E. a. argalea – (Oberholser, 1902): This subspecies is also called "shore lark". Found in extreme western China - Przewalski's lark (E. a. teleschowi) – (Przewalski, 1887): This subspecies is also called "shore lark". Originally described as a separate species. Found in western and west-central China - E. a. przewalskii – (Bianchi, 1904): This subspecies is also called "shore lark". Found in northern Qinghai (west-central China) - E. a. nigrifrons – (Przewalski, 1876): This subspecies is also called "shore lark". Originally described as a separate species. Found in northeastern Qinghai (west-central China) - Long-billed horned lark (E. a. longirostris) – (Moore, F, 1856): This subspecies is also called "shore lark". Originally described as a separate species. Found in northeastern Pakistan and western Himalayas - E. a. elwesi – (Blanford, 1872): This subspecies is also called "shore lark". Originally described as a separate species. Found on southern and eastern Tibetan Plateau - E. a. 
khamensis – (Bianchi, 1904): This subspecies is also called "shore lark". Found in southwestern and south-central China

Unlike most other larks, this is a distinctive-looking species on the ground, mainly brown-grey above and pale below, with a striking black and yellow face pattern. Except for the central feathers, the tail is mostly black, contrasting with the paler body; this contrast is especially noticeable when the bird is in flight. The summer male has black "horns", which give this species its American name. North America has a number of races distinguished by the face pattern and back colour of males, especially in summer. The southern European mountain race E. a. penicillata is greyer above, and the yellow of the face pattern is replaced with white.

- Length: 6.3–7.9 in (16–20 cm)
- Weight: 1.0–1.7 oz (28–48 g)
- Wingspan: 11.8–13.4 in (30–34 cm)

Vocalizations are high-pitched, lisping or tinkling, and weak. The song, given in flight as is common among larks, consists of a few chips followed by a warbling, ascending trill.

Distribution and habitat

The horned lark breeds across much of North America from the high Arctic south to the Isthmus of Tehuantepec, northernmost Europe and Asia and in the mountains of southeast Europe. There is also an isolated population on a plateau in Colombia. It is mainly resident in the south of its range, but northern populations of this passerine bird are migratory, moving further south in winter.

This is a bird of open ground. In Eurasia it breeds above the tree line in mountains and the far north. In most of Europe, it is most often seen on seashore flats in winter, leading to the European name. In the UK it is found as a winter stopover along the coasts and in eastern England. In North America, where there are no other larks to compete with, it is also found on farmland, on prairies, in deserts, on golf courses and at airports.

Breeding and nesting

Males defend territories from other males during breeding season, and females will occasionally chase away intruding females. Courtship consists of the male singing to the female while flying above her in circles. He will then fold in his wings and dive toward the female, opening his wings and landing just before hitting the ground.

The nest site is selected in the early spring by the female alone and is either a natural depression in the bare ground or a cavity she digs using her bill and feet. She will spend 2–4 days preparing the site before building her nest. She weaves fine grasses, cornstalks, small roots, and other plant material and lines it with down, fur, feathers, and occasionally lint. The nest is about 3–4 inches in diameter, with an interior about 2.5 inches wide and 1.5 inches deep. It has been noted that she often adds a "doorstep" of pebbles, corncobs, or dung on one side of the nest, which is thought to cover the excavated dirt and better conceal the nest.

Females lay a clutch of 2–5 gray eggs with brown spots, each about 1 inch long and half an inch wide. Incubation takes 10–12 days until hatching, and the nestling period then lasts 8–10 days. During the nestling period, the chick is fed and defended by both parents. A female in the south can lay 2–3 broods a year, while in the north one brood a year is more common.
The structure of horned lark nests can vary depending on the microclimate, prevailing weather, and predation risk, revealing flexibility in nesting behaviour to adjust to changing environmental conditions and maintain nest survival and nestling development.

Status and conservation

Horned lark populations are declining according to the North American Breeding Bird Survey. In 2016, the Partners in Flight Landbird Conservation Plan listed the horned lark as a "Common Bird in Steep Decline," but as of 2016 it is not on the State of North America's Birds' Watch List. This species' decline could be attributed to habitat loss from agricultural pesticides; the disturbed sites the birds prefer reverting to forested land through reforestation efforts; urbanization and human encroachment; and collisions with wind turbines. In the open areas of western North America, horned larks are among the bird species most often killed by wind turbines. In 2013, the U.S. Fish and Wildlife Service listed the subspecies streaked horned lark as threatened under the Endangered Species Act.
- "Horned Lark Life History, All About Birds, Cornell Lab of Ornithology". www.allaboutbirds.org. - "Horned Lark". American Bird Conservancy. - "Horned Lark". Audubon. 13 November 2014. - de Zwaan, D.R.; Martin, K. (2018). "Substrate and structure of ground nests have fitness consequences for an alpine songbird". Ibis. 160 (4): 790–804. doi:10.1111/ibi.12582. - Erickson, W.P., G. D. Johnson, D. P. Young, Jr., M. D. Strickland, R.E. Good, M.Bourassa, K. Bay. 2002. Synthesis and Comparison of Baseline Avian and Bat Use, Raptor Nesting and Mortality Information from Proposed and Existing Wind Developments. Technical Report prepared for Bonneville Power Administration, Portland, Oregon. http://www.bpa.gov/Power/pgc/wind/Avian_and_Bat_Study_12-2002.pdf - "Species Fact Sheet: Streaked horned lark". U.S. Fish and Wildlife Service. 2014-08-05. Retrieved 2014-08-19. - van den Berg, Arnoud (2005) Morphology of Atlas Horned Lark Dutch Birding 27(4):256–8 - Small, Brian (2002) The Horned Lark on the Isles of Scilly Birding World 15(3): 111–20 (discusses a possible Nearctic race bird on the Isles of Scilly in 2001) - Dickinson, E.C.; R.W.R.J. Dekker; S. Eck & S. Somadikarta (2001). "Systematic notes on Asian birds. 12. Types of the Alaudidae". Zool. Verh. Leiden. 335: 85–126. - Seebohm, H (1884). "On the East-Asiatic Shore-Lark (Otocorys longirostris)". Ibis. 26 (2): 184–188. doi:10.1111/j.1474-919x.1884.tb01153.x. - Beason, Robert (1995). Horned Lark (Eremophila alpestris), version 2.0. In The Birds of North America. Ithaca, New York, USA: Cornell Lab of Ornithology. - Picture – Cyberbirding - Species account – Cornell Lab of Ornithology - "Horned lark media". Internet Bird Collection. - Horned lark – Eremophila alpestris – USGS Patuxent Bird Identification InfoCenter - Horned lark photo gallery at VIREO (Drexel University) - Interactive range map of Eremophila alpestris at IUCN Red List maps
Assignment: Human Services Organizations as Systems

Social workers use the person-in-environment approach to understand the relationship between individuals and their physical and social environments. This ecological perspective is a framework that is based on concepts associated with systems theory. Systems theory guides social workers when they assess how factors in the environment such as school, work, culture, and social policy impact the individual. Although social workers commonly use the systems approach to focus on the individual, they may apply this approach to human services organizations as well. Human services organizations exist within the context of the social, economic, and political environments, and any type of change in one aspect of the environment will influence the organization’s internal and external functioning.

For this Assignment, consider how administrators of human services organizations may apply systems theory in their work. Also, consider what you have discovered about the roles of leadership and management and how these contribute to an organization’s overall functioning.

Assignment (2–3 pages in APA format): Explain how systems theory can help administrators understand the relationships between human services organizations and their environments. Provide specific examples of ways administrators might apply systems theory to their work. Finally, explain how leadership and management roles within human services organizations contribute to their overall functioning.

Theoretical Perspectives on the Social Environment to Guide Management and Community Practice: An Organization-in-Environment Approach
Elizabeth A. Mulroy, PhD
Administration in Social Work, 28(1), 2004, 77-96. DOI: 10.1300/J147v28n01_06

ABSTRACT. This paper introduces a conceptual framework called Organization-in-Environment that is intended to help social work students, particularly those preparing for careers in management and community practice, understand the complexity of the social environment in the context of a global economy. This model is based on two assumptions. First, organizations and communities are embedded in large, complex macro systems that helped to create institutional barriers of the past. Second, organizations are civic actors with the potential to strengthen communities and change institutional inequities set in larger societal systems. Theories of social justice, the political economy, vertical and horizontal linkages, organization/environment dimensions, and interorganizational collaboration are presented and used to help analyze the model.
Case examples of privatization, gentrification, and homelessness are used to illustrate theory for practice. Finally, implications are drawn for a future-oriented practice that emphasizes external relations and their political dimensions: strategic management, interorganizational collaboration, community building, regional action, and a commitment to social justice.

Elizabeth A. Mulroy is Associate Professor, School of Social Work, University of Maryland-Baltimore, 525 West Redwood Street, Baltimore, MD 21201. The author thanks Michael J. Austin for his very helpful comments on earlier versions of the article.

KEYWORDS. Social justice, social environment theory, organizational change, social change, community theory, collaboration

The purpose of this paper is to examine the concept of the social environment and to consider some theoretical perspectives of management and community practice. The study of macro level factors begins with an examination of the social environment; namely, understanding how people interact–how they respond, adapt, and cope with family, friends, peers, and intimate others, and how they interact in less personal relationships within work organizations, schools, or associations in which a person assumes a role as citizen, producer, consumer, or client. It should then examine social norms, social institutions, and institutional arrangements–the working agreements about the distribution of wealth, power, prestige, privilege associated with race, ethnicity, gender, age, mental status, or sexual orientation. While designed to create stability for society, institutional arrangements can be a source of conflict for those who experience institutional inequities (Mulroy, 1995a).

Students of management and community practice, and in fact all social work students, need to critically examine how macro level factors affect the lives of people who live in neighborhoods and communities, especially the lives of very low-income children and their families who live in neighborhood poverty. Gephart (1997) writes:

Existing research suggests the interaction of several forces in American cities over the past fifty years has led to the increased spatial concentration of poverty, the geographic spread of concentrated poverty, and the increased clustering of poverty with other forms of social and economic disadvantage. These forces have altered the context of urban poverty at the community level and created the neighborhoods and communities of concentrated poverty . . . (1994, pp. 3-4)

The concept of the social environment becomes more holistic when we include the physical environment, especially in relation to land use and population distribution (Norlin & Chess, 1997). The question for management and community practice is how do we understand the social environment in this way, and how do we educate students to manage and change it?
While a discussion of the social environment usually begins with community theory and organization theory as if communities and organizations were separate topics, a broader and more integrated conceptual framework is needed for the educational task at hand. Communities and organizations are located in larger, complex systems as part of an ecology of shifting resources and constraints. Based on a theoretical foundation that informs this reality, the next generation of practitioners will need to:

• Identify and understand the critical strategic issues external to their organizations.
• Assess the inter-relatedness and cross-cutting impacts of the issues.
• Analyze how the issues affect their agency’s mission, purposes, resources, and operations.
• Learn which other organizations are affected across a range of community types such as geographic community and communities of interest.
• Determine which theoretical perspectives offer guidance to inform a range of practice innovations that will help to solve the presenting problems while holding firm to the overriding goal of social justice.

This article examines the social environment by building on social systems and ecological theories (not reviewed here) in order to focus on the political economy, vertical and horizontal linkages, organization-environment relations, and interorganizational collaboration. These are selected for illustrative purposes to demonstrate how they can inform macro-level practice. The goal of helping students understand the social environment is related to the following four points:

1. The social environment and the physical environment are tightly linked and intertwined.
2. Factors and relationships external to an organization are important.
3. Public policies and societal factors are continuous forces of change not only for organizations but also for the communities in which organizations are located.
4. A commitment to social justice is a core principle that frames management and community practice.

A MODEL OF ORGANIZATION-IN-ENVIRONMENT

Social justice, a core value of social work (Reamer, 2000), drives the model (see Figure 1). Social justice has historically guided reformers and social workers to re-frame the pressing social issues of the times and to engage in the complex work of finding solutions to vexing societal problems (Addams, 1910; Wald, 1915; Schorr, 1964; Schorr, 1997; Patti, 2000). Today this means confronting the rearrangement of institutional barriers that emerged in our urban areas during the past 30 years–barriers that helped to create and sustain neighborhood poverty, that continue to affect the health and well-being of residents, and that prevent the advancement of many very low-income people, especially minorities.

The starting point for most discussions of social justice is the theory of justice developed by philosopher John Rawls (1972), who proposed three guiding principles: equality in basic liberties, equality of opportunity for advancement, and positive discrimination for the underprivileged in order to ensure equity. Rawls derived these principles of justice from what he believed reasonable people, with no prior knowledge or stake in the outcome, would apply to a society in which they were to live (Ife, 1996).

Ife (1996) moves the analysis of social justice from the individual to the community level.
Following Ife’s thinking, social justice at the macro level is based on six principles: structural disadvantage, empowerment, needs, rights, peace and non-violence, and participatory democracy. He argues that unless changes are made to the basic structures of oppression, which create and perpetuate an unequal and inequitable society, any social justice strategy has limited value. “. . . all programmes that claim a social justice label need to be evaluated in terms of their relationship with the dominant forms of structural oppression, specifically class, gender, and race/ethnicity” (1996, p. 55). He believes that a specific commitment to addressing the inequalities of class, gender and race/ethnicity must be a core element of any social justice strategy, and the guiding principle of community practice (p. 56).

Harvey (1973), writing from an economic and urban perspective, states, “The evidence suggests that the forces of urbanization are emerging strongly and moving to dominate the centre stage of world history . . . We have the opportunity to create space, to harness creatively the forces making for urban differentiation. But in order to seize these opportunities we have to confront the forces that create cities as alien environments, that push urbanization in directions alien to our individual or collective purpose. To confront these forces we first have to understand them” (pp. 313-314). That is, social workers must first understand how the forces of oppression operate across a metropolitan landscape in order to devise strategies capable of bringing about lasting change.

[FIGURE 1. Organization-in-Environment: A Conceptual Framework (Mulroy, E. 2003). Panel labels: Level 3 Societal/Policy Forces; Level 2 Locality-Based Community; Level 1 Agency; IMPACTS / SOLUTIONS; Economic Globalization; Market Economy]

Levels of Influence

Figure 1 depicts a social environment in which communities and agencies are part of larger systems. The first set of arrows suggests that macro level factors Impact communities and the organizations in them. The second set of arrows suggests that organizations and communities work to find Solutions to help break down or change oppressive institutional barriers in the larger society. The circular pattern emphasizes the interconnectedness of the ideas presented (Ife, 1996).

Level 3–Societal/Policy Factors

Macro level factors include, but are not limited to, the market economy, globalization, immigration, poverty, and a range of public policies. Institutional arrangements are formulated at Level 3. These may include, for example, international real estate investment and financial lending decisions and supportive public policies related to housing and urban development; national or regional labor market needs and supportive federal policies and regulations related to immigration; medical, managed care, and health facilities decisions driven by insurance companies; or shifting national political priorities toward privatization of public services generally and the adoption of a contracting and purchase of services culture.

Political Economy. The political economy concerns the intersection of events and decisions in a community and the wider polity that have economic implications and political considerations.
For example, the political economy involves powerful elite forces that own and control economic capital, use economic re- sources to promote industrial growth, and compete for control over modes of pro- duction and resources. Land, for example, is considered an economic resource to be brought to its highest and best use. The urban political economy creates the physical environment through real estate development and the highly politicized processes of land use planning and zoning with their manifestations in state and local-level land use plans, governance, and control (Feagin, 1998; Gottdiener, 1994; Lefebrvre, 1991). In The Political Economy of the Black Ghetto, Tabb (1970) asserted that racism is perpetuated by elements of oppression within an economic and political system that must be understood as a system (p. vii). The political economy can also be applied to organizations and their environ- ments (Hasenfeld, 1983; 1992). The capacity of a human service organization to survive and to deliver services in the 21st century is based on its ability to mobi- lize power, legitimacy, and economic resources (Hasenfeld, 1992, p. 96). For nonprofit organizations this is reflected in the increased degree of dependency on resources external to their own organizations from federal and state grants and contracts, and private philanthropic grants from foundations (Gibelman, 2000; Martin, 2000). Functions of management include the acquisition of a wide range of external funding, financial control through management of multiple grants and contracts, impacts on program implementation, competition among internal programs for scarce resources, and effects on organization-wide fiscal stability (Gummer, 1990). Implications of resource dependency include the po- litical effects on nonprofit and public human service agencies when national and state budget priorities shift, and newly elected legislative bodies fail to reauthorize allocated funds for existing demonstration and other programs mid-stream in their implementation cycles (Mulroy & Lauber, 2002). The con- cept of privatization is used in the following example to illustrate the ways in which macro level factors can operate in the social environment, in this case on agencies directly. (A range of diverse macro level factors can be introduced in Level 3 for purposes of analysis.) Example: Privatization. Privatization is the shifting of service delivery from the public sector to the private for-profit and nonprofit sectors through contracts and the purchase of services. It is a market-oriented approach in which individ- ual nonprofit human service organizations compete for public funds on an un- even playing field. It increased competition first within the nonprofit sector as large and small nonprofits vied with each other for public sector contracts in a period of overall reduced federal expenditures for domestic social services. Competition then increased outside the sector as nonprofits had to compete with 82 ADMINISTRATION IN SOCIAL WORK private firms. Hard hit were community-based nonprofit organizations with so- cial change missions (Fabricant & Fisher, 2002). The for-profit sector has benefited from privatization, particularly after pas- sage of the Personal Responsibility and Work Opportunity Act of 1996. 
Highly resourced, large corporations with no ties to local communities offered large state and county agencies the chance to purchase packages of diverse services that included management information systems, welfare-to-work job training programs, Medicaid billing, case management, and direct services to recipients (Frumkin & Andre-Clark, 2000). Many smaller nonprofit human service organizations faced serious dilemmas such as being priced out of existence, scaling back services to the poorest or sickest, and proving in the short term that their interventions get results. When viewed from a social justice perspective, implications of privatization can be drawn for service equity, access, cost, continuity, and quality of care (see Gibelman & Demone, 2002). Level 2–The Geographic Community Institutional arrangements developed in Level 3 are absorbed and imple- mented in Level 2. The locality-based community can be a neighborhood, city, county, or other jurisdiction with boundaries and an interactional field (Warren, 1978) of subunits that serve collective needs. The locality-based definition of community for Level 2 was selected because it has a geographic boundary, be it a neighborhood, city, or county that students in field placement internships can readily identify. Other definitions of community can be woven in as needed (see Fellin, 2001). Vertical Links as “Windows on the World.” The pioneering work of Roland Warren (1971; 1977; 1978) provides a powerful and provocative concept for an- alyzing communities in terms of their horizontal and vertical patterns. The hori- zontal pattern is understood to be an “interactional field” that viewed community as the aggregate of people and organizations occupying a geo- graphic area whose interactions represent systemic interconnections (1978). He explicitly stated that the interactional arena was of social rather than physical space. The importance of vertical ties was that they linked community units to units outside the community, or to the macro system and thus to the larger soci- ety and culture. Such ties could have a number of aspects that were economic, thought systems or ideologies, economic roles or occupations, technologies, public behavior, common values and norms, patterns of land use, social stratifi- cation, power structures, organizational linkages, and social problems (Warren, 1978, pp. 432-437). Elizabeth A. Mulroy 83 The concept of a vertical pattern of ties is an intriguing idea to me because it introduces this question: To what extent does the strength of a community’s ver- tical ties determine the resources and support it gets from national, state, city, or county sources in an increasingly global economy? My interest in this question launched the trajectory of my own research based on the macro system ap- proach. I have attempted to systematically analyze relationships between as- pects of the macro system and community subunits (see for example, Mulroy, 2000; 1997; 1995a; 1988; Mulroy & Shay, 1997; Mulroy & Shay, 1998; Mulroy & Lauber, 2002). The reported findings suggest that a community’s physical envi- ronment is tightly linked with the social environment; patterns of land use such as urban sprawl can determine the status of a community’s health and the well-being of its residents; and in the global economy economic decisions made by multi-national firms with no national or local community affiliation or loy- alty profoundly affect both. The gentrification of a community will serve to il- lustrate these concepts. 
Example: Gentrification. Staying with the theme of neighborhood and con- centrated poverty introduced at the beginning of the paper, the concept of gentri- fication is used to illustrate two main points; namely the decline of urban neighborhoods and urban sprawl. First, the decline of many urban neighborhoods was part of a larger pattern of urbanization and sprawl that occurred over decades. Federal and state housing and urban policies, for example, are examples of vertical links that attempted to respond to urban blight in inner city neighborhoods and central business districts by targeting deteriorating commercial districts and residential neighborhoods for revitalization. Housing is a connector between the physical and social envi- ronments in all neighborhoods, including those targeted for gentrification. Housing concerns affordability, security, safety, health, neighbor and social re- lations, and confers status. The location of housing determines a household’s ac- cess to facilities, services, jobs, transportation systems, safety, and quality schools (Mulroy, 1995a; 1988). It affects the formation of social networks, and thus the ability of residents to build social and human capital (Coleman, 1988; Wilson, 1996). Federal and state housing policies require cities and counties to have land use plans, and housing is a core element. The increasingly high cost of suburban housing made the revitalized districts attractive to many people who worked in the central business district and they were enticed to move back into the urban core. The return of upper- and mid- dle-income people to the central city was an explicit public policy and an eco- nomic development goal of gentrification. New mixed-income communities were created that stabilized entire city blocks. Gentrified neighborhoods, how- ever, tended to displace and disperse many local very-low income residents and furthered their downward mobility in search of rental housing they could afford 84 ADMINISTRATION IN SOCIAL WORK (Mulroy, 1988). Most urban neighborhoods, however, did not receive public and private investments for gentrification. From the political economy perspective, the processes of urbanization such as real estate and financial lending decisions made by national and multi-na- tional firms with vertical ties to a neighborhood–and bolstered by help from sup- portive federal housing and urban development policies–changed the spatial organization of communities with serious impacts on poor neighborhoods (Feagin, 1998). For example, our understanding of where people live in a city and why they live there has traditionally been guided by concentric zone theory developed in the 1920s. Simply put, ecological processes result in city growth and development that evolve outward in five zones of concentric rings: (1) the central business district, (2) transitional manufacturing zone, (3) worker housing close to low-wage manufacturing jobs, (4) higher income housing, and (5) the suburbs. (See Fellin, 2001 for a more complete discussion.) The theory of hous- ing filtration postulates that as low-wage households in worker housing save money they would seek better housing and move out to the next residential zone, freeing up their multi-family housing for the next group of low-wage workers, typically new immigrant groups. Housing “filtered” down in this pattern of sup- ply and demand. Over time, this “filtering” of the housing market was the basis for private builders to construct new housing in the suburbs. 
Housing has always been a private market function in America, and therefore private developers ra- tionally build where the demand for expensive housing and therefore greater profits will be highest–the suburbs. It was assumed that there would always be an adequate supply of housing stock for the poor in older inner-neighborhoods (Mulroy, 1995b). Second, the effects of urban sprawl have restructured communities and nei- ther concentric zone theory nor housing filtration may work as theorized. When a neighborhood was gentrified “reverse” housing filtration took place. Neighborhoods had vertical ties to aspects of the macro system, particularly through political, economic, and organizational linkages (Warren, 1978). For example, as manufacturing wound down and firms relocated to cheaper points of production in the suburbs, rural “exurbs,” or to international locations with cheaper labor costs, inner-city plants were closed and often abandoned. Neigh- borhoods around them began to decline. Many insurance companies and banks not horizontally linked in the neighborhood’s interactional field habitually de- nied loans to home buyers and small entrepreneurs in many of these deteriorat- ing inner-city neighborhoods. Red lines were drawn on maps to identify communities in which investment was considered a bad risk. The Community Reinvestment Act of 1977 made this practice of redlining neighborhoods ille- gal, but it still persists, resulting in large pockets of urban decline. Elizabeth A. Mulroy 85 Low-income residents who lived there had limited access to jobs that paid a living wage and thus no ability to save and move out to zones with better housing and living conditions. Absentee landlords, not horizontally linked in the com- munity’s interactional field, owned most multi-family housing and apartment buildings in declining inner-city neighborhoods as investments to make money. Rather than make needed repairs, they often let buildings run down and aban- doned them. Residents had no access to capital to purchase or improve the hous- ing in which they lived, or to start or improve a business. The impacts of the flow of capital out of these neighborhoods and the absence of vertical links for posi- tive community building purposes can be seen today in urban neighborhoods rife with rising poverty, failing schools, abandoned buildings, poor public ser- vices, and increased levels of crime (Richmond, 2000). At the time these neighborhoods were in decline, highway construction pro- liferated from central business districts out to the sprawling new suburbs. Less expensive housing was built in rural areas far from central cities but near new super highways. This made it easier for commuters to get to work in the central cities but the highways cut through and divided the old working class inner-city neighborhoods in the process. Traffic congestion and air pollution increased as these new patterns of land use development were repeated across America. The point of the gentrification example is to highlight how dynamic changes in a specific geographic community are driven by external forces that may work to decrease the strength of local horizontal ties as vertical ties to dis- tant but influential and powerful sources increase. Such vertical ties, however, may have negative or positive impacts on a target community as the gentrifica- tion example illustrates. While some vertical ties served to extract capital, oth- ers were used to infuse capital and improve neighborhood conditions. 
This conceptualization helps the practitioner to monitor local community conditions in terms of the patterns of horizontal and vertical links. That analysis can then be related to: (1) the structure of the housing market relative to the availability of safe, habitable, and affordable housing, (2) location of public transit lines relative to employment for low-wage workers, (3) access to financial capital (banks, credit unions), basic needs (groceries, pharmacy, clothing stores, health clinics, public schools), and social capital (outreach offices for social services, family support centers), (4) physically safe and environmentally healthy places for children to play, and (5) culturally appropriate services for new immigrant groups.

Level 1–The Organization

Both macro level forces in Level 3 and the ways they are executed and implemented in Level 2, in turn, influence individual agencies. It is understood that many agencies are not community-oriented, but because their client groups may live in unhealthy and unsafe neighborhood environments, civic infrastructure is a matter for agency concern.

The model of Organization-in-Environment (Figure 1) makes the following three assumptions. First, the organization’s internal/external boundary is porous, so environmental surveillance and solution-finding are continuous and therefore strategic. Second, social workers need to be active community leaders at the decision-making table when complex coalitions are formed, issues framed and debated, tough political decisions made, and Solutions created (see arrows in Figure 1). Since an environment is dynamic, changes to agency structure, resource base, or functions can be anticipated not only from the organizational life cycle perspective (Hasenfeld & Schmid, 1989) but also from an ecological perspective as adaptations to the influences from Levels 3 and 2. Third, organizational behavior is guided by effectiveness, efficiency, and equity criteria. Effectiveness and efficiency are considered criteria for good internal management generally. Equity reflects the social justice criteria, and all three criteria need to be in balance as noted in Figure 1. Two theoretical perspectives are introduced next; namely, organizational-environment relations and inter-organizational collaboration.

Organizational-Environment Relations. The relationship between formal organizations and their external environments has interested a number of organizational sociologists and social work theorists for many years (Aldrich, 1979; Alter & Hague, 1993; Gummer, 1990; Hasenfeld, 1983; Lawrence & Lorsch, 1969; Schmid, 1992; 2000; Zald, 1970). Theorists once differentiated between a general environment of …
Transmission Perspective Assignment Help

QUESTION NO 01: What do you mean by the term “role” in the context of English language teaching? Distinguish between a transmission role and an interpersonal role. Justify with examples.

Increasingly, around the world, there is a move within education to adopt a constructivist view of learning and teaching. In part, the argument for this move is a reaction against teacher-centered training that has dominated much of education, particularly adult and higher education, for the past forty years or more. While I do not argue with the basic tenets of constructivism, no single view of learning or teaching dominated what might be called ‘good teaching.’ In our research, we documented five different perspectives on teaching, each having the potential to be good teaching (Pratt and Associates, 1998). This chapter will introduce those five perspectives, namely: Transmission, Developmental, Apprenticeship, Nurturing, and Social Reform. Hopefully, this will convince you to resist any ‘one size fits all’ approach to the improvement or evaluation of teaching.

What is a Perspective on Teaching?

A perspective on teaching is an inter-related set of beliefs and intentions that gives direction and justification to our actions. It is a lens through which we view teaching and learning. We may not be aware of our perspective because it is something we look through, rather than look at, when teaching. Each of the perspectives in this chapter is a unique blend of beliefs, intentions and actions. Yet, there is overlap between them. Similar actions, intentions, and even beliefs can be found in more than one perspective. Teachers holding different perspectives may, for example, have similar beliefs about the importance of critical reflection in work and educational contexts. To this end, all may espouse the use of higher-level questions as a means of promoting critical thinking. However, the way questions are asked, and the way in which teachers listen and respond when people consider those questions, may vary considerably across perspectives. These variations are also directly related to our beliefs about learning, knowledge, and the appropriate role of an instructor.

It is common for people to confuse perspectives on teaching with methods of teaching. Some say they use all five perspectives, at one time or another, depending on circumstances. On the surface, this seems reasonable. However, looking more deeply, one can see that perspectives are far more than methods. In part, this confusion derives from the fact that the same teaching actions are common across perspectives: lecturing, discussion, questioning, and a host of other methods are common activities within all five perspectives. It is how they are used, and toward what ends, that differentiates between perspectives. It could not be otherwise, given that perspectives vary in their views of knowledge, learning, and teaching.

What follows is a ‘snapshot’ of each perspective, including a metaphor for the adult learner and a set of key beliefs, primary responsibilities, typical strategies, and common difficulties. Each snapshot is a composite of many representative people. Therefore, it would be unlikely that any one individual would have all the characteristics listed for any one perspective. As you read them, try to locate yourself, not by looking for a perfect fit, but for the best fit.
A Transmission Perspective The Transmission Perspective is the most common orientation to teaching in secondary and higher education, though not in elementary and adult education. From the Transmission Perspective, effective teaching starts with a substantial commitment to the content or subject matter. It is essential, therefore, for Transmission-oriented teachers to have mastery over their content. Many who teach from this perspective hold certain assumptions and views of adults as learners. Some tend to think of the adult learner as a ‘container’ that is to be filled with something (knowledge). This knowledge exists outside the learner, usually within the text or in the teacher. Teachers are to efficiently and effectively pass along (teach) a common body of knowledge and way of thinking similar to what is in the text or the teacher. Such a process of learning is additive, meaning that teachers should take care not to overload their learners with too much information. To increase the amount that is learned, teachers should focus their presentations on the internal structure of the content. This structure can then be used as an effective means of storing and retrieving the material. With proper delivery by the teacher, and proper receptivity by the learner, knowledge can be transferred from the teacher to the learner. From the Transmission Perspective learners are expected to learn the content in its authorized or legitimate forms and teachers are expected to take learners systematically through a set of tasks that lead to mastery of the content. To do this teachers must provide clear objectives, well-organized lectures, beginning with the fundamentals, adjust the pace of lecturing, make efficient use of class time, clarify misunderstandings, answer questions, correct errors, provide reviews, summarize what has been presented, direct students to appropriate resources, set high standards for achievement and develop objective means of assessing learning. How do effective Transmission teachers accomplish this? What strategies do they use? Some Transmission strategies include the following: First, Transmission teachers spend a lot of time in preparation, assuring their mastery over the content to be presented. They also specify what students should learn (objectives) and take care to see that resources and assignments are in line with those objectives. Their goal is to pass on to learners a specific body of knowledge or skill as efficiently and effectively as possible. In order to accommodate individual differences, they vary the pace of instruction, sometimes speeding up, other times slowing down or repeating what was said. Feedback to learners is directed at errors and pointing out where learners can improve their performance. Assessment of learning is usually a matter of locating learners within a hierarchy of knowledge or skill to be learned. As with all perspectives, teachers holding Transmission as their dominant perspective have some difficulties. For example, they often find it difficult to work with people that do not understand the logic of their content. This causes difficulty anticipating where and why learners are likely to struggle with the content. In addition, many whom we studied had difficulty thinking of examples or problems from the ‘real world,’ outside the classroom, as a means of making their content come to life. And when challenged by learners, they often returned to the content as a means of dealing with those challenges. 
Finally, in our observations, it was not unusual to see Transmission teachers spend too much time talking. In fact, it seemed that many used learner responses or questions as an opportunity to talk some more. They were primarily focused on the content rather than the learners. Much of this sounds negative and, indeed, most of us can think of teachers that were less than stellar and fit well in this perspective. Transmission orientations to teaching provide some of the most common negative examples of teaching. Nevertheless, for many of us there are also positive memories of teachers in our past that were passionate about the content, animated in its delivery, and determined that we go away with respect and enthusiasm for their subject. Such an individual may have inspired us to take up a particular vocation or field of study. Their deep respect and enthusiasm for the subject was infectious. It is the memory of those teachers that must be preserved if we are to see Transmission as a legitimate perspective on teaching.

The Transmission Perspective is the most common orientation to teaching in secondary and higher education; however, it is not the most common in elementary and adult education. In order for effective teaching to occur, the transmission perspective starts with a strong commitment to the content or subject matter (Pratt, 2002). Therefore, it is necessary for Transmission perspective teachers to have mastery over their content (Pratt, 2002). The teacher’s main responsibility is to represent the content correctly and efficiently. Learners, on the other hand, are responsible for learning the content in its official or genuine form (Pratt & Collins, 2002).

The Transmission Teacher

Effective transmission orientation teaching necessitates a significant dedication to the content or subject matter (Pratt, 1999). “The instructional process should be shaped and guided by the structure of the content” (Pratt, 1999). If the teacher has not mastered the content or subject knowledge, they will not be able to share an extensive variety of examples, answer questions accurately and competently, offer valuable resources, or design assignments that emphasize course objectives—all of which are key responsibilities of the transmission teacher. Learning is considered additive in the transmission perspective; therefore, teachers should not overload their students with too much information (Pratt, 2002). According to Pratt (2002), to increase the amount learned, the teacher should focus their presentations on the internal structure of the content. This is used as an effective means of storing and retrieving the information being presented. If done properly (proper delivery by the teacher and correct receptivity by the student), knowledge can be transferred from the teacher to the student (Pratt, 2002).
Pratt (1999) offers a model of the transmission teaching perspective as described below:

Focus Within the General Model
• Dominant element/relationship: the teacher’s content credibility
• Teacher’s efficient and accurate presentation of content
• PRIMARY ROLE: content expert, skilled presenter
• Set standards for achievement
• Specify course objectives
• Select and sequence readings and assignments
• Provide clear and well-organized lectures
• Make efficient use of class time
• Provide answers to questions
• Provide direction to reading and studying
• Clarify misunderstandings
• Correct errors
• Provide reviews and summaries
• Develop objective means of assessment
• Deep respect for the content, expressed through…
• Accurate representation of content
• Enthusiasm for the content
• Encouragement of people to go on in the subject
• Organization and logic of knowledge or skills to be learned
• Mastery of pre-requisites/basics before going on

The Transmission Learner

Described below is Pratt’s (1999) account of the adult transmission learner. The common model of the adult learner: “A Sponge”
• The learner is a sponge to be filled
• The learning process is an additive process
• The product of learning is an increase in knowledge

Sponges imply degrees of saturation or fullness. What is already in the sponge may be added to, squeezed out, replaced, measured, analyzed, classified, even discarded without losing its essential character. To increase saturation (amount learned), the sponge (learner) can be submerged in a body of liquid (knowledge) and squeezed (motivated) to prepare it for greater absorption (learning). Hence, the learner and that which is learned are positioned as separate but compatible entities. The goal of learning is to get more information or knowledge, usually through a process of adding to what one already knows. The goal of education/training is to increase the learners’ knowledge base (degree of saturation) without exceeding their capacity.

Transmission of Expert Knowledge and Skills

Boldt (1998) provides a great figure that breaks down the transmission perspective nicely.

Common Difficulties
• Adjusting to individual differences
• Anticipating where (in the content) and why learners will have difficulty
• Following this view when teaching “ill-structured” parts of a task or problem, e.g., where there is more than one acceptable answer or way of thinking
• Working with people who cannot understand the content
• Using content as security/protection against ‘difficult’ learners
• Using materials from the ‘real world’ outside the classroom

Interpersonal Perspectives on Teaching

The focus is on the relationship between students and teachers. Teachers have both a direct and an indirect influence on students. As a result, they contribute to the learning environment of these students. For example, teaching behavior, teaching styles, and student perception of the learning environment have been studied and found to be related to student learning (Bennet, 1976; Brophy & Good, 1986; Fraser et al., 1991). According to Moos (1979), the relationship between students and teachers is an important dimension of class climate. Moos distinguishes three dimensions of classroom atmosphere. These three dimensions are relationships within the classroom, personal development and goal orientation, and maintenance and changes within the system. From an interpersonal perspective, it is the first dimension which interests us.
This dimension represents the nature of personal relationships within the classroom, particularly the support a teacher offers his students. Involvement and affiliation are also classified under this dimension. Based on these three dimensions, Maslowski (2003) describes class climate as ‘the collective perceptions of students with respect to the mutual relationships within the classroom, the organization of the lessons and the learning tasks of the students’. It is important to mention that the relationship between students and teachers is closely related to the classroom climate. Within the systems-theoretical perspective of communication, it is assumed that the behaviors of participants mutually influence each other. The behaviour of the teacher influences that of his students, while at the same time the behaviour of the students influences that of the teacher. In the classroom, the effects of this circular communication process can be seen, for example, in the creation and maintenance of a good classroom climate, and in the behaviours that determine the quality of relationships and feelings. The link between teacher behaviour and student behaviour (Wubbels & Levy, 1993) suggests that teachers can benefit directly from knowing how their interpersonal behaviour affects student behaviour. This mutual relationship is therefore an essential topic in this study. The complex character of the classroom environment implies that multiple perceptions are necessary to get a comprehensive image of the education process. Since perceptions are the result of an interaction between the person and his environment, they reveal how someone experiences a classroom situation.

Considering the teacher as an actor in the interpersonal relationship, this study focuses on his perception of the situation. Most teachers perceive the classroom environment more positively than their students (Brekelmans, 1989). This may be because teachers complete the questionnaire from a more idealistic perception of the context than students do. Their answers can also be geared more towards the socially desirable, or they can underestimate their influence on students. In relation to this, Brekelmans (1989) points out the difference between actual and ideal perceptions. Our study is restricted to actual perception. Teachers describe how they experience the actual educational situation. An additional explanation for the fact that teachers have a more positive perception of the classroom environment than students have may lie in differential power relationships. The fact that students’ classroom attendance is essentially involuntary can also be an important factor. A further common difficulty of the Transmission Perspective, by contrast, is its primary focus on the content rather than on the learners.

Perspectives are neither good nor bad. They are simply philosophical orientations to knowledge, learning, and the role and responsibility of being a teacher. Therefore, it is important to remember that each of these perspectives represents a legitimate view of teaching when enacted appropriately. Conversely, each of these perspectives holds the potential for poor teaching. However, if teachers are to improve, they must reflect on what they do, why they do it, and on what grounds those actions and intentions are justified. Besides resisting a ‘one size fits all’ approach to development and evaluation, how can these perspectives help in that process? For several years now, educators of adults have been admonished to reflect critically on the underlying assumptions and values that give direction and justification to their work.
For many of us this is not an easy task. What is it that we are to reflect upon? How are our underlying values and assumptions to be identified? In other words, the objects of critical reflection are not self-evident. Indeed, it is something of a twist to look not only at our teaching, but at the very lenses through which we view our teaching. In our work with educators we use these perspectives as a means of helping people identify, articulate, and, if necessary, justify their approach to teaching. In this process it also helps them thoughtfully revisit assumptions and beliefs they hold regarding learning, knowledge, and teaching. I believe this is what faculty development should be, rather than the mastery of technique. Throughout the process, pre-conceived notions of “good teaching” are challenged as educators are asked to consider what teaching means to them.

QUESTION NO. 02: Groups are made to perform certain activities in the classroom according to the teaching plan. What are the various ways of forming groups in the classroom? Justify the use of group work in the English language classroom.

What is Group Work?

Like anything in education, grouping works best when it is planned and used thoughtfully. Simply seating students in groups of four or five does not mean students are engaged with each other. It could simply mean they are going to play and talk to each other, rather than complete class work. That is why it is important to plan group work and the types of groups you will be using. Grouping students should allow, and even force, students to work together. It should build their communication skills and it should help them learn how to respectfully hold each other accountable.

Types of Groups

There are two main types of groups that teachers use when having their students work cooperatively. The first type of grouping is heterogeneous grouping. This means grouping students of different ability levels together. The definition could also be expanded to include grouping together students of different ages and races. In my classroom, I always think of it as groups of students of differing ability levels. The key word is different. I use heterogeneous grouping more frequently at the beginning of the school year so my students get to know each other, and use it less frequently as the year progresses. The second type of grouping is homogeneous grouping. It simply means grouping together students that are similar.

Who Benefits From Grouping?

When used thoughtfully, all students benefit from grouping. The key is that there must be a plan and a purpose behind group work. Simply seating students together does not constitute group work. It might just mean you are creating a classroom management problem! Teachers also benefit from group work. It allows you time to work with students in a small group setting rather than teaching the class as a whole. This will let you work more intensely with students, as well as get to know them better. Familiarize yourself with a few classroom management techniques for group work before setting your students loose. Otherwise group work will not benefit anyone.

Planning for Group Work

When planning for group work, consider what you want your students to get out of it. Do you want your students of higher ability levels to help those with lower ability levels? (Just be careful here and know your students. Make sure they will all benefit from this.) Do you want to have students of lower ability levels grouped together so you can work with them in a smaller group setting?
Do you simply want your students to get to know each other and start building community in your class? Your purpose should drive your groups.

Ways of Forming Groups in the Classroom

Here are a few fun ways of forming groups in the classroom; they are mainly focused on primary and elementary school classrooms. You can use these grouping activities to form groups for a number of games on this site.

1. Puzzle Pieces: Get enough interesting pictures from magazines for the number of groups you want to form (5 pictures for 5 groups). Cut each picture up into the number of students you would like in each group (4 pieces for 4 students in each group). Then shuffle the pieces and give one to each student; their task is then to walk around the room finding the other students whose pieces belong to the same puzzle. The completed puzzle will become their group.

2. Hit the Target Card Game: Give each student one playing card (Ace = 1, Jack = 11, Queen = 12, King = 13) and specify a target number; this number would be based on the level of mathematical understanding your students have. Tell the students that they need to form a group of a specific number of students and, using the numbers on their cards along with the mathematical symbols (x, /, +, -), make a sum with an answer as close to the target number as possible (if the target number is 34, you could get close to 34 with 5 x 3 + 9 + 11 = 35). Tell the students they all have to form a group and a sum that comes as close as possible, using the cards they have, to the target number. You could focus on a few target numbers before telling the students that the groups they have formed will be their groups for the following activity.

3. Similar Interests: This can be used as a getting-to-know-you game before another activity using the formed groups. It’s a fun and easy game to play. Give the students a topic or theme; the students then have to form groups of similar interest relating to that topic or theme. It’s up to you whether you specify how many students need to go into each group; you could start with any number and at the end specify a number based on how many you need in each group for the next activity. Here are some ideas of themes or topics. Form a group of people with the same:
– Favorite color
– Favorite TV show
– Number of family members
– Favorite pet
– Eye color
– Favorite style of music
This list could go on forever; I’m sure you’ll think of a lot more ideas.

4. Complete the Sentence: This game is great for developing comprehension and reading skills and could be used to form groups for a literacy-based activity. It’s similar to the puzzle piece game but uses sentences instead of pictures. Find some sentences (5 sentences for 5 groups). Separate each sentence into sections based on the number of students you want in each group (5 sections for 5 in each group). Give each section of a sentence to a student in the class and ask them to walk around the room and try to complete the sentence.

Trying out these ways of forming groups in the classroom will show what works best for a particular class. They are certainly very helpful for all class levels, and especially for language classrooms.
<urn:uuid:3b27b51e-9a5a-4478-9e3a-b86e6cb3b671>
CC-MAIN-2022-33
https://www.cheapassignmenthelp.co.uk/transmission-perspective-assignment-help/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571222.74/warc/CC-MAIN-20220810222056-20220811012056-00697.warc.gz
en
0.958522
4,546
3.4375
3
1. What is Gifted? The definition of gifted varies. Depending upon the definition, the target children in a gifted education program and its delivery will be different. The word "gifted*1" can imply a quality given from heaven, which indicates heritability. However, there is disagreement regarding what accounts for giftedness, heritability or environmental contexts, or a complex interaction between them. According to the federal definition, gifted students are "children and youths who give evidence of higher performance capability in such areas as intellectual, creative, artistic, or leadership capacity, or in specific academic fields, and who require services or activities not ordinarily provided by the schools in order to develop such capabilities fully." Howard Gardner explains that a child's special capability should be defined within a variety of areas: linguistic, logical-mathematical, musical, spatial, kinesthetic, interpersonal, intrapersonal, naturalistic and existential (multiple intelligences). However, the formal identification of gifted students is often done with IQ test scores and standardized tests. Regarding the population of gifted students in the United States, there is no official data from the federal government. However, the National Association for Gifted Children (NAGC), the largest organization of gifted education in the United States, estimates that there are about 3 million academically gifted children in grades K-12, which is about 6% of the total student population. Gifted education is different from a type of early learning, which is purportedly designed to create gifted children by investing a large amount of time and resources (e.g., using flash cards, DVDs, and electronic educational toys designed to stimulate the baby's brain). During the orientation for a gifted program in my daughter's school district, we learned about gifted children in comparison with bright children: Table 1 Difference between Bright Child and Gifted Learner |Bright Child||Gifted Learner| |Has good ideas.||Has wild, silly ideas.| |Works hard.||Plays around, yet tests well.| |Top group.||Beyond the group.| |6-8 repetitions for mastery.||1-2 repetitions for mastery.| |Enjoys peers.||Prefers adults.| |Enjoys school.||Enjoys learning.| |Enjoys straightforward, sequential presentation.||Thrives on complexity.| |Is pleased with own learning.||Is highly self-critical.| In Table 1, the characteristics of gifted students are described as highly curious, full of originality, excellent memorization skills, and capable of mastering skills with few repetitions. Such students don't work very hard to achieve high academic excellence. Yet, they appear to be perfectionist and highly self-critical. Because their cognitive development is more advanced than typically developing children, but often not their socio-emotional development, they often have a difficult time getting along with children of the same age. They often prefer to spend time with adults. However, there is some variability among the characteristics of gifted children. 2. Criteria for gifted programs How will gifted students be identified? It depends on the definition of giftedness by the school district and the state. Many schools use a wide range of assessments, such as IQ test scores, achievement tests, questionnaires from the classroom teacher and parents, classroom observation, documentation, and interviews. 
Although a high IQ has been the most common measure for placement, there is some skepticism regarding the reliability of IQ test scores in identifying giftedness. Yet, in the field of education practice, the data from IQ tests are frequently used because quantitative results make it easier to compare one student to another. Here are some examples of criteria that are used to identify gifted students at the school districts that the author's daughter attended*2.

1) School District A
Identification Process: WISC-IV (intelligence test), Questionnaire to the parents and classroom teacher
Explanation: This district uses a criterion of a WISC-IV score at or above the 98 percentile*3. At the state level, the 95 percentile was the cutoff. Because this district was located in a college town, the standards were higher. The questionnaire was intended to evaluate the child's language ability (e.g., "Has a large vocabulary"), learning style ("Learns and retains skills rapidly, easily, efficiently, and with little repetition"), motivation ("Becomes easily impatient with drill and routine procedures"), socio-emotional development ("Is sensitive to feelings of others or to situations"), and creativity ("Has several ideas about something instead of just one").

2) School District B
Identification Process: The Measures of Academic Progress Test (MAP), Otis Lennon School Ability Test (OLSAT*4, intelligence test), Structure of Intellect (SOI, creativity test), Questionnaire to the parent and classroom teacher
Explanation: The first step of the screening process is that a student is nominated by the parent or the teacher, but he or she has to demonstrate academic achievement at or above the 98 percentile in reading and math as measured by the Measures of Academic Progress tests. Then, the nominated student will take the cognitive/intellectual test (OLSAT). If the score is at or above the 98 percentile, the student will be eligible for the Highly Capable Program. If not, the student will take the creativity test (SOI). If that score is at or above the 98 percentile, the student will be accepted to the Highly Capable Program.

3) School District C
Identification Process: Group or individual ability tests (e.g., Otis Lennon School Ability Test, Reynolds Intellectual Screening Test, Naglieri Nonverbal Ability Test, Cognitive Ability Test), Stanford Achievement Test, Questionnaire to the parent and classroom teacher (higher grades), Teacher observations (lower grades)
Explanation: In this district, a student qualifies for the gifted program if he or she meets the following two requirements: (1) Achievement Measurements - the student scores at or above the 96 percentile for one of the subjects on the Stanford Achievement Test; (2) Ability Measurements - the student scores past the first standard deviation to the right of the norm on ability measures (about the 84 percentile).

By looking at the gifted program placement for each school district, the common denominator is intelligence tests and questionnaires for the parent and the teacher. However, there is some variance in the level of students that are targeted. For example, School Districts A and B require the student's score to be at or above the 98 percentile on the intelligence test, and therefore eligible students are far fewer than in School District C, which only requires at or above the 84 percentile. For example, as for School District B, the number of gifted students in my daughter's grade was a little less than 10%.
School District B appears to have the strictest rules for determining eligible students. The target students must be in the top 2 percent in both intellectual ability and academic achievement scores. Furthermore, the nominated student must be at or above the 98 percentile in reading and math achievement. Table 1 was distributed during the school district orientation for the gifted programs. The program is targeting children who are "beyond the group." According to a gifted program coordinator at School District B, because the district is located in a college town, they have to put the bar really high. Many of the children are from families of academics or university administrators. In fact, this school district uses a math curriculum one grade above grade level for all district children. Therefore, in order to provide sufficient service for gifted children, they had to reduce the number by creating high standards for eligibility. By contrast, selection criteria for School District C are rather lenient. They don't require a student to excel in both reading and math. In addition, the children's ability test scores, such as on the OLSAT, only have to be above the 84 percentile. According to a gifted program teacher in School District C, this district also takes into account the educational philosophy of multiple intelligences by Howard Gardner. They intentionally select students not only based on intellectual ability, but on other intelligences. In fact, at my daughter's school, among 13 classes in her grade, 4 of them were gifted classrooms. The gifted program teacher explained that they sometimes enroll students who are not really gifted but high achieving, due to pressure from parents*5.

3. Types of Gifted Programs

There are several types of gifted programs in the United States. In pull-out programs, gifted students are pulled out of a regular classroom to spend a portion of their time in a gifted class or school. For example, at School District B, gifted students meet together once a week for about 2 hours in one school, which is often different from their home campus. They spend time engaging in academic tasks which are more complicated and difficult than those in their regular classrooms. They also participate in a project of their own interest. At District A, gifted students in grades 3-5 are transported to and from the Center for Gifted Education for one full day of differentiated instruction per week. They take interdisciplinary classes, which focus on a topic or theme, such as "psychology" and "WWII." Students also work in homeroom groups throughout the school year in order to foster their socio-emotional growth. As for District C, gifted students go to gifted classrooms for the core subjects (language arts, math, science, social studies), while they share the class with other students in other subjects (e.g., physical education, music). Enrichment programs provide a broad range of advanced-level enrichment experiences. For example, gifted students sometimes receive different and more challenging homework from the classroom teacher. Or, based on their strengths and interests, they participate in events such as Spelling Bees, Science Fairs, and Math Olympics. Some schools provide extracurricular activities, such as Chess and Math Club, for students and other learners. Acceleration may take the form of skipping grades or entering school early. For example, some children start kindergarten early. This is based on the view that school readiness is not dependent on the chronological age of the child but on the abilities of the child.
In addition, some colleges offer early entrance programs that give gifted individuals of junior high and high school age the opportunity to attend college early. Many schools in the U.S., especially junior highs and high schools, use homogeneous grouping, which means that classes are organized by ability and preparedness. For example, all three of the school districts (A, B, C) offer PreAP (Pre-Advanced Placement) courses for junior high and high school students, and AP (Advanced Placement) courses for high school students. AP courses offer college-level curriculums and exams for high school students. The students can earn college credit and advanced placement through AP courses. PreAP courses offer an advanced and rigorous curriculum, which prepares students for AP classes and other challenging coursework. This system allows students to graduate from college in a shorter period of time.

Summer Enrichment Programs

In the summer there are many programs and camps for gifted children in the United States. A well-known gifted education program is The Center for Talented Youth (CTY) at Johns Hopkins University. Their summer programs are held on many university campuses throughout the United States and in other nations. Eligibility for CTY Young Students summer program courses is determined by scores earned on the above-grade-level SCAT (School and College Ability Test), either alone or in combination with CTY's Spatial Test Battery (STB).

4. Challenges of Gifted Students

The purpose of gifted education is not only to provide educational content based on the abilities of gifted students. It is also designed to reduce the challenges and difficulties they face on a daily basis. Some of the challenges are uneven development (advanced intellectual ability alongside more age-level socio-emotional and physical development), perfectionism, high adult expectations, and intense sensitivity. Here are some examples of challenging behaviors among gifted students:
- Easily gets "off task" and "off topic"
- Is easily bored
- Can become disruptive in class
- Shows strong resistance to repetitive activities and memorization
- Completes work quickly but sloppily
- May resist working on activities apart from areas of interest
- Takes on too much and becomes overwhelmed
- Challenges authority
- Does not handle criticism well
- Does not work well in groups
- Forgets homework assignments
- Can be very critical of self and others
- Is a perfectionist and expects others to be perfect as well
- Easily gets carried away with a joke
- Has a tendency to become the "class clown"
- Demonstrates strong expressive skills
- Sometimes perceived as a "know-it-all" by peers
- Is sometimes "bossy" to peers in group situations
One possible reason for such difficulties is that the class is "too easy" for them. Additionally, the topic may not be something that the gifted student is interested in. Therefore, the student is unable to focus on the task or appears to be bored, which leads others to misinterpret his or her behavior. In addition, isolation is one of the main challenges faced by gifted students because of characteristics such as extreme sensitivity, perfectionism, or learning differences. In order to fit in, some gifted students try to hide their ability or underperform. Because of social isolation and sensitivity, gifted students sometimes demonstrate anxiety and depression more than their peers. According to research conducted in the U.S., about 15-25% of gifted students drop out of high school.
The main reasons were reported as failing grades, disliking school, finding a job, or becoming pregnant. Most of them were not planning to go back to school. This tendency was much stronger for gifted students of low socio-economic status or from racial/ethnic minority groups. In particular, experts and parents need to be aware that giftedness and ADHD share some similar traits. For example, characteristics of children with ADHD, such as poor attention, low tolerance for persistence on tasks, and power struggles with authorities, are also seen in gifted children who are bored. Therefore, there is the possibility that some gifted children are misdiagnosed with ADHD. The gifted teacher in School District C expressed her concern about misdiagnosis during the interview. Children who are raised in low-income families are more likely to be misdiagnosed with ADHD because they typically do not receive enough educational stimulation at home. Those children might show some symptoms of ADHD, such as hyperactivity or excessive energy, and therefore experts recommend putting the child on ADHD medication. She often advises the parents of such a child to stop giving the ADHD medication. As a result, some of them demonstrate rapid growth in academic achievement. Recently, some children have been identified as "Twice Exceptional." The term refers to gifted children who are also identified with diagnosable conditions, such as learning disabilities, mental health problems, and neurological disabilities (e.g., ADHD, Asperger Syndrome, Dyslexia). However, because their strengths and weaknesses can mask each other, those children are very difficult to identify. They often show greater asynchrony than average children (e.g., superior vocabulary, difficulty in written expression). Finding the right school for the twice-exceptional child can be challenging for the parent. Raising professional and public awareness of twice-exceptional children is an urgent agenda.

5. What Can Japan Learn from Gifted Education in the United States?

Finally, I would like to discuss what Japan can learn from gifted education in the United States. Among all Asian countries with top-ranking PISA (an international assessment that focuses on 15-year-olds' capabilities in reading literacy, mathematics literacy, and science literacy) scores (China, Korea, Hong Kong, Singapore, Japan), Japan is the only country which has no formal policies supporting gifted education. Perhaps this is due to the Japanese educational philosophy that teaches that every child should receive an equal education and that special treatment should not be allowed. Japanese people also view gifted education negatively because it is associated with elitism. Furthermore, in Japan high academic achievement is viewed as the result of the student's effort and hard work and not the result of innate abilities. However, this cultural view is also found in other Asian countries such as China. In 2011, an autobiography on parenting, Battle Hymn of the Tiger Mother, by Yale law professor Amy Chua, prompted a huge response from readers, both positive and negative. As the daughter of a Chinese immigrant, she promoted the authoritarian, strict style of Chinese parenting, in contrast to the permissive, indulgent style of Western (United States) parenting. Through her book she explained how the Chinese parenting style could produce academic high achievers and musical prodigies.
Although there has been strong criticism toward her controlling and obsessive Chinese parenting style, there seem to be fears arising that the United States is not adequately preparing its children to survive in the global economy against countries like China and others that strive to make their children academically successful. Chua appears to believe that providing an optimal environment for the child to be successful and pressuring the child to work hard is the key to giftedness, rather than the child's innate ability, will, or interest. However, there are possible side effects of overly strict and controlling parenting styles on children's mental health (e.g., depression, high anxiety, and suicide). I must add that highly devoted and sacrificial Japanese mothers who relentlessly drove their children to study received world attention, and did so much earlier than their Chinese counterparts. Yet, Japan gradually learned that too much academic pressure on children creates social problems such as school phobia, suicide, and other behavior problems. Consequently, Japanese education reforms took place to provide "yutori kyoiku" (relaxed education) for students. This led to a decline in some basic academic skills for Japanese students in comparison to other industrial countries. Currently, Japan is once again moving toward "datsu yutori kyoiku" (anti-relaxed education) to improve academic ability. I believe that Japanese educators need to be aware of the fact that students with high capability and talent sometimes end up dropping out or failing, and/or experiencing difficulties and challenges at school, as seen in the findings from research on gifted students in the United States. In particular, the evaluation of children with disabilities, such as ADHD, should be administered comprehensively, taking into account the possibility that the child is gifted. Japanese policy makers need to keep in mind that postponing the implementation of gifted education may penalize children with high performance capability in the current education system, with its orientation toward conformity.
- *1 The names for gifted education vary. For example, two of the school districts my daughter has attended have used the wording "highly capable" and "gifted and talented."
- *2 I didn't have any idea that my daughter was gifted until she was nominated for her school district's gifted program by her classroom teacher at the end of second grade. She was raised in a family environment that prioritized care and intervention programs for her brother, who was diagnosed with autism. In addition, when she was age 3, she took the Developmental Indicators for the Assessment of Learning - Third Edition (DIAL-3) as part of the requirement for a university-affiliated preschool. She was evaluated as mildly delayed, specifically in the areas of physical and sensory function. The evaluator recommended her for Title 1 Preschool, which is a federally funded program providing services to children with developmental needs, ages three to five (non-kindergarten) years. However, because of my work schedule, she was only able to attend this compensatory program for 2 months.
- *3 According to the Merriam-Webster dictionary, "percentile is a value on a scale of 100 that indicates the percent of a distribution that is equal to or below it." In percentile scores, 50 percentile is average.
- *4 Because the OLSAT test measures the student's cognitive processing instead of his or her knowledge, the school told us that no preparation was necessary.
- *5 This may be related to the demographic characteristics of District C. The annual household income in this town is $87,670 (the average household income in the U.S. is $50,221). About 96% of the adult residents have an educational background beyond high school (the average percentage in the U.S. is 85%).
- Dai, D. Y. (2010). The nature and nurture of giftedness: A new framework for understanding gifted education. New York: Teachers College Press.
- Hoagies' Gifted Education Page (2011).
- Ibata-Arens, K. C. (2012). Race to the future: Innovations in gifted and enrichment education in Asia, and implications for the United States. Administrative Sciences, 2(1), 1-25.
- Johns Hopkins University Center for Talented Youth (2011).
- National Association for Gifted Children (2008).
- Renzulli, J. S. (2005). The three-ring conception of giftedness: A developmental model for promoting creative productivity. In R. J. Sternberg & J. E. Davidson (Eds.), Conceptions of giftedness (2nd ed., pp. 246-279). New York: Cambridge University Press.
- Robertson, E. (1991). Neglected dropouts: The gifted and talented. Equity & Excellence, 25, 62-74.
- Ryser, G. R. (2004). Qualitative and quantitative approaches to assessment. In S. K. Johnsen (Ed.), Identifying gifted students: A practical guide (pp. 23-40). Waco, Texas: Prufrock Press.
- Sternberg, R. J., Jarvin, L., & Grigorenko, E. L. (2011). Explorations in giftedness. New York: Cambridge University Press.
- Teachers and Families (2011). The exceptional child.
<urn:uuid:cc666759-2d3a-46a5-b9b9-86dcbc08a926>
CC-MAIN-2022-33
https://www.childresearch.net/projects/special/2012_01.html
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572212.96/warc/CC-MAIN-20220815205848-20220815235848-00499.warc.gz
en
0.957281
4,529
3.875
4
We have learned how the brain routes (working memory) information from the neocortex through the thalamus and back to the neocortex to create the “inner voice” or “mind’s eye.” We have also seen how the working memory is used to transfer high-level information (like “the red apple is on the table”) to the prefrontal cortex. Yet the still unanswered question is how consciousness would arise from that loop. Simply claiming that information passing through the thalamus results in our subjective experience is not a real explanation. To approach this issue, let us first examine when we think someone else is conscious. It actually requires very little proof for us to initially think of someone or something as conscious. Just having capabilities similar to those of the superior colliculus (tracking a moving object with its head and eyes) can be enough for us to automatically attribute a type of “consciousness” to a person, animal, or thing. An example is the robot “Pepper” (see Figure 6.22), which (who?) is programmed to recognize faces and move its (her?) head. Now, to be convinced that the person (or machine) in front of us is conscious, we need more than just observing the person’s (or machine’s) ability to follow an object with his (or its) eyes. In this section, we will look at how a brain (or machine) can learn and adapt to the environment instead of just reacting in very predictable ways (for example, following an object with one’s eyes). What components are required for a machine that can learn and react to its environment? For our examination of learning, we will look at a basic thermostat that regulates a room’s temperature. It starts heating by switching on the heating unit when the detected temperature drops below a certain limit, and switches off the heating unit once the temperature reaches the limit (see Figure 6.23). Obviously, if the room temperature drops quickly or if the connected heating unit takes a long time to warm up or cool down, this might lead to significant deviations from the desired temperature. Figure 6.24 shows an example in which the thermostat is initially switched on (at time t2), heats the room, and switches off when the desired temperature V2 is reached. As the heating unit itself still emits heat after the heater is turned off, the thermostat will overshoot the target temperature. Then, it undershoots the target temperature because the heating unit needs time to fully heat up. It keeps over- and undershooting the target until the desired temperature (within the margin of error defined by the error band) is reached. If our brain worked like that, we would, for example, have trouble grabbing a glass of water as we would reach too far or not far enough. To improve the programming of the thermostat, we can add a simple feedback loop which adapts internal values when noticing that the expected measurements are too high or too low. For example, the thermostat could use two variables to switch on its heating unit earlier (when it predicts the temperature to drop) or switch off its heating unit earlier (when it predicts it will overshoot the desired temperature):
- Cooldown time: This variable records the time it takes for the heating unit to cool down. If the measured room temperature ended up being too high after the heater was switched off, then the cooldown time was too short; the thermostat was switched off too late. The thermostat could increase the cooldown time a bit and perform better next time.
- Heat-up time: This variable records the time it takes for the heating unit to heat up. If the temperature ended up being too low, the heat-up time was too short; the heater should have started heating earlier or switched off the heating unit later. Again, the thermostat could adapt the heat-up time.
With the cooldown timer and heat-up timer in place, we have created a basic system that can learn over time (see Figure 6.25; a minimal code sketch of this mechanism is shown below). The thermostat could keep the room temperature closer to the desired value (and save energy) by starting or stopping heating earlier or later. For example, if the desired temperature is 65 °F, the thermostat probably has to switch off the heater some time before the room temperature reaches 65 °F because the heating unit is still hot when switched off. Using variables and adapting them depending on the outcome of an action can be applied to problems of any size. As it requires interaction with and feedback from the environment, this type of learning is also called supervised learning, as some sort of “supervisor” is involved to decide whether or not a decision was productive. In the case of the thermostat, the supervisor was the function that checked whether or not the desired temperature had been reached (“Did the temperature in the room rise above the desired value?” and “Did the temperature in the room fall below the desired value?”, respectively). If the desired temperature was missed, the thermostat failed in its process and the heat-up or cooldown timers needed to be adjusted. Supervised learning Using supervised learning, a brain (or computer) can improve its response to a situation with each new encounter. For example, a dog can learn to sit or roll over on command by getting positive rewards for doing so during training. The same principle that applies to the thermostat also applies to, for example, learning to throw a ball at a target. The brain records which chain of neurons is responsible for an action. If you observe yourself missing the target, the erroneous chain of neurons is weakened and you might throw the ball differently the next time. If you hit your target, the successful chain is strengthened and you will be more likely to throw the ball the same way the next time. That means that the next time, you are less likely to miss or more likely to hit the target, respectively. The major downside of supervised learning is that it always requires interacting with the environment and observing the outcome. The two variables represent the behavior of the thermostat (when to start or stop heating), not the behavior of the room’s air temperature. Hence, while the feedback loop of the learning thermostat we discussed above reduces the time to reach a stable temperature in a room, it requires a trial-and-error approach. As such, a system based on trial and error is like a black box into which you cannot look. You could neither open it to learn the reasoning behind each activation or deactivation of the heating unit, nor could you tell the thermostat to update its heat-up and cooldown time according to new coordinates, room size, or window configuration. In our brain, one of the “supervisors” is the amygdala, which has mapped sense data to positive and negative emotions, with positive emotions strengthening a chain of neurons, and negative emotions weakening it. In turn, the amygdala itself learns this mapping of sense data to positive and negative emotions with its own supervisor.
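To make the feedback loop above concrete, here is a minimal Python sketch of such a learning thermostat. Everything in it is an illustrative assumption rather than code from this book: the names (LearningThermostat, heat_up_offset, cooldown_offset), the half-degree switching band, and the toy room physics in simulate() are made up purely to show the trial-and-error adjustment in its smallest form.

```python
# A minimal sketch (assumed names and numbers) of the trial-and-error
# thermostat described above: it controls the heater with simple on/off
# logic, shifted by two learned offsets, and nudges those offsets whenever
# it observes that the room temperature over- or undershot the target.

class LearningThermostat:
    def __init__(self, target, step=0.05):
        self.target = target            # desired room temperature
        self.heat_up_offset = 0.0       # switch ON this many degrees earlier
        self.cooldown_offset = 0.0      # switch OFF this many degrees earlier
        self.step = step                # adjustment size after each observed error
        self.heating = False

    def decide(self, temperature):
        """Plain on/off control, shifted by the two learned offsets."""
        if not self.heating and temperature < self.target - 0.5 + self.heat_up_offset:
            self.heating = True
        elif self.heating and temperature > self.target - self.cooldown_offset:
            self.heating = False
        return self.heating

    def learn(self, observed_peak, observed_trough):
        """The 'supervisor': compare observed extremes with the target and
        nudge the offsets -- the trial-and-error loop from the text."""
        if observed_peak > self.target:          # overshot -> switch off earlier
            self.cooldown_offset += self.step
        if observed_trough < self.target - 1.0:  # undershot -> switch on earlier
            self.heat_up_offset += self.step


def simulate(thermostat, hours=48):
    """Toy room physics (purely illustrative): heater power ramps up and down
    slowly, the room constantly loses heat, and the thermostat learns from
    the peaks and troughs it observes every few hours."""
    temperature, heater_power = 15.0, 0.0
    peak = trough = temperature
    for minute in range(hours * 60):
        heating = thermostat.decide(temperature)
        heater_power += 0.02 if heating else -0.02   # lag: power changes slowly
        heater_power = min(max(heater_power, 0.0), 1.0)
        temperature += 0.05 * heater_power - 0.01    # heating minus heat loss
        peak, trough = max(peak, temperature), min(trough, temperature)
        if minute > 0 and minute % 180 == 0:         # every 3 hours: adjust offsets
            thermostat.learn(peak, trough)
            peak = trough = temperature
    return thermostat


learned = simulate(LearningThermostat(target=20.0))
print(learned.heat_up_offset, learned.cooldown_offset)
```

Note that, exactly as described above, the two offsets describe only the thermostat’s behavior, not the room: the sketch can tune them by trial and error, but it cannot explain its switching decisions or transfer what it has “learned” to a different room.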
The amygdala’s supervisor is the experience of physical pain or pleasure, a hardwired mechanism in our body. In turn, pain and pleasure evolved as each proved to be advantageous for the fitness of lifeforms to prevent damage and encourage procreation and locating food. In summary, decisions of our neocortex are supervised by the emotional mappings in our amygdala. The amygdala learns those emotional mappings with the help of the pain and pleasure mechanism of the body, which in turn evolved over generations by selection. The neurons in the amygdala (and in other parts of the brain) learn by what is called “backpropagation,” which is comparable to a bucket brigade where items are transported by passing them from one (stationary) person to the next. This method was used to transport water before hand-pumped fire engines; today, it can be seen at disaster recovery sites where machines are not available or not usable. To encourage this behavior, everyone in the chain is later honored, not just the last person of the bucket brigade who is actually dousing the fire. In neuronal learning, the last neuron back propagates the reward from end to start, allowing for the whole “bucket brigade” or chain of neurons to be strengthened. This encourages similar behavior in the future—it is the core of learning. A good example for supervised learning is training a dog to sit on command. The learning cycle is as follows: First, you give the dog treats. Using the pain and pleasure mechanism, the dog’s amygdala connects the sense data (receiving a treat) with a positive emotion. She has learned that your treats are good. Next, you let her stand while you wait until she sits down. While waiting, you repeatedly tell her to sit. Of course, she does not understand what you are saying. Eventually, she will get tired and sit down. Then, you give her a treat. Her amygdala will link the command “sit,” sitting down, and receiving a treat. In time, she will feel happy about not only receiving a treat, but also sitting on command. What would happen if we programmed such a system into a computer and gave it a language module to speak with us? In the wake of the development of computer technology, Alan Turing asked this question in 1950 [Oppy and Dowe, 2019]: How could we test whether a computer is as intelligent as or indistinguishable from a human? He proposed the so-called Turing test, in which a person is put in a room with a computer terminal. Using only a textual chat interface, the person has to figure out whether or not his chat partner is a computer program or a human being. If the person interacting with the computer cannot give a clear answer (or is convinced that the computer program is human), the machine has passed the Turing test. Turing test In 1950, Alan Turing proposed the Turing test to assess whether or not a machine is intelligent. In the test, a human participant would observe a text chat between a computer and a human. The machine would pass if the observer could not tell who was the machine and who was the human. What questions would you ask to determine whether you are talking with another person or a computer? You could run through questions of a basic intelligence test but that would not answer the question of whether the computer has human-level intelligence and a sense of “self,” only that it is intelligent. 
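Before returning to the Turing test, the “bucket brigade” picture of reward propagation can be made concrete in a few lines of Python. This is a deliberately simplified sketch with assumed names and numbers, not the gradient-based backpropagation algorithm used in modern neural networks: after a chain of units produces an action, the outcome is handed backwards along the chain, strengthening every link that contributed to a success and weakening every link that contributed to a failure.

```python
# Toy sketch (assumed names and numbers) of reward passed backwards along a
# chain of units, like buckets handed down a brigade: the unit closest to the
# outcome receives the largest share, earlier units progressively less.

def propagate_reward(chain_weights, reward, learning_rate=0.1, decay=0.9):
    """Strengthen (positive reward) or weaken (negative reward) every link in
    the chain, starting from the last unit and passing a smaller share back."""
    share = reward
    for i in reversed(range(len(chain_weights))):
        chain_weights[i] += learning_rate * share
        share *= decay        # each earlier "person in the brigade" gets less
    return chain_weights

# A chain of five units led to a hit (+1), so the whole chain is strengthened...
print(propagate_reward([0.5, 0.5, 0.5, 0.5, 0.5], reward=+1.0))
# ...while a chain that led to a miss (-1) is weakened.
print(propagate_reward([0.5, 0.5, 0.5, 0.5, 0.5], reward=-1.0))
```

Crucially, the chain only records that its action turned out well or badly; like the thermostat’s offsets, it stores no explicit model of why, which is the limitation explored in the following paragraphs.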
Likewise, if you straightforwardly asked whether or not your chat partner has an inner experience, the computer could simply respond with a programmed response of “Yes, I am conscious of my inner experience.” With this line of questioning, it seems that all we can find out is how well-prepared or well-programmed the other side is, not whether or not the chat partner is conscious. It seems that our intuitive understanding of consciousness depends primarily on the observer attributing consciousness to the person or entity. Given that with the Turing test, the arbiter of whether or not a computer is conscious is always the subjective opinion of a test person, this is not surprising. Because answers to pre-determined questions could be prepared beforehand, we could test the flexibility of our conversation partner by referring to the conversation itself. For example, we could talk about food preferences we like and then ask the chat partner what dish it (he?) would recommend. To answer this question, there is no general rule as each person is different. The machine (or person) would have to evaluate our preferences throughout the conversation to build an idea about what we like or dislike. This is similar to programming a chess computer: you can program the opening moves into the computer, but once the game diverts from the database of memorized positions, the computer needs to rely on actually playing chess and predicting your moves. For example, we could bring up a statement like “When you ordered me a pepperoni pizza yesterday, I told you how great it was. I lied. I actually prefer pizza with tuna.” A computer program with supervised learning based on neural committees with backpropagation (see Figure 6.26) could not do anything with updated information during a conversation. It could only correlate two different sense data and evaluate whether or not that response was positive or negative. For example, it could take into consideration your immediate positive reaction when it had ordered you a pepperoni pizza. However, modifying its own neural committees based on specific new information (that you like tuna instead of pepperoni) is impossible as it is a black box. All it could do is to evaluate its action of ordering you a pepperoni pizza in general as negative and then (hopefully) find out through trial and error that you actually like tuna. It could not order you specifically a tuna pizza because it can only learn through action and reaction, not through abstract information. It could only decide to order you another random pizza. This of course sounds anything but intelligent. Why could the computer not just order a tuna pizza? Well, the same question could be asked about the thermostat: why could the thermostat not just understand that we moved it to another room? The challenge is that we are quick to attribute abilities to a machine that it does not have. Just because the computer could speak to us does not mean it can react to information like we do. In summary, supervised learning alone is not enough to react flexibly within a conversation and pass the Turing test. This leads us to a different approach to learning, which does not need direct interaction with the environment, namely the so-called “unsupervised learning” method. How can we gain new knowledge about the world without relying on trial and error? 
The difference between supervised learning and unsupervised learning is that the former requires trial and error, while the latter just needs sense data to build a model of the world. Scientists developing automated machines have shown that to control a certain variable, “every good regulator of a system [needs to run] a model of that system” [Conant and Ashby, 1970]. For example, to keep the room at a certain temperature (the variable), the thermostat (the regulator) needs an internal model of the system (the heating unit and the room it heats—the system). Our initial action depends on whether or not we are familiar with the involved entities. When encountering a new object, we are first inclined to go around it, touch it, or put things on it until we have formed a basic idea what that object is. As discussed in Philosophy for Heroes: Knowledge, we create the concept of, for example, a table by looking at many different tables, ending up with properties like number of legs, size, material, and shape. When deciding whether or not a table would fit into our living room, our brain relies on the spatial understanding of the particular table. For this, our brain adds the specific measurements or values of the properties of the concept, forming a model of the table in our mind. Model The model of an entity is a simplified simulation of that entity. It consists of the entity’s concepts and its properties, as well as some of the entity’s measurements. We have already discussed examples for the brain building such models, for example, of the external world (object permanence), a body schema, as well as the inner world of yourself and others (theory of mind). The body schema answers the simple question “Where are my limbs?” Without a body schema, we could still do anything we can do without it, but it would be much harder. This becomes apparent when we, for example, try to manipulate something with our hands when our vision is warped. When we can observe our hands only through a mirror, we have to translate our movements consciously (left becomes right); we cannot rely on our body schema and simply grab an object. In principle, mammals use some sort of body schema to incorporate changes in their physiology (body size, limb length, etc.) as they are growing up. Similarly, we have learned how the superior parietal lobe maintains a mental representation of the internal state of the body. While the current state is provided by the internal and external senses, it is significantly more accurate to have an internal model that is constantly updated by the input from the senses [Wolpert et al., 1998]. We see the same model-building when using tools. Our brain sees tools as temporary extensions of our limbs. To do this, our brain manages several different body schemas (models), one for each type of tool. While humans are typically able to do this, many other animals have difficulties with this task. For example, dogs often fail to incorporate things into their body schema. Let us consider a dog with a stick: when holding a stick in her mouth, she might struggle to get through openings because, in her mind, the opening is wide enough for her body, but she might fail to include in her calculation the space needed for the stick. There are two possible ways for the dog to deal with this situation. One way would be that she learns through trial and error to tilt her head when walking through an opening while holding a stick. 
The downside of this approach is that it might work for some openings and sticks but not for others. Whenever she faces difficulties, she would again have to rely on trial and error until she turns her head in a way so that she can pass through with the stick. A better way would be that she creates a model in her mind of herself with the stick in her three-dimensional environment. This way, she could understand why turning her head is important, and to what degree she has to turn her head depending on the dimensions of the stick and the opening through which she is moving. Unsupervised learning Using unsupervised learning, a brain (or computer) can build a concept by analyzing several sense perceptions, finding commonalities, and dropping measurements. For example, unsupervised learning could be used to form the concept “table” by encountering several different tables and finding out that they share properties like having a table-top, the form, material, and size of the table-top, and the number of table legs. Our brain uses both supervised and unsupervised learning, one to decide upon an action, the other to model the world and make predictions. We use unsupervised learning to create a model of the world to predict what will happen, evaluate it, and feed it back to our supervised learning as “reward” or “punishment.” Applied to the example of the thermostat, we would know how quickly it cools down, how long it takes to heat up the whole room, differences between summer and winter, habits of the people living there, etc. during installation. With this information, the thermostat can calculate the optimal time to switch the heating unit on or off (see Figure 6.27) without having to learn it by trial-and-error, significantly speeding up the progress of adapting to new environments. But the approach of having a model of a situation to which you can apply parameters allows more than just quicker learning. It enables you to provide reasons for your decision and explain it to others using your model. For example, in the case of the thermostat, it might suddenly start the heating unit during the day even though the sun is shining. Without additional information from the thermostat, we might start wondering if it is defective. In such a case, a smart thermostat could inform us, for example, that the window is open or that the weather service reports an upcoming snow storm. We could also tell the thermostat that we have just installed automated blinds that open in the morning (allowing the sun to heat up the room). The thermostat could take that information, update its model and start heating the room correctly (maybe with some minor adjustments) on the first day without having to spend weeks of learning the new environment. The thermostat could even start giving us hints about how to reduce the energy bill. This requires an understanding of the underlying model, and the ability to translate it into language. Basically, this refers to the ability to teach the user a simplified model the thermostat itself is using. Ultimately, a thermostat can only be seen as being as smart as we are if it is able to program (“teach”) another thermostat. In a social setting, this would be the equivalent of explaining our own behavior to others. If you are, for example, at a lecture and all your brain is doing is to make you jump up and rush home, what would the people around you think? 
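Returning to the thermostat for a moment, a tiny sketch may help show what having a model buys over trial and error. None of this is from the book: the heating rate and temperatures are invented, and a real smart thermostat would estimate such parameters from its sense data instead of being handed a constant.

; a crude model of the room: how fast the heater warms it up
(define heating-rate 2.0)   ; degrees per hour, assumed rather than learned here

; predict how long the heater must run to reach the target temperature
(define (hours-needed current target)
  (max 0 (/ (- target current) heating-rate)))

; decide when to switch on so the room is warm at wake-up time
(define (switch-on-time wake-up-hour current target)
  (- wake-up-hour (hours-needed current target)))

(switch-on-time 7 16 21)   ; => 4.5, i.e., start heating around 04:30

Because the decision is derived from an explicit model, the same little program can also justify itself ("the room was at 16 degrees and warms about 2 degrees per hour") and answer what-if questions simply by changing the parameters, which is exactly the kind of explanation the lecture-hall example above is asking for.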
Instead, if you are conscious of the steps leading up to the decision (for example, remembering the electric iron you forgot to switch off, combined with a self-image of being forgetful), you can provide a reasonable explanation for your behavior. Another example for an application of unsupervised learning is in classifying images using artificial intelligence. A computer system based on unsupervised learning still requires that it is presented with inputs, but it does not have to interact with its environment to adapt to new (but similar) conditions. It simply learns how to describe or differentiate the input according to a number of variables, not to make actual decisions. For example, if we provided the system with pictures of faces, with the unsupervised learning method, it would return variables relating to age, sex, hair color, facial expression, eye color, facial geometry, and so on. This approach can be powerful because when encountering new objects that it has not yet observed (but that fit into its schema of properties), it can derive other properties from them. For example, if it has learned how faces change with age, it can make predictions about how the face might have looked in the past or will look in the future. In summary, it could be said that unsupervised learning allows you to imagine how things would be in a different situation. In the case of the thermostat, we could “ask” it about its heating plan if we were to move to the North Pole, the equator, the basement, or to a room with a lot of windows. With some additional programming, we could even inquire, for example, the heating costs we would save by moving to a smaller house or apartment. Applied again to humans, it seems clear that those of our ancestors who had more in-depth insight into the inner workings of their minds were able to form better relationships. Being able to arrive at logical decisions and to communicate how they came to those decisions made them reliable members of society. Others could better predict how they might act—assuming the explanations are not created after the fact as rationalizations for their behavior. Let us now apply what we have learned to the Turing test and the example of ordering a pizza. If the machine had an internal model of your preferences, it could recall the situation when it originally ordered you the pizza, include your new assessment of your preferences, and run another backpropagation on the original neurons that led to the decision. This way, it would lead to a higher evaluation of your preference for tuna pizza, and a lower evaluation of your preference for pepperoni pizza. And that is the power of learning using a model (see Figure 6.28) versus trial and error. Once we have conceptualized a situation and learned how to handle it, we can easily apply the knowledge to similar situations with different parameters (measurements). And in the context of a conversation, we can directly incorporate new information in our response. This concludes our examination of the elements of consciousness. It is now time to collect our findings and form a coherent theory of consciousness. So far, we have learned: - We learned how different brain parts evolved over time, and how they relate to decision-making. We still need to take a look at how each step contributes to evolutionary fitness. Our goal is to determine what use consciousness has for us. - This article led us through our evolution, comparing humans to other primate species. 
With what we have learned in the other parts of the book, we still need to precisely point out the differences between humans and apes when it comes to consciousness (assuming there are any). - Then, we discussed how the brain tries to predict the future, and how it builds a body schema. Similarly, the brain learns to create a theory of mind. - We found that it makes the most sense to look at consciousness from a monist perspective (materialism). This helped us to draw a diagram explaining the process of consciousness step by step. - We covered what effects various cognitive defects can have on our perception of self. We clarified the difference between a lack of awareness and blindness and discovered that it is not only our senses that are pre-processed, but also our attention. We cannot react to things we are not being made aware of by our brain, but we can learn strategies to overcome such a lack of consciousness. - We learned what the “inner voice” and “mind’s eye” are, and how these relate to the working memory and the prefrontal cortex. This also allowed us to clarify our understanding of the process that creates consciousness. The remaining question was where the subjective experience ultimately comes from. - Finally, we saw how the brain can build models of the world and in what way they are advantageous to us. We tied this in to our discussion of concepts in Philosophy for Heroes: Knowledge. In summary, what is still missing is an explanation for the subjective experience of consciousness and a discussion of the evolution of consciousness in comparison to that of apes. In the course of this discussion, we will develop a new theory, the awareness schema, which we will discuss in the next section.
Strategic readers know that the purpose of reading is to understand, and that there are a number of comprehension strategies that they can adopt in order to build knowledge from reading. Strategic readers are able to apply such strategies in “the process of simultaneously extracting and constructing meaning through interaction and involvement with written language,” as the Rand Reading Comprehension Study Group defines reading comprehension. Strategic readers process information by constantly monitoring their understanding. In that process, strategic readers, when faced with understanding challenges, engage in problem solving and self-correction. Buehl (2007) provides insight into the seven cognitive processes of proficient readers, beginning with making connections to one’s prior knowledge – which is regarded as the one most critical strategy for learning to take place, since no new knowledge or understanding is constructed in a vacuum – to generating questions, creating mental sensory images, inferencing, prioritizing, and synthesizing the information being read. Add intrinsic motivation to that process and the result is a strategic and engaged reader. Importantly, the more one reads to understand, the more motivated one becomes, and the more social interaction ensues, for engaged readers are prone to sharing and socially connecting around what they are learning. The more one interacts, the more strategies are mobilized, and the more one’s knowledge base grows, leading to the desire to read more. That is the engagement cycle, as defined by Swan (2003). Teachers and librarians not only can but indeed they must foster engagement and self-regulation as critical ingredients for strategic reading by means of a balanced comprehension instruction approach that encompasses a supportive classroom context and a model of comprehension instruction that models and supports the development of reading strategies for learners. (Duke & Pearson, 2002) The seven comprehension processes of proficient readers (Buehl, 2007) are mirrored in the six individual comprehension strategies that excellent reading teachers are intent at scaffolding and modelling for and with learners (Duke & Pearson, 2002). Think-alouds represent a strategy for activating schemata and generating questions. Inferencing and determining importance are engendered in text structure analysis. The ability to synthesize is exercised by means of summarization. The creation of visual and sensory representations is both a process and a strategy which boosts one’s synthesis capacity, also helping in the self-monitoring and self-correction process. RAND (2002) provides us with the heuristic for thinking about reading comprehension, defining its components: the reader, the text, and the activity, all of which are nestled within a sociocultural dimension, which is by and large overlooked by NAEP (2015/2019). Indeed, how does a standardized assessment account for a virtually infinite sociocultural variance? If RAND gives us the reader, the text, and the activity, Duke & Pearson (2002) give us the teacher, and CORI (Swan, 2003) pulls in the sociocultural dimension by articulating skills and strategies, knowledge, motivation, and social collaboration. Concept-Oriented Reading Instruction gives us the ‘how’ by leveraging the three basic needs for intrinsic motivation, namely competence, autonomy, and belonging. In other words, learners need a sense of self-efficacy, choice, and opportunities for social interaction and human connection. 
CORI then represents the supportive classroom context articulated by Duke & Pearson, one in which learners read a lot, read for real reasons, engage in high quality talk about text, and ultimately strive for the construction of conceptual knowledge. I find that one of the greatest challenges in my own teaching context, as well as in the Brazilian educational system, is making instruction coherent as opposed to fragmented, as proposed in CORI. Such coherence and transdisciplinarity entails a whole set of very unique beliefs that have not necessarily been cultivated by teacher training programs, and certainly not in traditional and mainstream educational settings. The shift from a fragmented to a relational and systemic view of and approach to knowledge construction and instructional design (as opposed to lesson planning) requires major transformations in teacher education and school culture. Fragmented teaching is not time productively spent, at least not for learners. It may make teaching and planning less complex for the teacher, but it is ultimately disengaging for all involved in the educational experience. I was particularly struck by the difference between lesson planning and instruction design: intention. Johnson (2014), referring to the implications of the TPACK framework, says: “The framework supports and deepens literacy practices, allowing teachers to become thoughtful instructional designers. Washburn (2010) writes, ‘Instructional design differs from lesson planning, the term we traditionally use to describe a teacher’s pre-instruction preparation. Designers communicate by intentionally combining elements’ (pp.2-3)” I wonder what a teacher development program that enables teachers to operate that shift from fragmented to systemic, from instructor to coach, from planner to designer, looks like? I wonder what will it take for teachers of all subject areas to realize that they are, first and foremost, literacy teachers? Duke, N.K. & Pearson, P. D. (2002). Effective practices for developing reading comprehension. In What Research Has to Say About Reading Instruction, 3rd edition. International Reading Association. Swan (2003). Why is the North Pole Always Cold? In Concept-Oriented Reading Instruction (CORI): Engaging Classrooms, Lifelong Learners Johnson, D. (2014) Reading, Writing and Literacy 2.0. Teaching with Online Texts, Tools, and Resources, K-8 (chapters 1 & 2) Buehl, D. (2014). Fostering Comprehension of Complex Texts (Chapter 1 pages 3-11) in Classroom Strategies for Interactive Learning (4th Edition). International Literacy Association. RAND Reading Study Group. (2002). Reading for understanding: Toward an R&D program in reading comprehension. Santa Monica, CA: Rand. National Assessment of Educational Progress [NAEP] Abridged Reading Framework for the 2015/2019 My purpose in this paper is to reflect on my experience in the Summer Institute in Digital Literacy in terms of how the learning experience impacted my role as an educator, digital storyteller, and as a leader engaged in promoting innovation processes in my pedagogical practice with teacher professional and digital literacy development. In my roamings within my original academic field of study, Cultural Anthropology, as well as in my existential rumblings through adulthood, I have embodied my life experience around the theme of story, inspired by the concept of bliss, as proposed by american Mythologist Joseph Campbell. 
Campbell traces back the notion of bliss to the Sanskrit phrase sat chit ananda, which he translates as being, consciousness, and rapture. He articulates his interpretation of this transcendental phrase in his famous quote (2004), and one which has been a source of personal and professional inspiration: “Follow your bliss and the universe will open doors where there were only walls.” Campbell talks about one-to-one conferences with his college students and how he was able to notice a student’s eyes light up when an idea or theme sprang up into the conversation. He would make a point of encouraging his students to pursue those ideas further, to allow themselves to be driven by their curiosity and follow their bliss. Thus, it is in pursuing a journey of inquiry, driven by themes that enrapture our senses, our intellect and spirit, that we perceive value in our existence. People are not seeking the meaning of life per se. What we seek, according to Campbell, is an experience of being alive. We create meaning from acting on our experience, and also from reflecting on the consequences and the value that results from reflecting on experience. (Dewey 1916, 92) In that sense, I would like to explore the meaning making that is achieved in the process of storytelling by sharing some of my digital literacy learning background, for it is in the process of creating and sharing our stories, our own personal myths with others, that we gain a deeper sense of identity, confidence, and purpose. Importantly, reflection on experience, as expressed in stories, operates change in how we view ourselves and in our being in the world. We change, therefore, we learn through sharing our personal experience of embodied life with those around us. My central thesis in this reflective piece is that digital storytelling is a rich means for empowerment, playing a critical role in scholarly, professional and personal identity formation in educators. My story as an inhabitant of the digital universe began in 2014, when I explored the concept of rhizomatic learning in participation within a community of global educators in a cMooc (connectivist massive open online course) called Rhizomatic Learning: the Community is the Curriculum. At the time, the feeling was that of diving into the deep end of the digital literacy swimming pool. Driven by my curiosity and a longing for interaction in a new learning space, I immediately began engaging with other members of this rhizomatic community, who responded to Dave Cormier’s course provocation-assignments in the form of blogs and digital art, which were also shared on Twitter via the hashtag #Rhizo14. My first digital creations took the shape of writing in my own, then newly-created blog, and also in interacting with other participants in their digital spaces on their blogs and on social media. This was the year I began building my professional learning network on Twitter, which plays a critical part in my everyday digital literacy and professional learning habits. Notably, the more participants engaged in collaborating to cocreate new meaning by commenting on one another’s blog posts and tweets, the deeper our appreciation for collaborative inquiry became, which is one of the three core design elements of the Summer Institute in Digital Literacy. My Rhizo14 cMooc experience was intense, generating dense human connections, despite the fact that it was fully online. 
One of such strong bonds were with Egyptian scholar Bali Maha, whose prolific digital scholarship has been an inspiration in my own learning in digital spaces. My digital literacy learning journey both in Rhizo14 and in SIDL have helped me increase my confidence as a digital creator and storyteller, for they were both instances which enabled me to exercise my voice and choice. Indeed, my sense of agency as a learner was increased in such inquiry-driven, collaborative educational contexts. After all, it is: “(…) by choosing how to creatively express ideas and create media, as well as explore different ways of taking social action, (that) learners may explore their identities as citizens who can improve their communities and society.” (Hobbs et al. 2019) The intensive face-to-face dimension of SIDL brought to the fore the interactive and relational aspect of digital creation. The minds-on, hands-on work with my dyad partner, Carla Arena, and the interplay between our different modes of collaboration (I am a “southwest” and she is a “northeast” as identified in the compass points dynamics we engaged in during SIDL) shifted the focus from skills with digital tools to interpersonal and time management skills in managing the complexity which naturally emerged from our collaboration. Furthermore, our knowledge with the digital tools used in our design studio project shifted our challenge focus from digital skills to digital literacy, in the sense that we were concerned with how the audience of our project – educators – would engage and make meaning with the digital artifacts we were creating together. In other words, we found ourselves more focussed on the who, when, where, why and how learners would make sense of the digital materials we were engaged in co-creating. This is the difference between digital skills and digital literacies: “We often hear people talk about the importance of digital knowledge for 21st-century learners. Unfortunately, many focus on skills rather than literacies. Digital skills focus on what and how. Digital literacy focuses on why, when, who, and for whom.” Maha Bali (2016) Another core design element of SIDL is motivation as a primer for learning and development. A powerful connection with motivation is the concept of the ‘Golden Circle’, as proposed by Simon Sinek. Agency is heightened by a clear sense of purpose, and that is what we experienced in the SIDL opening event. Participants were prompted to create their own meanings of digital and media literacy, and share those with other participants and faculty members in order to spark communal dispositions among all people in the learning space. Moreover, the Digital Learning Motivation Profiles list supported participants in making connections between their educator identities and their motivations for engaging in digital and media literacies developmental work with learners. This horoscope-style self-assessment served as a catalyst for people to orbit towards others with similar motivations, fostering further connection among participants, who also felt valued and respected in their diversity of motivations in approaching the work of digital and media literacies. Importantly, this illustrates the very nature of the digital literacy mosaic created by the engagement of people coming from multiple knowledge areas and interests. The third design element at the core of SIDL is the most powerful, and the one which most resonates with my work as a change agent and social practitioner in the field of innovation in education. 
As an advocate for human-centered innovation, I share the concern expressed by Hobbs et al. (2019) with regards to the reduction of digital literacy pedagogies to the practice of the so-called personalization of learning which is driven by software algorithms. Such device-centered approach to digital literacies development disengages and disenfranchises both educators and learners, promoting a dangerous power shift which puts the machine in the center of learning, rendering the human element of the experience less important and peripheral in the essentially human process of learning though meaning making and the construction of understanding. Rather than personalized, learning is personal in that it is built in inquiry-driven cooperation among people. According to Dewey, as argued by Dyehouse, “shared understandings are the consequence, not the cause of cooperative action.” Dyehouse continues citing Biesta (2006, 30): “For Dewey, education is more basically a matter of ‘those situations in which one really shares or participates in a common activity, in which one really has an interest in its accomplishment just as others have.” Dyehouse (2016, 175-176) These are the participatory situations in which successful collaborative activity results in learning and understanding. Dyehouse concludes by saying that “(…) for Dewey, the real key to understanding is in doing things together.” This view of making learning personal validates the networked and collaborative practices I have adopted in the design of professional development opportunities for educators both with my dyad partner Carla Arena in Amplifica, and in my role as innovation specialist in my school, Casa Thomas Jefferson. In the first Amplifica seminar for educators, in which I participated as a presenter in 2015, my talk was titled “The Power of Connections”. This was an inspirational talk in which I shared the design principles informing the technology integration and digital literacy development practices adopted in one of my early projects as technology integration coach in my school. Similarly to Hobbs et al. (2019, 408), I believe that the work of digital literacy development requires the intentional design of professional development opportunities that: “(…) foster teacher agency so educators gain confidence in designing their own lesson plans and instructional units for inquiry-based digital learning. We see teachers as eminently capable of supporting and scaffolding student learning through inquiry and collaboration.” Hobbs et al. (2019, 408) Bali (2016) mentions the 8 elements of digital literacies proposed by Belshaw (2014). Interestingly, she points out the element of confidence is an important one among the elements. Belshaw explains that the element of Confidence requires a slightly different approach to its development in comparison to the other elements, for Confidence is a transversal element to all others. He refers to the process of Confidence development in digital literacies as the act of connecting the dots. According to Belshaw (2014, 52): “Developing the Confident element of digital literacies involves solving problems and managing one’s own learning in digital environments. This can be encouraged by the kind of practices that work well in all kinds of learning experiences. Namely, self-review focusing on achievement and areas of development, paired with mentoring. I believe P2PU’s ‘schools’ to be an extremely good example of an arena in which the Confident element of digital literacies can be developed. 
Not only are learners encouraged to reflect on their practices, but to form a community. Such communities can help build confidence.” Belshaw (2014, 52) Finally, in SIDL, we had the opportunity to experience the Personal Digital Inquiry model proposed by Coiro et al. (2016) scaffolding our knowledge building and the development of participants’ digital literacy skills. The PDI model was clearly articulated throughout the SIDL immersive learning experience, its power notably evident as the Design Studio unfolded. Dyad partners dove deeply into the inquiry process by wondering and discovering, accessing, analyzing and evaluating knowledge and ideas in collaboration and discussion, then taking action and creating digital artifacts with which to promote learning. Reflection pushed us forward and back into the PDI Framework for Teaching and Learning, eliciting the refinement of our final projects. Keynotes and workshops in SIDL were instances of teacher-driven action quadrants illustrated below the (green) line of inquiry in the image, in the giving and prompting stages of technology for knowledge building. We were then gradually released into the upper, learner-driven quadrants of making and reflecting as the inquiry was sustained until the end culminating event where dyads proudly shared their learning artifacts with the whole community. Circling back to the element of confidence in the development of digital literacies, I find myself wondering about the interplay between one’s process of confidence development and the development of one’s leadership persona. I am intrigued by the inner workings of the identity formation of a digitally literate individual, learners and educators alike, in such collaborative learning environments. The experience of exploring imagery that would represent ourselves as digital literacy leaders in our own contexts was a very powerful one to me, in particular. I gravitated towards a picture of a mystic crossing the threshold of visible reality in order to unveil the inner workings in the backstage of the universe. This symbolic exercise provided me with new language to articulate how I sense my calling to lead change in my educational context. I was left feeling a sense of potency and intentionality with regards to the leader in me. Interestingly enough, I am now engaging in the design and facilitation of a leadership academy for middle managers in my educational organization. Ever since my experience in SIDL, I have gained a renewed sense of agency, self-efficacy, and even courage to tackle this great challenge. SIDL has made feel validated in my rhizomatic and communal approach to learning, leaving me with a sense of belonging and sustained curiosity for what is to come. Lands End, San Francisco, CA. June 21, 2019. This video is how I felt after the three days I spent engaging in PBL World 2019. Clouds dissipated, and I could clearly see ahead, a new horizon – it had always been there. This was when I came to the realization that I had been going at innovation in education from peripheral perspectives – educational technology, technology integration, active learning methodologies, digital citizenship, media literacy, deep learning, 21st century learning, maker-centered learning, social-emotional skills development – all terms that we hear being thrown around when innovation in education is being discussed and advocated. Those are all great, but they are all peripheral. They orbit around a core which is pedagogical, and that is project-based learning. 
PBL is the pedagogy that naturally pulls all those components together. Sustained inquiry generates critical thinking as a natural byproduct of collaboration and communication for an authentic purpose, to solve an authentic problem. Technology serves a concrete purpose, that of documenting, demonstrating and showcasing learning. Tools are there for student creation, not for the sake of learning a new cool tech tool, but to make learning visible. PBL mobilizes the whole individual – teacher and students alike. Projects are how people work together to create things in the world. However, PBL requires a very specific type of teacher, a true educator, awakened and moved by the vision of equity in education. Meeting each student where they are, hands-on, minds-on work. Beautiful work. I am about to go further down the rabbit hole. Moved by this insight of PBL as the core pedagogy for all things innovative about education, I am looking to explore this idea: what does professional development that will inspire teachers to become PBL educators look like? How might we support teachers in their journey towards the development of the refined pedagogical skills that will enable them to sustain inquiry-based learning in partnership with their students?
Report on Etna (Italy) — February 1992 Bulletin of the Global Volcanism Network, vol. 17, no. 2 (February 1992) Managing Editor: Lindsay McClelland. Etna (Italy) Continued flank lava production Please cite this report as: Global Volcanism Program, 1992. Report on Etna (Italy) (McClelland, L., ed.). Bulletin of the Global Volcanism Network, 17:2. Smithsonian Institution. https://doi.org/10.5479/si.GVP.BGVN199202-211060 37.748°N, 14.999°E; summit elev. 3357 m All times are local (unless otherwise noted) The following is from a report by the Gruppo Nazionale per la Vulcanologia (GNV) summarizing Etna's 1991-92 eruption. 1. Introduction and Civil Protection problems. After 23 months of quiet, and heralded by ground deformation and a short seismic swarm, effusive activity resumed at Etna early 14 December. The eruptive vent opened at 2,200 m elevation on the W wall of the Valle del Bove, along a SE-flank fracture that formed during the 1989 eruption. Since the eruption's onset, the GNV, in cooperation with Civil Protection authorities, has reinforced the scientific monitoring of Etna. Attention was focused on both the advance of the lava flow and on the possibility of downslope migration of the eruptive vent along the 1989 fracture system. The progress of the lava flow has been carefully followed by daily field inspections and helicopter overflights. Because of its slow rate of advance, the lava did not threaten lives, but had the potential for severe property destruction. The water supply system for Zafferana (in Val Calanna; figure 43) was destroyed in the first two weeks of the eruption ($2.5 million damage). On 1 January, when the lava front was only 2 km from Zafferana, the Minister for Civil Protection, at the suggestion of the volcanologists, ordered the building of an earthen barrier to protect the village. The barrier was erected at the E end of Val Calanna, where the valley narrows into a deeply eroded canyon. The barrier was conceived to prevent or delay the flow's advance, not to divert it, by creating a morphological obstacle that would favor flow overlapping and lateral expansion of the lava in the large Val Calanna basin. The barrier, erected by specialized Army and Fire Brigade personnel in 10 days of non-stop work, is ~ 250 m long and ~ 20 m higher than the adjacent Val Calanna floor. It was built by diking the valley bottom in front of the advancing lava and accumulating loose material (earth, scoria, and lava fragments) on a small natural scarp. On 7 January, the lava front approached to a few tens of meters from the barrier, then stopped because of a sudden drop in feeding caused by a huge lava overflow from the main channel several kilometers upslope. A decrease in the effusion rate has been observed since mid-January. There is therefore little chance of further advance of the front, as the flow seems to have reached its natural maximum length. The eruptive fracture is being carefully monitored (seismicity, ground deformation, geoelectrics, gravimetry, and gas geochemistry) to detect early symptoms of a possible dangerous downslope migration of the vent along the 1989 fracture, which continues along the present fracture's SE trend. Preparedness plans were implemented in case of lava emission from the fracture's lower end. 
Many scientists and technicians, the majority of whom are from IIV and the Istituto per la Geochimica dei Fluidi, Palermo (IGF) and are coordinated by GNV, are collecting information on the geological, petrological, geochemical, and geophysical aspects of the eruption. 2. Eruption chronology. On 14 December at about 0200, a seismic swarm (see Seismicity section below) indicated the opening of two radial fractures trending NE and SSE from Southeast Crater. Very soon, ash and bombs formed small scoria ramparts along the NE fracture, where brief activity was confined to the base of Southeast Crater. Meanwhile, a SSE-trending fracture extended ~ 1.3 km from the base of the crater (at ~3,000 m asl) to 2,700 m altitude. Lava fountaining up to 300 m high from the uppermost section of the SSE fracture continued until about 0600, producing scoria ramparts 10 m high. Two thin (~ 1 m thick) lava flows from the fracture moved E. The N flow, from the highest part of the fracture, stopped at 2,750 m altitude, while the other, starting at 2,850 m elevation, reached the rim of the Valle del Bove (in the Belvedere area), pouring downvalley to ~ 2,500 m asl. At noon, the lava flows stopped, while the W vent of the central crater (Bocca Nuova) was the source of intense Strombolian activity. The SSE fracture system continued to propagate downslope, crossing the rim of the Valle del Bove in the late evening. During the night of 14-15 December, lava emerged from the lowest segment of the fracture cutting the W flank of the Valle del Bove, reaching 2,400 m altitude (E of Cisternazza). Degassing and Strombolian activity built small scoria cones. Two lava flows advanced downslope from the base of the lower scoria cone at an estimated initial velocity of 15 m/s, which dramatically decreased when they reached the floor of the Valle del Bove. The SSE fractures formed a system 3 km long and 350-500 m wide that has not propagated since 15 December. Between Southeast Crater and Cisternazza, the fracture field includes the 1989 fractures, which were reactivated with 30-50-cm offsets. The most evident offsets were down to the E, with right-lateral extensional movements. Numerous pit craters, <1 m in diameter, formed along the fractures. Lava flows have been spreading down the Valle del Bove into the Piano del Trifoglietto, advancing a few hundred meters/day since 15 December. The high initial outflow rates peaked during the last week of 1991 and the first few days of 1992, and decreased after the second week in January. Strombolian activity at the vent in the upper part of the fracture has gradually diminished. Lava flows were confined to the Valle del Bove until 24 December, when the most advanced front extended beyond the steep slope of the Salto della Giumenta (1,300-1,400 m altitude), accumulating on the floor of Val Calanna. Since then, many ephemeral vents and lava tubes have formed in the area N of Monte Zoccolaro, probably because of variations in the eruption rate. These widened the lava field in the area, and decreased feeding for flows moving into Val Calanna. However, by the end of December, lava flows expanded further in Val Calanna, moving E and threatening the village of Zafferana Etnea, ~2 km E of the most advanced flow front. This front stopped on 3 January, on the same day that a flow from the Valle del Bove moved N of Monte Calanna, later turning back southward and rejoining lava that had already stopped in Val Calanna. 
Since 9 January, lava flows in Val Calanna have not extended farther downslope, but have piled up a thick sequence of lobes. Lava outflow from the vent continued at a more or less constant rate, producing a lava field in the Valle del Bove that consisted of a complex network of tubes and braided, superposing flows, with a continuously changing system of overflows and ephemeral vents. 3. Lava flow measurements. An estimate of lava channel dimensions, flow velocity, and related rheological parameters was carried out where the flow enters the Valle del Bove. Flow velocities ranging from 0.4-1 m/s were observed 3-7 January in a single flow channel (10 m wide, ~ 2.5 m deep) at 1,800 m altitude, ~ 600 m from the vent. From these values, a flow rate of 8-25 m³/s and viscosities ranging from 70-180 Pa·s were calculated. Direct temperature measurements at several points on the flow surface with an Al/Ni thermocouple and a 2-color pyrometer (HOTSHOT) yielded values of 850-1,080°C. 4. Petrography and chemistry. Systematic lava sampling was carried out at the flow fronts and near the vents. All of the samples were porphyritic (P.I. ≈ 25-35%) and of hawaiitic composition, differing from the 1989 lavas, which fall within the alkali basalt field. Paragenesis is typical of Etna's lavas, with phenocrysts (maximum dimension, 3 mm) of plagioclase, clinopyroxene, and olivine, with Ti-magnetite microphenocrysts. The interstitial to hyalopilitic groundmass showed microlites of the same minerals. 5. Seismicity. On 14 December at 0245, a seismic swarm occurred in the summit area (figure 44), related to the opening of upper SE-flank eruptive fractures. About 270 earthquakes were recorded, with a maximum local magnitude of 3. A drastic reduction in the seismic rate was observed from 0046 on 15 December, with only four events recorded until the main shock (Md 3.6) of a new sequence occurred at 2100. The seismic rate remained quite high until 0029 on 17 December, declining gradually thereafter. At least three different focal zones were recognized. On 14 December, one was located NE of the summit and a second in the Valle del Bove. The third, SW of the summit, was active on 15 December. All three focal zones were confined to <3 km depth. Three waveform types were recognized, ranging from low to high frequency. As the seismic swarm began on 14 December, volcanic tremor amplitude increased sharply. Maximum amplitude was reached on 21 December, followed by a gradually decreasing trend. As the tremor amplitude increased, the frequency pattern of its dominant spectral peaks changed, increasing within a less-consistent frequency trend. Seismicity rapidly declined and remained at low levels despite the ongoing eruption. 6. Ground deformation. EDM measurements and continuously recording shallow-borehole tiltmeters have been used for several years to monitor ground deformation at Etna. The tilt network has recently grown to 9 flank stations. A new tilt station (CDV) established on the NE side of the fracture in early 1990 showed a steady radial-component increase in early March 1991 after a sharp deformation event at the end of 1990 (figure 45), suggesting that pressure was building up in the main central conduit. Maximum inflation was reached by October 1991, followed by a partial decrease in radial tilt, tentatively related to magma intrusion into the already opened S branch of the 1989 fracture system, perhaps releasing pressure in the central conduit.
The eruption's onset was clearly detected by all flank tilt stations, despite their distance from the eruption site. The signals clearly record deformation events closely associated in time with seismic swarms on the W flank (before the eruption began) and on the summit and SW sector (after eruption onset). The second swarm heralded the opening of the most active vent on the W wall of the Valle del Bove. S-flank EDM measurements detected only minor deformation, in the zone affected by the 1989 fracture. Lines crossing the fracture trend showed brief extensions in January 1992. The levelling route established in 1989 across the SE fracture was reoccupied 18-19 December 1991. A minor general decline had occurred since the previous survey (October 1990), with a maximum (-10 mm) at a benchmark near the fracture. 7. Gravity changes. Microgravity measurements have been carried out on Etna since 1986, using a network covering a wide area between 1,000 and 1,900 m asl. A reference station is located ~ 20 km NE of the central crater. Five new surveys were made across the 1989 fissure zone during the eruption (15 & 18 December 1991, and 9, 13, and 18 January 1992). Between 21 November and 15 December, the minimum value of the gravity variations was about -20 µGal, E of the fracture zone. On 9 January, the gravity variations inverted to a maximum of about +15 µGal. Amplitude increased and anomaly extension was reduced on 13 January, and on 18 January gravity variations were similar to those 9 days earlier. Assuming that height changes were negligible, a change in mass of ~2 × 10⁶ tons (a volume of ~2 × 10⁷ m³ for a density contrast of 0.1 g/cm³) was postulated. However, if the gravity changes were attributed to magma movement, a density contrast of 0.6 g/cm³ between magma and country rock could be assumed, and the magma displacement would be ~3 × 10⁶ m³. 8. Magnetic observations. A 447-point magnetic surveillance array was spaced at 5-m intervals near the fracture that cut route SP92 in 1989. Measurements of total magnetic field intensity (B) have been carried out at least every 3 months since October 1989. Significant long-term magnetic variations were not observed between February 1991 and January 1992, although the amplitude of variations seems to have increased since the beginning of the eruption. 9. Self-potential. A program of self-potential measurements along a 1.32-km E-W profile crossing the SE fracture system (along route SP92 at ~ 1,600 m altitude) began on 25 October 1989. Two large positive anomalies were consistently present during measurements on 5 and 17 January, and 9, 18, and 19 February 1992. The strongest was centered above the fracture system; the second was displaced to the W. Only the 5 January profile hints at the presence of a third positive anomaly, on its extreme E end. The persistent post-1989 SP anomalies could be related to a magmatic intrusion, causing electrical charge polarizations inside the overlying water-saturated rocks. A recent additional intrusion was very likely to have caused the large increase in amplitude and width of the SP anomaly centered above the fracture system, detected on the E side of the profile on 5 January 1992. 10. COSPEC measurements of SO2 flux. The SO2 flux from Etna during the eruption has been characterized by fairly high values, averaging ~ 10,000 t/d, ~ 3 times the mean pre-eruptive rate. Individual measurements varied between ~6,000 and 15,000 t/d. 11. Soil gases.
Lines perpendicular to the 1989 fracture, at ~ 1,600 m altitude, have been monitored for CO2 flux. A sharp increase in CO2 output was recorded in September 1991, about 3 months before the eruption began (figure 46). Measurements have been more frequent since 17 December, but no significant variation in CO2 emission has been observed. Samples of soil gases collected at 50 cm depth showed a general decrease in He and CO2 contents since the beginning of January. Soil degassing at two anomalous exhalation areas, on the lower SW and E flanks at ~ 600 m altitude, dropped just before (SW flank—Paternò) and immediately after (E flank—Zafferana) the beginning of the eruption, and remained at low levels. A significant radon anomaly was recorded 26-28 January along the 1989 fracture, but CO2 and radon monitoring have been hampered by snow.

Figure 46. CO2 concentrations measured along Etna's 1989 fracture, late 1990-early 1992, showing a strong increase about 3 months before the December 1991 eruption. Courtesy of the Gruppo Nazionale per la Vulcanologia.

The following, from R. Romano, describes activity in February and early March. The SE-flank fissure eruption was continuing in early March, but was less vigorous than in previous months. An area of ~ 7 km² has been covered by around 60 × 10⁶ m³ of lava, with an average effusion rate of 8 m³/s. The size of the lava field (figure 43) has not increased since it reached a maximum width of 1.7 km in mid-February. Lava from fissure vents at ~ 2,100 m asl flowed in an open channel to 1,850 m altitude, then advanced through tubes. Flowing lava was visible in the upper few kilometers of the tubes through numerous skylights. Lava emerged from the tube system through as many as seven ephemeral vents on the edge of the Salto della Giumenta (at the head of the Val Calanna, ~ 4.5 km from the eruptive fissure). These fed a complex network of flows in the Salto della Giumenta that were generally short and not very vigorous. None extended beyond the eruption's longest flow, which had reached 6.5 km from the eruptive fissure (1,000 m asl) before stopping in early January. Ephemeral vent activity upslope (within the Valle del Bove) ceased by the end of February. Lava production from fissure vents at 2,150 m altitude has gradually declined and explosive activity has stopped. Degassing along the section of the fissure between 2,300 and 2,200 m altitude was also gradually decreasing. Small vents were active at the bottom of both central craters. Activity at the west crater (Bocca Nuova) was generally limited to gas emission, but significant ash expulsions were observed during the first few days in March. High-temperature gases emerged from the E crater (La Voragine). Collapse within Northeast Crater, probably between 26 and 27 February, was associated with coarse ashfalls on the upper NE flank (at Piano Provenzana and Piano Pernicana). After the collapse, a new pit crater ~ 50 m in diameter occupied the site of Northeast Crater's former vent. Activity from Southeast Crater was limited to gas emission from a modest-sized vent. Seismic activity was characterized by low-intensity swarms. A few shocks were felt in mid-February ~ 12 km SE of the summit (in the Zafferana area). Reference. Barberi, F., Bertagnini, F., and Landi, P., eds., 1990, Mt. Etna: the 1989 eruption: CNR-Gruppo Nazionale per la Vulcanologia: Giardini, Pisa, 75 p. (11 papers). Geological Summary.
Mount Etna, towering above Catania on the island of Sicily, has one of the world's longest documented records of volcanism, dating back to 1500 BCE. Historical lava flows of basaltic composition cover much of the surface of this massive volcano, whose edifice is the highest and most voluminous in Italy. The Mongibello stratovolcano, truncated by several small calderas, was constructed during the late Pleistocene and Holocene over an older shield volcano. The most prominent morphological feature of Etna is the Valle del Bove, a 5 x 10 km caldera open to the east. Two styles of eruptive activity typically occur, sometimes simultaneously. Persistent explosive eruptions, sometimes with minor lava emissions, take place from one or more summit craters. Flank vents, typically with higher effusion rates, are less frequently active and originate from fissures that open progressively downward from near the summit (usually accompanied by Strombolian eruptions at the upper end). Cinder cones are commonly constructed over the vents of lower-flank lava flows. Lava flows extend to the foot of the volcano on all sides and have reached the sea over a broad area on the SE flank. Information Contacts: GNV report:F. Barberi, Univ di Pisa; L. Villari, IIV. February-early March activity:R. Romano and T. Caltabiano, IIV; P. Carveni, M. Grasso, and C. Monaco, Univ di Catania. The following people provided information for the GNV report. Institutional affiliations (abbreviated, in parentheses) and their report sections [numbered, in brackets] follow names. F. Barberi (UPI) [1, 2], A. Armantia (IIV) , P. Armienti (UPI) [2, 4], R. Azzaro (IIV) , B. Badalamenti (IGF) , S. Bonaccorso (IIV) , N. Bruno (IIV) , G. Budetta (IIV) [7, 8], A. Buemi (IIV) , T. Caltabiano (IIV) [8, 10], S. Calvari (IIV) [2, 3], O. Campisi (IIV) , M. Carà (IIV) , M. Carapezza (IGF, UPA) , C. Cardaci (IIV) , O. Cocina (UGG) , D. Condarelli (IIV) , O. Consoli (IIV) , W. D'Alessandro (IGF) , M. D'Orazio (UPI) [2, 4], C. Del Negro (IIV) [7, 8], F. DiGangi (IGF) , I. Diliberto (IGF) , R. Di Maio (DGV) , S. DiPrima (IIV) , S. Falsaperla (IIV) , G. Falzone (IIV) , A. Ferro (IIV) , F. Ferruci (GNV) , G. Frazzetta (UPI) , H. Gaonac'h (UMO) [2, 3], S. Giammanco (IGF) , M. Grasso (IIV) , M. Grimaldi (DGV) , S. Gurrieri (IGF) , F. Innocenti (UPI) , G. Lanzafame (IIV) , G. Laudani (IIV) , G. Luongo (OV) , A. Montalto (IIV, UPI) , M. Neri (IIV) , P. Nuccio (IGF, UPA) , F. Obrizzo (OV) , F. Parello (IGF, UPA) , D. Patanè (IIV) , D. Patella (DGV) , A. Pellegrino (IIV) , M. Pompilio (IIV) [2, 3, 4], M. Porto (IIV) , E. Privitera (IIV) , G. Puglisi (IIV) [2, 6], R. Romano (IIV) , A. Rosselli (GNV) , V. Scribano (UCT) , S. Spampinato (IIV) , C. Tranne (IIV) , A. Tremacere (DGV) , M. Valenza (IGF, UPA) , R. Velardita (IIV) , L. Villari (IIV) [1, 2, 6]. Institutions: DGV: Dipto di Geofisica e Vulcanologia, Univ di Napoli; GNV: Gruppo Nazionale per la Vulcanologia, CNR, Roma; IGF: Istituto per la Geochimica dei Fluidi, CNR, Palermo; IIV: Istituto Internazionale di Vulcanologia, CNR, Catania; OV: Osservatorio Vesuviano, Napoli; UCT: Istituto di Scienze della Terra, Univ di Catania; UGG: Istituto di Geologia e Geofisica, Univ di Catania; UMO: Dept de Géologie, Univ de Montréal; UPA: Istituto di Mineralogia, Petrologia, e Geochimica, Univ di Palermo; UPI: Dipto di Scienze della Terra, Univ di Pisa.
PS/Tk stands for a portable Scheme interface to the Tk GUI toolkit. It has a rich history going all the way back to Scheme_wish by Sven Hartrumpf in 1997. Wolf-Dieter Busch created a Chicken port called Chicken/Tk in 2004. It took on its current name when Nils M Holm stripped it of Chicken-isms to make it portable amongst Scheme implementations in 2006. If you've ever tried to write portable Scheme, you know that, except for the most trivial of programs, it is much easier said than done. Holm's pstk.scm had a configurable section titled NON-PORTABLE that you had to adapt for your chosen implementation. It came full circle and was repackaged as a Chicken egg. Chicken is a popular Scheme implementation that compiles Scheme to C. Eggs are Chicken-specific extension libraries that are stored in a centralized repository (like CPAN, but for Chicken Scheme).

Instead of building yet another calculator, let's build a GUI for generating a tone. You'll need Chicken installed; it's available in the repositories of most Linux distros. PS/Tk interfaces with Tk not through C library bindings but through a named pipe to tclsh8.6. The Tcl package in most Linux distros will provide this. For Debian, I did sudo apt install chicken-bin tcl tk. Once Chicken is installed, you can use the chicken-install utility that comes with it to install the PS/Tk egg.

$ chicken-install -sudo pstk

When you think of Tk, you may think of the dated look of its classic widgets. Tk has come a long way in recent years. Tcl/Tk 8.5 and later comes with a new set of widgets built in, called Tile or Ttk, that can be themed. These widgets are available alongside the classic widgets, so you have to explicitly tell your app to use Ttk or else it will end up looking like it was designed for a 1980s Unix workstation.

(import pstk)
(tk-start)
(ttk-map-widgets 'all) ; Use the Ttk widget set
(tk/wm 'title tk "Bleep")
(tk-event-loop)

All PS/Tk function names begin with tk- or tk/ (or ttk- for the few Ttk-specific functions). The doc directory in the PS/Tk GitHub repo unfortunately has not been updated since this convention was adopted. One example from the docs is start-tk, which is now tk-start. The ttk-map-widgets function is what tells Tk to use the Ttk widgets instead of the classic widgets. Tk comes with a few built-in themes. The default themes on Windows and macOS supposedly do a decent job of approximating the look of native widgets on those platforms. I don't use either of those platforms, so I can't verify this first hand. For some reason, the default theme on Linux is vaguely Windows 95ish. Tk does come with a built-in theme called clam that is supposed to provide "a look somewhat like a Linux application". You can set this theme with (ttk/set-theme "clam"), but it's really not that much of an improvement. Ideally, something like gtkTtk, which has GTK do the actual drawing, would be integrated into Tcl/Tk and become the default on Linux. In the meantime, there are third-party themes that imitate the look and feel of the most popular GTK and Qt themes. I use MATE with the Arc GTK theme, so I went with the Arc theme. There was even a Debian package for it (sudo apt install tcl-ttkthemes). We can then apply the theme system-wide (echo '*TkTheme: arc' | xrdb -merge -) so that all Tk apps, such as git-gui, also inherit the theme.
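As an aside, one way to avoid hard coding a theme (this is only a sketch of the idea, not something provided by PS/Tk or required by the tutorial) is to let the user name a theme in an environment variable and fall back to Tk's default when it is unset. The variable name BLEEP_THEME is invented for illustration; ttk/set-theme is the call shown above.

(import (chicken process-context))   ; provides get-environment-variable

(let ((theme (get-environment-variable "BLEEP_THEME")))
  (when theme
    (ttk/set-theme theme)))

This keeps the theme decision in the user's hands rather than baked into the program.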
Either way, it is probably better to give your Linux users instructions on how to install their own theme instead of hard coding one with ttk/set-theme, so they can choose one that matches their system theme (KDE users might pick Breeze, while Ubuntu users might opt for Yaru). The screenshots in this tutorial use the Arc theme.

We set the window title with tk/wm and start the event loop with tk-event-loop. We now have an empty window. Now let's add some widgets to this window.

(define slider (tk 'create-widget 'scale 'from: 20 'to: 20000))
(slider 'set 440)
(tk/grid slider 'row: 0 'columnspan: 3 'sticky: 'ew 'padx: 20 'pady: 20)

Widgets are organized hierarchically. This is done by invoking a parent widget with the sub-command create-widget. PS/Tk associates a widget named tk with the top-level window, so most widgets will start as a call such as (tk 'create-widget 'label 'text: "Hello, World!"). Options are quoted and get a trailing colon (e.g. 'text: "Hello, World!"). Creating a widget returns a Scheme function; if you give this function a name, you can call it with sub-commands such as set. Just creating a widget doesn't make it appear on screen. For that you need a geometry manager, of which Tk has three: the packer, the gridder, and the placer (tk/pack, tk/grid, and tk/place in Scheme, respectively).

The range of frequencies audible by humans is typically between 20 Hz and 20 kHz (we lose the ability to hear some of those higher frequencies as we age). The musical note A above middle C (A4) is 440 Hz. Since A4 serves as a general tuning standard, it seems like a sensible default, but if you run the above in Chicken, you'll see the problem: the scale of 20 to 20,000 is so large that 440 doesn't appear to move the slider at all. Ideally, 440 would fall about the middle of the slider. To achieve this, let's use a logarithmic scale.

; Scale used by slider
(define *min-position* 0)
(define *max-position* 2000)
; Range of frequencies
(define *min-frequency* 20)
(define *max-frequency* 20000)

; Logarithmic scale for frequency (so middle A falls about in the middle)
; Adapted from https://stackoverflow.com/questions/846221/logarithmic-slider
(define min-freq (log *min-frequency*))
(define max-freq (log *max-frequency*))
(define frequency-scale (/ (- max-freq min-freq) (- *max-position* *min-position*)))

; Convert slider position to frequency
(define (position->frequency position)
  (round (exp (+ min-freq (* frequency-scale (- position *min-position*))))))

; Convert frequency to slider position
(define (frequency->position freq)
  (round (+ (/ (- (log freq) min-freq) frequency-scale) *min-position*)))

I added some global parameters to the top of the script. The variable name *min-position* is just a Lisp naming convention for global parameters. I came up with the range of 0-2,000 by trial and error; it seemed to strike the best balance between each step of the slider making a noticeable change to the frequency while still allowing the user to narrow in on a specific frequency with just the slider. Then we create two functions: one that takes the position on the slider and returns the frequency (position->frequency) and another that takes a frequency and returns the position on the slider (frequency->position). Now let's set the initial position of our slider with frequency->position:

(define slider (tk 'create-widget 'scale 'from: *min-position* 'to: *max-position*))
(slider 'configure 'value: (frequency->position 440))
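A quick sanity check of the mapping at the REPL (the exact results are mine; log and exp return inexact numbers, so treat the values as approximate):

; A4 should land a little left of the middle of the 0-2000 position scale...
(frequency->position 440)  ; => 895.0 (roughly)
; ...and converting that position back should recover about 440 Hz.
(position->frequency 895)  ; => 440.0 (roughly)

So 440 Hz sits at roughly position 895 out of 2,000, close enough to the middle for the slider to feel natural.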
Underneath the slider is a spin box showing the current frequency and buttons to increase/decrease the frequency by one octave.

; Create a spin box with a units label
; Returns the frame widget encompassing both spin box and label and the spin box
; widget itself. This way you can access the value of the spin box.
; e.g. (define-values (box-with-label just-box) (units-spinbox 1 12 6 "inches"))
(define (units-spinbox from to initial units)
  (let* ((container (tk 'create-widget 'frame))
         (spinbox (container 'create-widget 'spinbox 'from: from 'to: to
                             'width: (+ 4 (string-length (number->string to)))))
         (label (container 'create-widget 'label 'text: units)))
    (spinbox 'set initial)
    (tk/pack spinbox label 'side: 'left 'padx: 2)
    (values container spinbox)))

(define lower-button (tk 'create-widget 'button 'text: "<"))
(define-values (frequency-ext frequency-int)
  (units-spinbox *min-frequency* *max-frequency* 440 "Hz"))
(define higher-button (tk 'create-widget 'button 'text: ">"))
(tk/grid lower-button 'row: 1 'column: 0 'padx: 20 'pady: 20)
(tk/grid frequency-ext 'row: 1 'column: 1 'padx: 20 'pady: 20)
(tk/grid higher-button 'row: 1 'column: 2 'padx: 20 'pady: 20)

The frame widget is an invisible widget that helps with layout. Since all I need to arrange within the frame is a spin box and a label, I used tk/pack to place them side by side. The frame is then organized in a grid with the rest of the widgets. I created a function that I can reuse later to generate the spin box, label, and frame all together.

At this point, we are starting to have a nice looking interface, but it doesn't do anything. If you click the buttons or slide the slider, nothing happens. The widgets have a command option that wires the widget up to a function. If we add a command to the slider, that command will be called each time the slider is moved.

(define slider
  (tk 'create-widget 'scale
      'from: *min-position* 'to: *max-position*
      'command: (lambda (x)
                  (frequency-int 'set (position->frequency x)))))

The command for the slider takes one argument that indicates the new value of the slider. The spin box also has a command option, but that command is only called when the value is changed by clicking the up or down arrow, not when the value is changed by other means such as typing a frequency into the field. Tk has a bind command (the Scheme tk/bind function) that allows binding functions to an event on a widget. We'll bind our callback to the KeyRelease event. The tk/bind function takes up to three arguments: the first is the widget to bind to (or a tag created with tk/bindtags to apply the binding to multiple widgets); the second is the event pattern, which is surrounded by angle brackets and can specify modifiers, event types, and more (you can find detailed documentation on event patterns in the Tcl/Tk documentation); the third is a lambda expression to associate with the event.

(tk/bind frequency-int '<KeyRelease>
         (lambda ()
           ; If frequency value is a valid number, set slider to current value
           (let ((numified (string->number (frequency-int 'get))))
             (when numified
               (slider 'configure 'value: (frequency->position numified))))))

Now wire the buttons up to callback functions called decrease-octave and increase-octave. An octave is "the interval between one musical pitch and another with double its frequency."
; Set frequency slider and display
(define (set-frequency freq)
  (when (and (>= freq *min-frequency*) (<= freq *max-frequency*))
    (slider 'configure 'value: (frequency->position freq))
    (frequency-int 'set freq)))

; Buttons increase and decrease frequency by one octave
(define (adjust-octave modifier)
  (set-frequency (* (string->number (frequency-int 'get)) modifier)))
(define (decrease-octave) (adjust-octave 0.5))
(define (increase-octave) (adjust-octave 2))

If you slide the slider, the text field updates accordingly. If you type a number in the text field, the slider updates accordingly. All good, right? What if a user (and you know they will) enters a number higher than 20,000 or a letter? Let's extend the function that returns our labeled spin box to bind a validation function to the FocusOut event on the spin box. The spin box does have a validatecommand option, but I wasn't able to get it working. I looked through the examples that come with the various variations of PS/Tk and couldn't find a single example of a spin box with a validatecommand. I even looked at the source code for Bintracker, a chiptune audio workstation written in Chicken Scheme with a PS/Tk GUI and developed by the current maintainer of the PS/Tk egg. Even it binds a validate-new-value function to the FocusOut event of the spin box rather than using validatecommand:

; Create a spin box with a units label
; Returns the frame widget encompassing both spin box and label and the spin box
; widget itself. This way you can access the value of the spin box.
; e.g. (define-values (box-with-label just-box) (units-spinbox 1 12 6 "inches"))
(define (units-spinbox from to initial units)
  (let* ((container (tk 'create-widget 'frame))
         (spinbox (container 'create-widget 'spinbox 'from: from 'to: to
                             'width: (+ 4 (string-length (number->string to)))))
         (label (container 'create-widget 'label 'text: units)))
    (spinbox 'set initial)
    (tk/bind spinbox '<FocusOut>
             (lambda ()
               (let ((current-value (string->number (spinbox 'get))))
                 (unless (and current-value
                              (>= current-value from)
                              (<= current-value to))
                   (spinbox 'set from)
                   ; Also reset slider position to make sure it still matches display
                   (slider 'configure 'value:
                           (frequency->position (string->number (frequency-int 'get))))))))
    (tk/pack spinbox label 'side: 'left 'padx: 2)
    (values container spinbox)))

We'll also use this function to create a field to specify the duration of the beep in milliseconds:

(define-values (duration-ext duration-int) (units-spinbox 1 600000 200 "ms"))
(tk/grid duration-ext 'row: 2 'column: 0 'padx: 20 'pady: 20)

Frequency is rather abstract. Let's also give the user the ability to select a musical note. We can store the corresponding frequencies for A4-G4 in an association list.

; Notes -> frequency (middle A-G [A4-G4])
; http://pages.mtu.edu/~suits/notefreqs.html
(define notes '(("A" 440.00)
                ("B" 493.88)
                ("C" 261.63)
                ("D" 293.66)
                ("E" 329.63)
                ("F" 349.23)
                ("G" 392.00)))

We'll give the user a drop-down menu. Whenever a note is selected from the drop-down menu, we'll look up the frequency in the association list and set it using the set-frequency helper function we created for the octave buttons.
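One loose end before we build the drop-down: nothing above ever attaches decrease-octave and increase-octave to the "<" and ">" buttons. A minimal sketch of my own, assuming button widgets accept the same 'configure sub-command with a 'command: option that the slider uses:

; Hook the octave callbacks up to the "<" and ">" buttons.
(lower-button 'configure 'command: decrease-octave)
(higher-button 'configure 'command: increase-octave)

Alternatively, the 'command: option could simply be passed when the buttons are created, just as we did for the slider. Now, on to the note selector.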
(define note-frame (tk 'create-widget 'frame))
(define note (note-frame 'create-widget 'combobox 'width: 2
                         'values: '("A" "B" "C" "D" "E" "F" "G")))
(tk/bind note '<<ComboboxSelected>>
         (lambda ()
           (set-frequency (cadr (assoc (note 'get) notes)))))
(define note-label (note-frame 'create-widget 'label 'text: "♪"))
(tk/pack note-label note 'side: 'left 'padx: 2)
(tk/grid note-frame 'row: 2 'column: 2 'padx: 20 'pady: 20)

Now, let's make some noise. There are Chicken Scheme bindings to the Allegro library. Allegro is a library primarily used by games for cross-platform graphics, input devices, and more. What we're interested in is the audio addon, which can be used to generate a tone from a sine wave. You'll need to install the Allegro library; make sure you also install the header files, which in some Linux distros are split into a separate package (e.g. liballegro5-dev on Debian). Also, install the Allegro egg (chicken-install -sudo allegro). I added the following lines near the top to import the Allegro bindings (and the chicken memory module, which we'll also use) and initialize Allegro.

(import (prefix allegro al:))
(import (chicken memory))

(define +pi+ 3.141592)

; Initialize Allegro and audio addon
(unless (al:init)
  (print "Could not initialize Allegro."))
(unless (al:audio-addon-install)
  (print "Could not initialize sound."))
(al:reserve-samples 0)

The Allegro egg is accompanied by a couple of examples, but none showing the use of the audio addon. The Allegro library itself comes with an example showing how to generate a saw wave, but being a C library, the example is, of course, in C. I ported that example to Scheme. I would have contributed the example back to the Allegro egg, but the repo is marked as "archived by the owner" and read-only on GitHub, so I've included the example in the repo alongside the rest of the code for this tutorial in case someone finds it useful.

Allegro is very low-level. You create an audio stream. In this case, the stream buffers eight fragments of 1,024 samples each at a frequency (often called the sampling rate) of 44,100 Hz (the sampling rate of an audio CD), which means there are 44,100 samples per second. Each sample is a 32-bit float (what is called the bit depth of the audio), and we only have one channel to keep things as simple as possible.

; Generate a tone using Allegro
(define (generate-tone frequency duration)
  (let* ((samples-per-buffer 1024)
         (stream-frequency 44100)
         (amplitude 0.5)
         (stream (al:make-audio-stream 8 samples-per-buffer stream-frequency 'float32 'one))
         (queue (al:make-event-queue))
         (event (al:make-event)))
    (unless (al:audio-stream-attach-to-mixer! stream (al:default-mixer))
      (print "Could not attach stream to mixer."))
    (al:event-queue-register-source! queue (al:audio-stream-event-source stream))
    (let event-loop ((n 0))
      ; Grab and handle events
      (when (and (< n (/ (* (/ duration 1000) stream-frequency) samples-per-buffer))
                 (al:event-queue-wait! queue event))
        (case (al:event-type event)
          ((audio-stream-fragment)
           (let ((buffer (al:audio-stream-fragment stream)))
             ; If the stream is not ready for new data, buffer will be null.
             (if (not buffer)
                 (event-loop n)
                 (begin
                   (fill-buffer buffer n) ; Placeholder
                   ; Repeat
                   (event-loop (+ n 1)))))))))
    (al:audio-stream-drain stream)))
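As a quick check on the loop condition above, here is the arithmetic for the 200 ms default duration (my numbers, not from the original post):

(/ (* (/ 200 1000) 44100) 1024) ; => 2205/256, roughly 8.6

So for a 200 ms beep the named let runs with n = 0 through 8, filling nine fragments of 1,024 samples each, which is slightly more than 200 ms of audio.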
An event loop waits for the audio stream to ask for another buffer. Our job is to fill that buffer with 1,024 32-bit floats at a time. In the code listing above, this is done by fill-buffer. That was just a placeholder, so I could break the code up into shorter, more easily explainable chunks. This is what goes in the place of (fill-buffer buffer n):

(let ((adr (pointer->address buffer)))
  (let loop ((i 0))
    (when (< i samples-per-buffer)
      (let ((time (/ (+ (* samples-per-buffer n) i) stream-frequency)))
        ; al:audio-stream-fragment returns a C pointer. Use the (chicken memory)
        ; module to operate on foreign pointer objects.
        ; Iterate over the array four bytes at a time since we have 32-bit depth.
        (pointer-f32-set! (address->pointer (+ adr (* i 4)))
                          (* amplitude (sin (* 2 +pi+ frequency time))))
        (loop (+ i 1)))))
  (unless (al:audio-stream-fragment-set! stream buffer)
    (print "Error setting stream fragment")))

The Allegro egg is a pretty thin wrapper around the Allegro library. The audio-stream-fragment procedure in the egg just passes along the C pointer that the corresponding al_get_audio_stream_fragment function from the C library returns. It would have been nice if the egg had offered some Scheme conveniences atop Allegro, like allowing us to pass a Scheme list or array to provide the buffer of samples. Since it doesn't, we'll use the chicken memory module to fill the C array starting at the C pointer returned by audio-stream-fragment. We use pointer->address to get the address of the pointer. A pointer references a byte of memory; we can reference the preceding or following byte by subtracting or adding 1 to the address. Since we are filling the array with 32-bit floats, and 32 bits is 4 bytes, we want to increment the address by 4 each time. Then we can set the value of the current location with pointer-f32-set!.

Then you just need to feed Allegro buffers of 1,024 samples at a time. The basic formula for a sine wave is A sin(2πft), where A is amplitude, f is frequency, and t is time:

(* amplitude (sin (* 2 +pi+ frequency time)))

Wire this up to a play button, and you're ready to make some noise.

(define play-button
  (tk 'create-widget 'button 'text: "Play"
      'command: (lambda ()
                  (generate-tone (string->number (frequency-int 'get))
                                 (string->number (duration-int 'get))))))
(tk/grid play-button 'row: 2 'column: 1 'padx: 20 'pady: 20)

Tk has been around a long time, and it shows. While it is stable and highly portable, even with recent improvements it just looks a little dated. At least on Linux, none of the themes I tried really fit in; there were always differences that made the Tk GUI stick out like a sore thumb. Still, if you're building an internal tool where it doesn't really matter how pretty it is, you can get Tk to work with a variety of Schemes in a variety of places. You can check out the entire example on GitHub. This started as a personal learning project to explore the state of GUI programming in Lisp and has become a series of tutorials on building GUIs with various dialects of Lisp.
This story was updated on Sept. 23, 2019. In the early 1800s, an estimated 50,000 grizzly bears inhabited the land stretching from the Great Plains to the Pacific Ocean — terrain that, at that time, was largely unpopulated. Hunted to the edge of extinction, grizzlies found refuge in the Endangered Species Act (ESA). Since being listed in 1975, some grizzly bear populations have experienced significant growth, with those in the Greater Yellowstone Ecosystem (GYE) restored to the point of delisting in 2017 — a feat led by the U.S. Fish and Wildlife Service (FWS) in conjunction with state wildlife agencies in Idaho, Wyoming and Montana. This recovery, however, was not without its growing pains. The American West, although still largely rural, is not the same unsettled territory it was when grizzlies dominated the landscape 200 years ago. Today, thousands of people live in and around GYE, along with a grizzly bear population that numbers in the 700s. As their distribution expanded, FWS “interpreted that as the bear population inside the monitoring area had reached carrying capacity,” says Hilary Cooley, grizzly bear recovery coordinator for FWS. “We have specific boundaries within which we monitor for practical reasons, and … over time, their population was increasing in there, and it leveled out,” she says. “But bears have expanded their distribution outside of these areas and will continue to do so.” FWS’s decision to delist GYE grizzlies in 2017, however, was and remains controversial. Some have argued that a population of 700 is not sustainable and that climate change and other factors are contributing to the loss of key food sources for the bears — specifically, whitebark pine. This argument is one FWS has heard before. “We had attempted to delist in 2007 — at that time, we thought the species was recovered — but we lost in litigation, mostly over … the whitebark pine; it produces pine nuts that are high in fat and pretty valuable to bears,” Cooley says. “There had been some big declines in those tree populations, especially in Yellowstone. But they produce pine nuts every other year, so bears were used to dealing with periods without them. “They also have more than 200 food species that they eat, and we didn’t feel that — we didn’t have evidence, either — the loss of these pine trees was going to be a threat to bears.” But with a ruling in favor of the plaintiffs, the GYE grizzly remained on the endangered species list. Further study of the whitebark pine and its effects on the population revealed that while the tree’s numbers had declined, Cooley says, “we didn’t see any negative population-wide effects to bears in that ecosystem that we can say are due to whitebark pine.” FWS’s decision to delist the bear in 2017 was once again met with resistance as both Idaho and Wyoming decided to facilitate hunts in 2018, which would have allowed sportsmen to harvest up to 23 animals — one in Idaho and up to 22 in Wyoming; Montana decided not to host a hunt. In reaction, some wildlife organizations and groups as well as individuals employed a variety of tactics to turn the tide of public opinion against the hunts. One movement, called “Shoot ’Em With a Camera,” encouraged wildlife photographers to apply for one of the available permits in order to keep them out of the hands of hunters.
In another effort, the Center for Biological Diversity posted a billboard that pictured a grizzly bear in the crosshairs of a rifle and read “I’m a Bear, Not a Trophy” — part of a larger campaign to garner petitions to restore the species to the endangered species list. Further inflaming an already tense debate is the language oft used by opponents: Grizzlies have been called “national treasures” and “icons”; sportsmen “trophy hunters”; and the hunt itself “inhumane,” “cruel” and “senseless.” Yet the reality of the need to coexist with bears requires diligent management — a notion that some repudiate, particularly those who aren’t directly affected by the expanding population. “We have to recognize that these predators do cause problems and, in some cases, bears actually kill people,” says Cooley. “The people who live in that ecosystem are experiencing the hardships that sometimes come with wolves or with bears, and so it’s a tough balance.” “Management of bears is needed in human occupied areas,” she adds. “And whatever that management is, people will disagree or have different ideas of [how to handle] it. My job is to make sure that we have brought the species back to a recovered level as required under the ESA and then allow the states to manage it.” But FWS’s decision to delist was once again challenged in the courts; one of the plaintiffs in the case was the Center for Biological Diversity. On August 30, 2018, two days before Wyoming’s first scheduled hunt, U.S. District Judge Dana Christensen granted a temporary restraining order to delay the hunt; he issued a second restraining order on Sept. 13, 2018, allowing him additional time to consider the case and make his final determination regarding whether the bears should be reinstated to the endangered species list. While Christensen reportedly expressed a desire to hear out and understand both sides of the issue, according to Montana Public Radio, the Wyoming Game and Fish Department expressed disappointment at that time with his temporary ruling. “Grizzly bears in the GYE have been above biological recovery goals since 2003, and Wyoming has a robust grizzly bear management program with strong regulations, protections and population monitoring for grizzly bears,” Renny MacKay, communications director with the department, said in an email. On Sept. 24, 2018, Christensen ruled in favor of environmental groups to reinstate federal ESA protections for the GYE grizzly bear — effectively blocking the hunts — stating that the bears still faced significant threats. He also argued that FWS erred when it attempted to delist one population without considering the recovery of other grizzly populations in the Lower 48. This passed the buck back to FWS, which announced in December its decision to appeal Christensen’s ruling, which was followed by appeals from Idaho, Wyoming and Montana. Further complicating matters, in February 2019, Montana introduced a joint resolution that would transfer management of all grizzly bears in Montana to the state and would block any court review of this action.
FWS officials have been critical of the move — which proposes creating a distinct population segment of grizzlies in the state — noting that the creation of six separate recovery zones was intentional and that the animal in each region is at varying levels of recovery. Passed by Congress in 1973, the ESA “provide[s] a framework to conserve and protect endangered and threatened species and their habitats,” according to the FWS website. Created to focus on recovery, the act is not meant to provide indefinite protections. “I think a lot of people forget or don’t realize that … when we delist, we’re not saying we don’t think there should be any more bears or they shouldn’t be in any more places,” says Cooley. “We’re saying we should delist because we’ve met what the ESA requires us to do — and it’s not to recover bears everywhere. That’s not our decision. We just get them out of the emergency room.” In 1975, when grizzly bears were added to the endangered species list, small populations remained in six areas of the U.S.: Greater Yellowstone, Northern Continental Divide, Cabinet-Yaak, Selkirk, North Cascade and Bitterroot. It was in these regions that FWS decided to focus its recovery efforts. In these zones, FWS established boundaries for what it refers to as demographic monitoring areas (DMAs). Larger than a recovery zone, Cooley says, these areas provide a buffer between “suitable habitat” for the bears and those that may not be so suitable (i.e., those with a high density of humans) as well as a consistent area in which to compare population size. When a decision is made to evaluate a population for delisting — an action FWS can initiate itself or that another entity can petition the agency to do — there are a number of criteria that must be assessed. “There are things like numbers of bears, distribution — we don’t want all the bears in one part of the ecosystem, … and we want reproducing females in a certain number of cells. Then there’s mortality limits, because human-caused mortality was one of the main reasons grizzly bears were listed in the first place,” Cooley says. “There is also habitat criteria. We quantify all of those things, and then we go through the list [to determine whether] they have met it or not.” Another key consideration is threat assessment. This process includes examining factors such as habitat destruction or modification, predation (preying on other animals), disease, and regulatory mechanisms. “So is there any legally enforceable mechanism that can ensure that mortality will not rise to a certain level?” says Cooley. “So [those are the] factors we go through [to discern the] current situation,” she adds, “is that still a threat to bears, has anything changed, what things do we have in place to make sure that won’t become a threat in the foreseeable future? If those all look good, that is what makes our decision. It is based on the best available science.” In GYE, Cooley says a bear population in excess of 700 — an estimate she calls “conservative”; evidence of animals expanding beyond the DMA; and other factors prompted FWS to evaluate the species for delisting. “[The population’s] been fairly stable for a number of years,” she says. “We had reproducing females within our distribution requirements. We had mortality levels controlled to the levels that we have identified in the recovery plan. 
So the population was just doing well, really well.” As the 2017 GYE Final Rule points out, under the ESA, “species recovery is considered to be the return of a species to the point where it is no longer threatened or endangered.” Contrary to some perceptions of the ESA, “Recovery under the act does not require restoring a species to carrying capacity, historic levels, or even maximizing density, distribution, or genetic diversity.” Once a species is removed from the endangered species list, management is turned over to the states. However, according to Cooley, FWS continues to do post-monitoring for a minimum of five years, and in this case, she says the agency anticipates monitoring for a longer period of time as bears are “sensitive species.” “We have to write a post-delisting monitoring plan, and usually we do that in cooperation with the state or whoever’s going to be doing most of the monitoring on the ground … to make sure that once a species is delisted, it remains recovered,” she says. “So it includes things like mortality limits, continuing to monitor those and produce reports. We’ll take a close look at their hunting [regulations]. We review those and decide whether [or not] we think it is going to be a problem. “There’s a whole suite of things the states are going to continue to monitor, and when those reports come out, we’ll evaluate them.” Should they see a spike in mortalities, or one of a number of other criteria, FWS will step in. “We have some specific triggers that are in our delisting rule that say if this happens or if this threshold is met, we’ll do an automatic status review,” says Cooley. While delisting removes certain protections for the bears and opens up the prospect of a hunt, Cooley says that many people seem to think that the handing over of management to the states means that “bears are just going to be shot left and right.” “Bears are listed as game species, and that means they can only be taken with a permit with special regulations. It’s not a free-for-all,” she says. “The ESA protections go away, but the states do a great job of managing them. They want to maintain bears, too.” The rise of the grizzly bear population in the last several decades has meant an increase in bear-human conflicts. According to the book Yellowstone Grizzly Bears: Ecology and Conservation of an Icon of Wildness — published by Yellowstone Forever, the official nonprofit partner of Yellowstone National Park — from 2002 to 2014, there were 2,497 conflicts reported in GYE; this is an average of approximately 200 per year. More than three-quarters of these incidents included bears killing livestock or damaging property while obtaining human foods, and all conflicts were distributed evenly among public and private lands. “There are certain areas in the GYE that we have bears interacting with livestock, especially the Green River; we have a lot of livestock depredation there,” says Cooley, adding that in the worst cases, FWS may euthanize. “We have a special rule attached to the bear delisting that we can euthanize bears, given certain situations, if depredation is a continued problem.” Although euthanizing is often a last resort, there is a specific procedure that must be followed when a producer is having an issue with a bear.
In most cases, a staff biologist with a state’s wildlife agency would visit the site to determine whether the predator was a grizzly. “They make a decision depending on a lot of things specific to that situation: has this guy had problems before, is this new, do we know anything about the bear involved? And they’ll go and try to trap it,” explains Cooley. “If it’s a first-time offense, sometimes they’ll just move the bear, or sometimes they can just harass the bear out of there, make him go away.” To try to prevent these situations, FWS and state agencies also work with producers to eliminate any attractants on their property, such as grain or bone piles. “We try and make sure that … even before people have problems, they clean up all of that so there are no attractants for bears,” Cooley says. “If the problem continues, then you trap the bear, and if this bear has depredated multiple, multiple times, the decision may be to euthanize it. It all just depends on the situation, and we really are judicious with euthanizing. Nobody likes to have to do that.” Some have questioned whether relocation of problem bears is a viable solution. While this is a tactic Wyoming Game and Fish has used for decades, MacKay said they have “limited options for places to move bears” as they cannot relocate them to the national parks or the Wind River Reservation. However, there may be the potential to move them to other ecosystems or tribal lands. “These are potential options we would be open to after a thorough discussion with those groups and other potentially affected entities, who have not approached us,” he said, “and the relocation would need to be to suitable habitat.” Others have expressed concern over human safety in and around Yellowstone National Park and Grand Teton National Park as the bear population grows. Most recently, in September 2019, three hunters were attacked by grizzlies in two separate incidents in Montana. To manage the rise in population levels following FWS’ last attempt at delisting the species, Wyoming Game and Fish decided to open a hunt — a decision that followed a series of public meetings in and around GYE. “[We] traveled around the state to get the public’s opinions on the future of grizzly bear management, including hunting,” said MacKay. “The public told us that they supported hunting in Wyoming.” Taking all public input into consideration, the department developed hunting regulations, which were available online for public review and comment for 53 days. Wyoming Game and Fish also hosted eight public meetings across the state to present the regulations and answer questions. “We incorporated as much public comment as possible into the hunting regulations while maintaining the sideboards that are required for maintaining a recovered grizzly bear population,” MacKay said. “[We] worked with Idaho and Montana to determine that a hunt could go forward and bear numbers would remain above objectives. These discussions also occurred with representatives of the U.S. Forest Service, Yellowstone National Park, Grand Teton National Park and Wind River Reservation. After hunting regulations were finalized by the department, they were presented to our commission and approved in May of 2018.” The state then issued licenses for seven designated areas. In six of these — those within the DMA — the state issued a combined total of 10 tags.
“The total quota across all those areas is one female or 10 males,” MacKay said. “Only one hunter would be allowed out at a time, and so the hunt would stop when either one female is killed or 10 males.” However, in area seven, which is outside of the DMA and thus is not considered suitable habitat for bears, Wyoming Game and Fish issued 12 licenses, allowing for the harvesting of up to 12 bears of any sex. “So the total would be a maximum of 22, and a minimum of 13 hunters who had licenses,” MacKay explained. According to Cooley, the states “have agreed to look at all mortalities, [including] human caused, natural, accidental, management removals and … will limit those to a certain level depending on the population size.” “They’re not going to have a hunt that takes more bears than is sustainable for the population,” she adds. Wyoming Game and Fish considers regulated hunting not only a “pragmatic and cost-effective tool for managing populations at desired levels,” but it also “generates public support, ownership of the resource and funding for conservation,” according to the agency’s hunting plan. For Cooley, the most important factor in grizzly bear deaths is not the causes but the bigger picture. “People might have a problem with hunters taking bears. I understand that, but whether it is a hunt or a management removal or some other mortality, that’s not really the issue,” she says. “We have to limit mortalities period.” “I don’t think we are going to know if it’s going to have any effect … until after the hunt,” Cooley adds. “This is a really emotionally charged issue, so I have to say, let’s look at it scientifically, and scientifically, we look at those mortalities no different from, say, management removals; we would allow those mortalities in certain situations for conflict bears.” Some critics, however, argue that the decision to delist the GYE grizzly in the first place was in reaction to pressure from the oil, gas and timber industries — and the politicians they helped elect. In an article on thewildlifenews.com titled “Why Delisting of Grizzly Bears is Premature,” ecologist George Wuerthner says, “Without ESA listing, environmentally destructive practices will have fewer restrictions, hence greater profits at the expense of the bear and its habitat.” Others have expressed dismay over what they claim would be the murder of an icon, while others say that the killing of grizzlies would lessen their enjoyment when visiting the area. Furthermore, critics have argued that allowing a hunt could cause tourism in the area to take a hit; National Park Service data indicate that, in 2016, visitors to Yellowstone spent more than $500 million in the surrounding communities. Todd Mangelsen, a Wyoming-based nature photographer who won one of the hunting tags and plans to only shoot grizzlies with his camera, told NPR, “The public has the right to see bears, and the hunters do not have the right to take that away from the public.” Others contend that hunters have done more for conservation than many other Americans. Even President Theodore Roosevelt, who was an avid outdoorsman, hunter and conservationist, recognized the contributions of hunters. Although the fate of the GYE grizzly was ultimately decided by a federal judge, Cooley stands by FWS’s decision to delist the bears. “There are a lot of people [who oppose], and then you have the science in the middle,” says Cooley. “And the ESA is a little tricky because everything is not completely defined.
There’s some subjectivity, and so the agencies have some discretion. We have to make a good scientific argument, and some people don’t think it’s good enough. For bears, a lot of people … want federal protections continuously.” But such protections come at a high cost. While an exact figure regarding the total amount of recovery for the GYE grizzly bear is hard to come by, the 1993 Grizzly Bear Recovery Plan estimated that the effort to restore populations in all six regions would cost a total of $3,773,685 — an amount largely covered by taxpayer dollars. For GYE alone, MacKay said Wyoming Game and Fish dedicated $50 million toward recovery efforts; that’s not including the money paid in by Idaho or Montana. Overall, data from the Government Accountability Office puts the average cost for the restoration of an endangered species at around $15.9 million. Bears, Cooley says, are an expensive species to manage. “We spent millions of dollars on [their recovery]. These emotional species that are getting a lot of public attention get all the money,” she says. “We also have other species that people don’t even think about that we don’t have any money to recover. So it comes back to, we need to meet the ESA requirement and then hand it over to the states because the ESA is not for long-term management. We need that funding to get other species out of the emergency room.” But for these forgotten species, the future looks bleak. At the same time that concern over the impact of climate change, pollution and other factors on animals and their habitat is growing, FWS is faced with the continued cost of litigation to defend its decision to delist species such as the grizzly bear. In fiscal year 2010, according to a 2011 Washington Post article, “The Fish and Wildlife Service spent so much of its $21 million listing budget on litigation and responding to petitions that it had almost no money to devote to placing new species under federal protection.”
A subcontractor agreement is a legal document that a general contractor uses to employ a subcontractor on a building project. This contract is a legally binding agreement that describes the parameters of a smaller task inside the larger project that will be done by an entity other than the general contractor. Subcontractor agreements are often entered into between the contractor and another firm, with no involvement from the customer. If you are working as or with a subcontractor, you must know exactly what you will be dealing with before commencing. Many lead contractors that appoint third-party contractors to complete their work sign these agreements for assurance. These documents are contractual agreements between the two parties that enable smooth functioning, and if something goes wrong, they let you resolve disputes according to the agreed terms. Read on for more insight into what a subcontractor agreement covers.

A subcontractor agreement is similar to the contract drafted between an employer and an employee; the most prominent difference is that the two involve a different scope of work and set of responsibilities. The most basic purpose of drafting this agreement is to spell out which tasks are being subcontracted. The agreement must also state which materials the subcontractor must supply and which ones the supplier will provide. For instance, a subcontractor agreement between a trainer and an outsourced trainer will include details about where and when the training will occur, the number of trainers that will serve, the responsibilities of each trainer, and other provisions such as training areas. In the construction industry, a lead contractor might subcontract tasks such as electrical work to an electrical firm; that agreement would usually cover light sockets, installations, and other details, and it can also cover supply details such as cabling. Altogether, the basic idea of a subcontractor agreement is to outsource work efficiently: tasks are stated clearly, and both parties need to agree to the terms before starting work. Doing so reduces risk and lays down clear terms in case of disputes.

A subcontractor is a person or company who undertakes work for a contractor as part of a bigger project. Contractor-subcontractor agreements are most common in the construction industry, where specialised abilities are rented out to accomplish large construction projects. A hired project manager, for example, may employ a bricklaying business to construct a new school in a new suburban development. A subcontractor agreement is thus a legal document between a contractor and a subcontractor that lays out the terms and conditions of the work to be completed. Subcontracting assigns contractual work to third parties and is commonly found in transport, construction, and similar industries. It can offer real benefits to main contractors. First of all, hiring a subcontractor can save on heavy expenses: you can hire these individuals on a short-term basis while the contractor runs a long-term project, and subcontractors can be a cheaper alternative to full-time workers. Next, subcontractors also have expertise in specific niches and can deliver specialist-quality services.
Subcontractors are highly beneficial for your work, and they can deliver specialised services that regular employees cannot provide. When a contractor employs a subcontractor, who then contracts another subcontractor to conduct work as part of a huge project, things can get quite complex. The danger here is that information will be misread, money will go missing, and work will not be finished on schedule. All of these scenarios have the potential to severely damage a project. A subcontractor agreement ensures that all parties involved keep operations tight, focused, and consistent.

Depending on the size and scope of your company and projects, you may need to employ subcontractor agreements on a regular basis. A subcontractor agreement will be required each time you need to outsource services to finish a task. As a contractor or project manager, you most certainly have a solid sense of the gaps that need to be filled to complete a project. Even before taking on a new customer, you should reach out to your network to find subcontractors to ensure the job is completed on time. A subcontractor agreement will also be required if your present subcontractors are no longer able to execute their duties and must be replaced. Finally, if extra time is needed to finish the task, you may need to employ subcontractor agreements to prolong contracts. A qualified project manager understands the project's goal while also keeping an eye on all of the moving components that make the project a reality.

Every subcontractor agreement must be tailored to the services that are requested. When there is too much information, the subcontractor's job becomes muddled; when there is insufficient information, it is unclear what each subcontractor is expected to do. The key to a high-quality subcontractor service agreement is that it articulates the scope of work required from the subcontractor precisely.

The details of this agreement can differ from industry to industry. Some parties that hire the main contractor may not need to revise their agreements for a long time, so a one-size-fits-all agreement is the most practical option for them. In other cases, parties may need a detailed and specific agreement to lay out the job details. The must-haves usually come down to both parties' needs and requirements. If you need more detail in your subcontractor agreement, you might need expert assistance, so engage a well-qualified lawyer to help you complete the task properly.

Tips for writing a Subcontractor Agreement

Mention all the necessities of the project
Outlining the portion of the assignment that comes under the subcontractor's duties is a must. List details of how the subcontractor's duties fit into the agreement. For instance, if the need is to write copy for a magazine, include the design and other layouts to better convey the content needed.

Mention the provisions and due dates
As you will have to attach a copy of the agreement to your work, you should aim to complete it at least five days before the due date. That leaves time for reviewing and editing as well.

Mention the payment terms
You will first have to receive payment from your clients in order to pay the subcontractor. Thus, if the terms in your agreement with the client are 14 days, you may prefer to keep the payment window for the subcontractor at 20 to 30 days.
Doing so gives you ample time to receive the client's payments, deposit them if needed, and pay the subcontractor on time.

Draft the contract and send it for review
The contract should not be very long; it can be crisp paperwork that lists all necessary details. Differentiate both parties well in the contract so that all your terms and aspects are clear. Remove confusion and review it well before finalizing the contract.

Mention how you will manage disputes
Rather than merely relying on legalities, which are quite costly, you can settle disputes and breaches in your own way. Doing so gives both parties enough space and time to work calmly and agree on a settlement.

Final review
Once you have completed the document, a final review is a must. You and the other party must agree to all the listed terms and fully understand the subcontractor agreement. Look for errors and give it a final touch-up before proceeding.

If you're a busy contractor, you'll almost certainly use the same subcontractors again and again. Your connections are one of your most valuable assets, allowing you to charge your clients extra for access to your database of trustworthy professionals. That is why it is crucial to develop these connections and guarantee that they are paid regularly and within a fair time frame. Nothing will wreak havoc on your relationships more than failing to pay your subcontractors. When creating a subcontractor agreement, be specific about what you're willing to pay and when. Will you, for example, pay a percentage upfront and a percentage upon completion? Or will you pay in instalments? Whatever way you pick, make it very clear in your agreement and keep your commitments.

Hiring subcontractors is quite beneficial. It can help you save on additional expenses and get the finest possible services for your work. However, a suitable subcontractor agreement is necessary for success. Use the given guidelines to create the best possible agreement for seamless work, and make the most of your business. Use Awesome Sign to get started, and sign your PDF documents effortlessly.

This Master Subcontractor Agreement (this “Agreement” or this “Subcontractor Agreement”) is entered into and made effective as of (add a corresponding date) (the “Effective Date”), by and between: [Sender.Company], the company with offices located at ................................................................ (add a corresponding address) (“Prime”), and [Client.Company], the company with offices located at ............................................................... (add a corresponding address) (“Subcontractor”).

Prime has existing or prospective customer contracts for which Prime may require support; and Subcontractor has been identified by Prime as a potential subcontractor as it has certain expertise and capabilities which may be required under such contracts; and the parties wish to set forth the terms and conditions upon which any Subcontractor support may be provided to Prime.

NOW THEREFORE, in consideration of the foregoing, and of the mutual covenants and agreements set forth herein, the receipt and sufficiency of which is hereby acknowledged, the parties, intending to be legally bound, agree as follows:

The following capitalized terms will have the following definitions under this Agreement:

“Contract” means Prime’s contract with the Customer for which the Subcontractor may provide support pursuant to Task Orders issued under this Agreement.

“Customer(s)” means customers of Prime for whom Services or Deliverables are to be performed under a Task Order.
“Deliverables” means those items, products and materials to be provided to Prime by Subcontractor, as specified on a Task Order.

“Firm Fixed Price (FFP)” means an agreed upon fixed price for the Services and Deliverables to be provided pursuant to a Task Order.

“Intellectual Property Rights” means world-wide, common-law and statutory rights associated with (i) patentable inventions, patents and patent applications, divisions, continuations, renewals, reissuance and extensions thereof, (ii) copyrights, copyright applications and copyright registrations, “moral” rights and mask work rights, (iii) the protection of trade and industrial secrets and confidential information, and (iv) trademarks, trade names, service marks, and logos (collectively “Trademarks”).

“Open Source” means any software having license terms that require, as a condition of use, modification, or distribution of the software, that such software or other software combined or distributed with such software be (i) disclosed or distributed in source code form, (ii) licensed for the purpose of making derivative works, and (iii) redistributable at no charge.

“Other Direct Costs” means costs normally incurred in the operation of a business, such as postage, telephone and internet charges, office supplies and overhead.

“Party or Parties” means the signatories to this Agreement when referred to, respectively, individually or collectively.

“Pre-Existing Intellectual Property” means any Intellectual Property that has been conceived or developed by either party or any third party before Subcontractor renders any services under this Agreement or any Task Order, or that is conceived or developed at any time wholly independently of the Services and Deliverables.

“Services” means all work performed by Subcontractor under this Agreement pursuant to a Task Order, as well as materials used by Subcontractor in performing its obligations under a Task Order.

“Task Order” means a written document executed by the Parties authorizing Subcontractor to perform Services and/or provide Deliverables in accordance with such Task Order. For clarity, any contract for services entered into through an online freelance or similar website shall be construed as a Task Order under the terms of this Agreement.

“Time and Materials (T&M)” means Services performed at an hourly rate wherein the actual cost of hours worked and materials used in the performance of the Services are charged to Prime. Equipment and other depreciable assets are not to be charged.

Prime shall have no obligation to award any work or Task Order under this Agreement. However, should any work be awarded to the Subcontractor, the parties agree that such work will be subject to the terms and conditions of this Agreement. The Subcontractor shall, in accordance with Task Orders issued by Prime and agreed to by Subcontractor, perform work assignments to provide expert Services, advice, and/or Deliverables. A Task Order shall be considered in effect and duly authorized only upon written agreement of both Parties. Each Task Order shall provide, at a minimum, the following data:

All Task Orders incorporate the terms and conditions of this Subcontract, whether stated explicitly or not. In the event of conflict or inconsistency between a Task Order and this Agreement, the terms and conditions of this Agreement shall take precedence, unless specifically stated otherwise in the Task Order.
Unless otherwise terminated as provided herein, the term of this Subcontractor Agreement shall start on the Effective Date and end (add a corresponding number) year(s) thereafter. Should a Task Order be authorized during the term of this Agreement which provides for completion subsequent to the end date of this Agreement, then the Task Order shall be additionally construed as a written modification of this Agreement, which extends the end date of this Agreement to coincide with the Task Order completion date.

Prime shall have the right at any time to set off any amounts now or hereafter owing by Subcontractor to Prime under any Task Order or otherwise, against amounts which are then or may thereafter become due or payable to Subcontractor under this Agreement.

Upon notice to Subcontractor, Prime may change any requirement in a Task Order relating to undelivered Services and/or Deliverables. If such change reasonably affects the price or schedule, the Subcontractor will notify Prime within (add a corresponding number) business days of such, and the parties will negotiate an equitable adjustment in the fees, charges and/or schedule and make appropriate amendments to the applicable Task Order. Prime shall have no obligation to the Subcontractor for any changes to a Task Order that were not authorized in writing by Prime.

Subcontractor understands that by signing this Agreement, it is appointing Prime as an exclusive representative with respect to Customers to whom Subcontractor is introduced and/or to whom Subcontractor is assigned by Prime, as to the subject matter of Prime’s retention of Subcontractor hereunder. Subcontractor agrees that the relationship between Subcontractor and any such Customers, for purposes of this Agreement and whether or not this Agreement or any Task Orders hereunder is/are terminated, begins upon the initial disclosure of a potential assignment to Subcontractor by Prime. During the term of this Agreement and for (add a corresponding number) months following termination of this Agreement, Subcontractor shall not, directly or indirectly, either as an organization, as an individual, as an employee or member of a partnership, or as an employee, officer, director or stockholder of any corporation, or in any other capacity, solicit or accept, or advise anyone else to solicit or accept, any business that competes directly with Prime from any such Customers, or from the personnel of any Customers to whom Subcontractor was introduced pursuant to this Agreement. In addition, Subcontractor shall not directly or indirectly use or make available to any person, firm, or corporation the knowledge of the business of Prime gained by Subcontractor during the term of this Agreement.

and liability incurred by Prime and any Customer due to failure of Subcontractor to meet any of the requirements in any of the third party licenses.

information, knowledge, or data disclosed by Subcontractor must be made known to Prime as soon as practicable and in any event agreed upon before execution of a Task Order.

Subcontractor represents that its execution and performance of this Agreement does not conflict with or breach any contractual, fiduciary or other duty or obligation to which Subcontractor is bound. Subcontractor shall not accept any Task Order from Prime or work from any other business organizations or entities which would create an actual or potential conflict of interest for the Subcontractor or which is detrimental to Prime’s business interests.
Subcontractor may not subcontract, either in whole or in part, Services authorized by a Task Order without the prior written consent of Prime. If Prime consents to subcontracting of any portion of the work to be performed under a Task Order, the Subcontractor must first obtain, from each subcontractor, a written agreement that is the same as, or comparable to, the following Sections of this Agreement: Customer Interactions, Exclusivity, Intellectual Property Rights, Confidentiality, Conflict of Interest, Subcontracting, Warranties, Indemnification, Limitation of Liability, Insurance and any other flow-down provisions contained in the applicable Task Order.

Subcontractor warrants that:

Subcontractor shall defend, indemnify, protect and hold harmless Prime, the Customer, and each of their officers, employees and agents from and against any and all losses, demands, attorneys’ fees, expenses, costs, damages, judgments, liabilities, causes of action, obligations or suits resulting from (1) any negligent act or omission or willful misconduct of Subcontractor, its personnel or approved subcontractors, (2) the breach of any provision of this Agreement by Subcontractor or its personnel or any approved subcontractors of Subcontractor, or (3) any claim that Intellectual Property provided by the Subcontractor under this Agreement infringes or misappropriates any third party Intellectual Property Right.

Subcontractor shall maintain adequate insurance coverage and minimum coverage limits for its business as required by any applicable law or regulation, including Workers’ Compensation insurance as required by any applicable law or regulation, or otherwise as determined by Subcontractor in its reasonable discretion. Subcontractor’s lack of insurance coverage shall not limit any liability Subcontractor may have under this Agreement or any Task Order issued hereunder.

1. Assignment. Subcontractor shall not assign any rights under this Agreement or any Task Order issued hereunder, and no assignment shall be binding without the prior written consent of Prime. Subject to the foregoing, this Agreement will be binding upon the Parties’ heirs, executors, successors and assigns.

2. Governing Law. The Parties shall make a good-faith effort to amicably settle by mutual agreement any dispute that may arise between them under this Agreement. The foregoing requirement will not preclude either Party from seeking injunctive relief as it deems necessary to protect its own interests. This Agreement will be construed and enforced in accordance with the laws of the Province of province, Canada, including its recognition of applicable federal law, but excluding such jurisdiction’s choice of law rules. The Parties consent to the exclusive jurisdiction and venue in province, Canada for the enforcement of any arbitration award or other judicial proceeding concerning this Agreement. Any judgment issued by such court shall award the prevailing Party its reasonable attorney’s fees and related costs. Both Parties agree that the occurrence of a dispute shall not interfere with either Party’s performance or other obligations under this Agreement.

3. Notice. All notices required under this Agreement will be in writing and will be sent to the address of the recipient specified above.
Any such notice may be delivered by hand, by overnight courier or by first class pre-paid letter, and will be deemed to have been received: (1) if delivered by hand - at the time of delivery, (2) if delivered by overnight courier - 24 hours after the date of delivery to the courier, with evidence of delivery from the courier, (3) if delivered by first class mail - three (3) business days after the date of mailing.

4. Injunctive Relief. Subcontractor acknowledges it would be difficult to fully compensate Prime for damages resulting from any breach by Subcontractor of the provisions of the following Sections of this Agreement: Exclusivity, Intellectual Property Rights, Confidentiality, Subcontracting, and Warranties. Accordingly, in the event of any actual or threatened breach of such provisions, Prime will, in addition to any other remedies that it may have, be entitled to temporary and/or permanent injunctive relief to enforce such provisions.

5. Severability. The Parties recognize the uncertainty of the law with respect to certain provisions of this Agreement and expressly stipulate that this Agreement will be construed in a manner that renders its provisions valid and enforceable to the maximum extent possible under applicable law. To the extent that any provisions of this Agreement are determined by a court of competent jurisdiction to be invalid or unenforceable, such provisions will be deleted from this Agreement or modified so as to make them enforceable, and the validity and enforceability of the remainder of such provisions and of this Agreement will be unaffected.

6. Independent Contractor. Nothing contained in this Agreement shall create an employer and employee relationship, a master and servant relationship, or a principal and agent relationship between Subcontractor and/or any Subcontractor employee(s) and Prime. Prime and Subcontractor agree that Subcontractor is, and at all times during this Agreement shall remain, an independent contractor. The Subcontractor shall at all times be responsible for the actions of Subcontractor’s employees, agents, and subcontractors, shall be responsible for any applicable taxes or insurance, and shall comply with any applicable public laws or regulations.

7. Force Majeure. Neither Party shall be liable for any failure to perform under this Agreement when such failure is due to causes beyond that Party’s reasonable control, including, but not limited to, acts of state or governmental authorities, acts of terrorism, natural catastrophe, fire, storm, flood, earthquakes, accident, and prolonged shortage of energy. In the event of such delay the date of delivery or time for completion will be extended by a period of time deemed reasonably necessary by both Subcontractor and Prime. If the delay remains in effect for a period in excess of thirty days, Prime may terminate this Agreement immediately upon written notice to Subcontractor.

8. Entire Agreement. This document and all attached or incorporated documents contain the entire agreement between the Parties and supersede any previous understanding, commitments or agreements, oral or written. Further, this Subcontractor Agreement may not be modified, changed, or otherwise altered in any respect except by a written agreement signed by both Parties.

IN WITNESS WHEREOF, this Subcontractor Agreement was signed by the Parties under the hands of their duly authorized officers and made effective as of the Effective Date.

[Sender.FirstName] [Sender.LastName] [Client.FirstName] [Client.LastName]
Note that there are several function keys that can be used during viewing and editing. F1-F9 are hard coded and cannot be reassigned like bindkeys. On some platforms, e.g. the Mac, the function keys are by default assigned to special actions by the OS (for example raising/lowering the brightness of the display). It is possible to switch to normal Fn key operation (e.g. on the Mac via the Settings->Keyboard dialog).

Return key - completes a Create Path, Create Polygon, Create MPP, Create Wire or Reshape command. The current cursor position is used as the last point. This is usually easier than double clicking to complete these commands.

F2 key - toggles the Selection Options 'Gravity Mode' on/off.

F4 key - toggles the Selection Options Full and Partial selection modes.

F5 key - shows the Enter Coordinate dialog. For any command that normally takes a mouse click to enter a coordinate, F5 allows the user to specify the coordinates through the Enter Coordinates dialog box instead. For example, if you want to create a rectangle with coordinates (0.0, 0.0) (2.0, 3.0), click on the Create Rectangle icon, then press F5, enter the first pair of coordinates and press OK. Then press F5 again and enter the second pair of coordinates.

F6 key - toggles the Selection Options 'Display Connectivity' mode on/off.

F7 key - toggles the Selection Options 'Selection Type' mode between Object and Net.

F8 key - toggles the Display Options 'Immediate Move' mode on/off.

Also note that double clicking the left mouse button will add a final path/mpp point, or add a final polygon point, or terminate the Reshape command.

Undoes the last edit made. Multiple undos can be carried out. Currently the only operations that can be undone are Delete, Move, Move Origin, Copy, Rotate, Stretch, Create, Merge, Chop, Flatten, Align, Reshape.

Redoes the last undo. Multiple redos can be carried out. Currently the only operations that can be redone are Delete, Move, Move Origin, Copy, Rotate, Stretch, Create, Merge, Chop, Flatten, Align, Reshape.

Copies the selected set into a yank buffer, for use with the Paste command.

Pastes the contents of the yank buffer into the current cellView.

Deletes the current selected set. Deletes can be undone.

Copies the current selected set. The F3 key will toggle the Copy options dialog. Copy Net info, if checked, will copy a shape's net connectivity. Snap Mode can be set to Manhattan, Diagonal or Any Angle. If a shape is being copied, Change Layer will allow the layer of the new shape to be changed. If Rows or Cols is set to a number greater than 1, an array of objects will be copied with the spacing set by Row Spacing and Col Spacing. If Mirror X is pressed (or the 'x' key) during a copy, the object is mirrored in the X axis. If Mirror Y is pressed (or the 'y' key) during a copy, the object is mirrored about the Y axis. If Rotate is pressed (or the 'r' key), the object is rotated 90 degrees anticlockwise. If infix mode is on, the current cursor position is used for the reference coordinate. Else you will be prompted to enter the reference coordinate. During a copy operation, the object(s) are shown as outlines and delta coordinates (dX/dY) from the initial position are shown on the status bar.

Moves the current selected set. The F3 key will toggle the Move options dialog. Snap Mode can be set to Manhattan, Diagonal or Any Angle. If a shape is being moved, Change Layer will allow its layer to be changed. If moving instances, Snap insts to rows will snap instances to row objects if they exist. If Mirror X is pressed (or the 'x' key) during a move, the object is mirrored in the X axis. If Mirror Y is pressed (or the 'y' key) during a move, the object is mirrored about the Y axis. If Rotate is pressed (or the 'r' key), the object is rotated 90 degrees anticlockwise. If infix mode is on, the current cursor position is used for the reference coordinate. Else you will be prompted to enter the reference coordinate. During a move operation, the object(s) are shown as outlines and delta coordinates (dX/dY) from the initial position are shown on the status bar.
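The Mirror X, Mirror Y and Rotate options above are simple 2D transforms, and Rows/Cols copying is just a regular placement grid. The following Python sketch illustrates that geometry only; it is not Glade's scripting API, and the exact mirror convention (which coordinate each mirror flips) is an assumption for illustration.

```python
# Illustrative geometry only, not Glade's API. Assumed convention:
# Mirror X reflects across the X axis, Mirror Y across the Y axis.

def mirror_x(pt):
    """(x, y) -> (x, -y): reflect about the X axis."""
    x, y = pt
    return (x, -y)

def mirror_y(pt):
    """(x, y) -> (-x, y): reflect about the Y axis."""
    x, y = pt
    return (-x, y)

def rotate_ccw(pt):
    """(x, y) -> (-y, x): rotate 90 degrees anticlockwise about the origin."""
    x, y = pt
    return (-y, x)

def array_copy(origin, rows, cols, row_spacing, col_spacing):
    """Return the placement origins of a rows x cols copy array."""
    ox, oy = origin
    return [(ox + c * col_spacing, oy + r * row_spacing)
            for r in range(rows) for c in range(cols)]

# Example: a 2 x 3 array spaced 10 units in X and 5 units in Y.
print(array_copy((0.0, 0.0), rows=2, cols=3, row_spacing=5.0, col_spacing=10.0))
```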
Moves the current selected set by the distance specified in the Move By dialog.

Nudges the current selected set in small increments in X or Y. The distance to nudge by is set according to the Nudge By choice. User deltaX/Y takes the values of X delta and Y delta as the nudge distance. Snap Grid sets the nudge distance to the current X and Y snap distances. Minor Grid sets the nudge distance to the current X and Y display grid. Major Grid sets the nudge distance to the minor display grid times the major grid multiplier. After invoking the command, the left/right/up/down arrow keys move the selected set according to the nudge distance, until the command is aborted by pressing the ESC key. While the nudge command is active, zooming, selecting/deselecting etc. is available - but not panning via the arrow keys.

Stretches the current selected edge or vertex. The F3 key will toggle the Stretch options dialog. Snap Angle sets the allowed stretch direction. If objects as well as edges or vertices are selected, they are moved by the stretch distance. If Lock diagonals is checked, diagonal edges will be locked to 45 degrees. Otherwise moving an edge adjacent to a diagonal may make the diagonal edge become any angle. If Lock endpoints is checked, then stretching a path segment at the beginning or end of the path will split the path at the start or end vertex, keeping the start/end vertex fixed and stretching the other part of the split segment. If infix mode is on, the current cursor position is used for the reference coordinate. Else you will be prompted to enter the reference coordinate. During a stretch operation, the object edge(s)/vertex(vertices) are shown as outlines and delta coordinates (dX/dY) from the initial position are shown on the status bar.

Reshapes the currently selected edge of a polygon or path. First select an edge of a polygon or the centreline of a path (in edge selection mode). Then enter the vertices you wish to add to the edge. The original start and end points of the edge remain the same; vertices can be added with diagonal, manhattan or any angle snapping. Double click or press return to complete reshaping the edge. Pressing backspace will back up one vertex. Although Reshape only works with paths and polygons, you can convert any object, e.g. a rectangle, to a polygon using the Edit->Convert to Polygon command.

Rounds the corners of a rectangle or polygon. You must first select a shape to round. Corner radius sets the radius of curvature in microns. Number of segments per corner sets the precision of the generated curve, which is made up of segments (straight lines). Snap Grid sets the manufacturing snap grid to avoid off-grid vertices; if no snapping is required, set the value to the user database resolution (usually 0.001um). If Delete original shape is checked (the default), the original shape is deleted, else it is kept.
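The Round Corners options map onto straightforward arc arithmetic: each 90 degree corner is replaced by segments + 1 vertices lying on a circular arc, snapped to the manufacturing grid. Below is a minimal Python sketch of that calculation, assuming a plain 90 degree corner; it is an illustration, not Glade's implementation.

```python
import math

def snap(value, grid):
    """Snap a coordinate to the manufacturing grid (e.g. 0.001 um)."""
    return round(value / grid) * grid

def rounded_corner(center, radius, start_angle_deg, segments, grid=0.001):
    """Approximate a 90-degree arc with 'segments' straight lines.

    'center' is the centre of curvature, 'start_angle_deg' the angle of the
    first arc point; the corner contributes segments + 1 vertices.
    """
    cx, cy = center
    pts = []
    for i in range(segments + 1):
        a = math.radians(start_angle_deg + 90.0 * i / segments)
        pts.append((snap(cx + radius * math.cos(a), grid),
                    snap(cy + radius * math.sin(a), grid)))
    return pts

# Example: round the lower-left corner of a shape whose corner sits at (0, 0)
# with a 1.0 um radius and 8 segments; the centre of curvature is then at
# (1.0, 1.0) and the arc runs from 180 to 270 degrees.
print(rounded_corner((1.0, 1.0), 1.0, 180.0, 8))
```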
Adds a vertex to a selected path or polygon at the point given by the cursor. The vertex that has been added is selected so it can be moved using a Stretch command.

Rotates the current selected set about a point, which the user is prompted to enter. Mirror about X axis mirrors the objects in X, Mirror about Y axis mirrors the objects in Y. Rotate CCW rotates the objects counterclockwise. Rotate (instances and shapes) rotates instances according to the transform selected. Rotate by angle rotates shapes by any angle from -360.0 to +360.0 degrees; a positive angle corresponds to a clockwise rotation. Only shapes can be rotated by any angle; rectangles and squares get converted to polygons and are then rotated, while paths and polygons are maintained and their vertices are rotated. Note: cell placement orientation can be changed by querying the instance's properties and changing the orientation.

Moves the origin of the current cell. Click on the point that you want to make the new origin, and all object coordinates in the current cell will be changed to make this point (0, 0).

Converts selected shapes into polygons. Useful in conjunction with the Edit->Reshape command above.

Performs boolean operations on layers. Layer1 and Layer2 are input layers, and Output Layer is the output layer. By default layer data is processed from the current open cellView, however it is possible to set the lib/cell/view names of the cells containing Layer1 data and Layer2 data. Operations that can be performed are two layer AND, two layer OR, single layer OR (merge), single layer NOT, two layer NOT, two layer XOR, sizing and up/down sizing (first size up by a given amount, then size down by the same amount - useful for removing small gaps or notches) and selection (select all shapes on a layer that touch shapes on another layer). Mode allows either only selected shapes on Layer1 and, if used, Layer2 to be processed, else all shapes for the layer(s) will be processed. If Output data as trapezoids is checked, the resulting layer is converted into trapezoids rather than complex polygons. If Hierarchical is checked, the design hierarchy is flattened and all shapes on the layer(s) are processed; else just shapes in the top level cellview are processed. The Output Cellview is the destination for the generated data. By default this is set to the current cellview, but can be any cellview; if the cellview does not exist it will be created. If Size Output Layer is checked, the output layer can also be sized by an amount (except for the operation Size lyr1).
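For readers unfamiliar with the two-layer operations listed above, the following Python sketch shows what AND, OR, NOT, XOR and sizing produce for two overlapping rectangles. It uses the shapely library purely as a stand-in geometry engine; Glade's own engine, trapezoid output and hierarchy handling are not modelled here.

```python
from shapely.geometry import box

# Two overlapping rectangles standing in for shapes on Layer1 and Layer2.
layer1 = box(0, 0, 10, 10)
layer2 = box(5, 5, 15, 15)

results = {
    "AND (intersection)":          layer1.intersection(layer2),
    "OR (union)":                  layer1.union(layer2),
    "NOT (layer1 minus layer2)":   layer1.difference(layer2),
    "XOR (symmetric difference)":  layer1.symmetric_difference(layer2),
    "SIZE (grow layer1 by 1.0)":   layer1.buffer(1.0, join_style=2),  # mitred
}

for name, geom in results.items():
    print(f"{name}: area = {geom.area}")
```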
Performs boolean operations on layers using a tiling algorithm, processing the data tile by tile. Useful when the data is too large to process with Edit->Boolean Operations... Layer1 data and Layer2 data specify the input layer sources. The cellView for each layer defaults to the current displayed cellView, but can be changed, e.g. to compare two cells using an XOR operation on the same layer. Operation specifies the boolean operation to be performed. Currently only merge (single layer OR), OR, AND, ANDNOT, NOT, XOR and SIZE operations are supported. The Output Cellview specifies the cellView that output shapes will be created on, according to the output layer specified. If Hierarchical is checked, the cellView's data is flattened before the operation. Tile size can be determined automatically if Tiling Mode is set to Auto. Else the tile width and height can be specified if Tiling Mode is set to Manual. The larger the tile size, the more physical memory will be used. For large designs with many levels of hierarchy, computing the best tile size can take a long time - so in this case manually setting the tile sizes is preferable. Typically a starting point of 500-1000um should be acceptable. Setting smaller tile sizes will use less memory, but may run longer. Multithreaded specifies that the tiles are split and run on multiple threads, which may speed up overall runtime at the expense of somewhat more memory usage. # threads defaults to the maximum number of threads that are feasible on your system. For example, a 4 core Intel i7 processor will support 8 threads. Speed improvement is not linear with the number of threads due to IO and memory bottlenecks. Typically with 4 cores, about a 3.5x speed improvement is gained.
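A rough Python sketch of the tiling idea described above: split the design extent into tile windows and hand each window to a worker thread. The region, tile size, thread count and the per-tile work function below are all hypothetical placeholders, not Glade's internals.

```python
from concurrent.futures import ThreadPoolExecutor

def tiles(xmin, ymin, xmax, ymax, tile_w, tile_h):
    """Yield (xmin, ymin, xmax, ymax) windows covering the region."""
    y = ymin
    while y < ymax:
        x = xmin
        while x < xmax:
            yield (x, y, min(x + tile_w, xmax), min(y + tile_h, ymax))
            x += tile_w
        y += tile_h

def process_tile(window):
    """Placeholder for the per-tile boolean operation (hypothetical)."""
    xmin, ymin, xmax, ymax = window
    return (xmax - xmin) * (ymax - ymin)   # e.g. report the tile area

if __name__ == "__main__":
    region = (0.0, 0.0, 5000.0, 3000.0)    # assumed design extent in um
    windows = list(tiles(*region, tile_w=1000.0, tile_h=1000.0))
    # Larger tiles -> fewer windows but more memory per worker; speed-up is
    # sub-linear in the thread count because of IO and memory bottlenecks.
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(process_tile, windows))
    print(f"{len(windows)} tiles processed, total area {sum(results)} um^2")
```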
Merges all selected shapes into polygons. Layers are preserved, i.e. only shapes on the same layer are merged. If you want to merge shapes on different layers, use the Edit->Booleans command with operation Layer1 OR Layer2.

Chops a rectangle out of a selected shape. First, select a shape. Then invoke the chop command and draw a chop rectangle. The shape will have the rectangle chopped out of it. If Convert paths to polygons is checked, paths will be converted to polygons before the chop takes place. Otherwise paths will be maintained and will be cut. If Keep chopped shapes is checked, the chop shapes from polygons are not deleted.

Aligns objects and optionally spaces them in the direction perpendicular to the alignment edge. Alignment Direction is used when Align Using is set to Object Origin and can be horizontal or vertical. Horizontal will align objects horizontally, e.g. by their left edges, and Vertical will align them vertically, e.g. by their bottom edges. Alignment can be by Object Origin, Object bBox or Layer bBox. Spacing Type sets whether the objects should be spaced apart as they are aligned. It can be set to either None, Space or Pitch. Click on Set Reference Object to set the alignment reference object to align to. Then click on the objects you wish to align; cancel or OK the dialog to finish an alignment sequence, or click again on Set Reference Object to start a new alignment. Note: cancel or OK the Align command before carrying out another command.

Scales all objects in the current cellview by a simple linear scale factor. If Scale all cells? is checked, all cells in the library will be scaled. Coordinates are snapped to the Snap Grid.

Bias can be either Bias by Layer, which biases all shapes on the specified Layer to bias, or Bias Selected (selected shapes can be on any layer). Bias by sets the bias. A positive bias causes shapes to grow in size; a negative bias causes them to shrink. If Bias all cells? is checked, all cells in the library will have the bias applied. Coordinates are snapped to the X Snap Grid and Y Snap Grid. Note that polygons with collinear or coincident points will not be biased correctly and a warning will be given.

Sets the selected shape(s) net. The Net Name combo box is filled with any existing net names in the cellview, or you can type in a net name to create that net. If Pin? is checked, the shape(s) will become pin shapes.

Creates pin shapes based on text labels. All label layers are shown in the dialog with a second layer chooser to allow control of the layer that pins will be generated on. Pins are created as rectangles centred on the label origin with the specified width and height. Pins are only created if the Use? option is checked.

Ascends one level of hierarchy, assuming you have previously descended into a cellview's hierarchy.

Descends down into the selected instance, or tries to find an instance under the cursor to descend into if nothing is selected. View is the view of the instance to descend into; for example a schematic instance may have both a symbol view and a schematic (lower level of hierarchy) view. Open In controls the window used to display the cellView; Current Window uses the existing window, and the Ascend command can be used to return to the previous cellView in the hierarchy. New Window opens a new window for the cellView, leaving the previous cellView window open.

Library, Cell Name and View Name specify the new cellView to be created. By default the selected objects are deleted from the current cellView, and an instance of the new cellView is placed in the current cellView to replace them. If Replace? is checked, then the selected objects are deleted.

Flattens the current selected instances into the current cellView. Flatten level controls the flattening process; with Full checked, the complete hierarchy from the current level down to leaf cells is flattened. If This Level is checked, then only instances at the current level of hierarchy are flattened; lower levels of the hierarchy are preserved.
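To make the Full versus This Level distinction concrete, here is a small Python sketch that flattens a hypothetical nested instance tree either one level or all the way down to leaf cells. The dict-based data model is an assumption for illustration only and is not Glade's database.

```python
# Hypothetical data model: an instance tree as nested dicts {name: children}.

def flatten(cell, full=True):
    """Remove one level of hierarchy, or all levels when full=True."""
    flat = {}
    for name, children in cell.items():
        if not children:                 # leaf object: keep as-is
            flat[name] = {}
        elif full:                       # Full: recurse down to leaf cells
            for sub_name, sub in flatten(children, full=True).items():
                flat[f"{name}/{sub_name}"] = sub
        else:                            # This Level: promote children once
            for sub_name, sub in children.items():
                flat[f"{name}/{sub_name}"] = sub
    return flat

top = {"I1": {"I2": {"I3": {}}}, "R1": {}}
print(flatten(top, full=False))   # {'I1/I2': {'I3': {}}, 'R1': {}} - lower levels preserved
print(flatten(top, full=True))    # {'I1/I2/I3': {}, 'R1': {}} - fully flattened
```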
Adds objects to an existing group. Add To Group prompts the user to select a group to add to, or uses an existing selected group. Then the user is prompted to select objects to add to the group. As objects are added, the group bounding box shape is modified to enclose the objects. To create a group, use the Create->Group command.

Removes objects from a group. The group must be editable in transparent mode; use the View->Display Options dialog to set groups to transparent mode. Remove From Group prompts the user to select objects, and removes them from the group. The group shape bounding box is adjusted to enclose the remaining shapes.

Ungroups objects. The Ungroup command prompts the user to select a group to ungroup, or uses an existing selected group.

Allows editing a cell in place. First select an instance that you want to edit. The Edit in place command will cause all subsequent selection and editing to be done in the master cell for that instance, but with the original top level cell displayed. The edit in place cell will be shown with layers of normal intensity, whereas all other shapes (of non-editable cells) will be shown dimmed. Edit in place is hierarchical, i.e. you can choose to edit in place a cell within another cell you are currently editing in place.

Returns to the parent cell of the current edit in place cell.

Displays the Select Inst dialog, which allows selection of instances based on their instance name.

Displays the Select Net dialog, which allows selection of nets based on their name.

Selects all currently selectable objects.

Deselects all of the selected set.

Displays the Search dialog. Find searches for objects, either in the current cellView or hierarchically from the current cellView down, and optionally can replace them, or their attributes and/or properties. Names can be matched by Wildcards (e.g. VDD* matches VDD1, VDD2, VDD) or by RegExp (regular expressions). Currently you can search for instances, nets, shapes, ellipses/circles, labels, MPPs, paths, polygons, rectangles and viaInsts. For each object type you can add criteria. For example, instances can have cell name, lib name, inst name, view name, orientation and properties as the criteria to match them. You can have one or more criteria, and match All criteria (i.e. the AND of each) or Any criteria (i.e. the OR of each). Objects that match the search criteria can be added to the selected set or highlighted according to the Find Action. If Find Action is Select, objects will be selected, and further control is given by Select Action. New Selection clears any existing selected objects. Add to Selection adds the found items to the selected set. Remove from Selection removes found objects from the selected set. In the case of highlighted nets, they can be displayed either as the actual net shapes highlighted if Highlight Shapes is checked, or by a Spanning Tree between the instance pins of the instances the net connects to, or as a Steiner tree. This is useful, for example, in highlighting the connectivity of unrouted nets; the spanning tree is a good approximation to the path an autorouter will take; the Steiner tree is even better, although it can be slow on nets with many pins. The highlight colour can be chosen using the Highlight Colour button; the Highlight Fill can be Solid or Hollow and the linewidth can be specified for hollow fills. Optionally the display can Zoom to Selected object(s), and it is possible to clear all highlighted objects using the Clear highlighted button. Once objects have been found (by clicking on the Find button), it is possible to replace them. The Replace combo box shows the possible replacements depending on the object searched for. For example, you can replace the layer of a shape, or the property of an object. Note that it is not currently possible to undo the actions of the Replace command. Be sure to save first!
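The difference between the Wildcards and RegExp matching modes can be illustrated with standard Python library calls; the net names below are made up, and this is not how Glade implements its matching internally.

```python
import fnmatch
import re

# A made-up list of net names to search.
names = ["VDD1", "VDD2", "VDD", "VSS", "AVDD"]

# Wildcard mode: '*' and '?' behave as globs, so VDD* matches VDD1, VDD2, VDD.
wildcard_hits = fnmatch.filter(names, "VDD*")

# RegExp mode: full regular expressions; here the same three names match.
regexp_hits = [n for n in names if re.fullmatch(r"VDD\d*", n)]

print(wildcard_hits, regexp_hits)
```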
Queries the selected object. Note you can select and query an object by double clicking it with the left mouse button. With nothing selected, the current cell's properties are queried. Otherwise you may query any selected object's properties and attributes, and cycle through the selected set using the Previous and Next buttons. You can delete a queried object using the Delete button. You can remove an object from the selected set with the Deselect button. If multiple objects are selected, Change all selected objects allows their common attributes to be changed. For example, if shapes are selected then the layer may be changed for all shapes. If the object has connectivity, a Net properties tab is added to the dialog. All objects may have user or system-defined properties which can be manipulated on the Properties tab page. Properties can be added as string, float, integer, boolean, list or orient. Click on the property name or value to change the text, or click on the type and select the type in the combo box that will appear. Click on the '+' button to add an (initially blank) property entry, or select a property and click on the '-' button to delete the property. There is currently no undo capability if you delete a property!

All menus and toolbar buttons have actions. An action has a unique command associated with it; it also has an optional bindkey. For example, the 'Open Cell' action by default has the bindkey Ctrl+O. Bindkeys may be redefined by the user using the Edit Bindkeys command. This shows a table of all current bindkey assignments. Clicking on the 'Bindkey' entry in the table allows editing the bindkey for that action. A single letter in uppercase indicates that key will be used. Modifier keys may be specified, e.g. Shift+, Ctrl+, Alt+, and should precede the key, with no spaces.

Bindkeys are saved in the preferences file (~/.gladerc) and are loaded automatically every time Glade is run. A local .gladerc file will override values specified in the global ~/.gladerc file, so you can keep a project-specific subset of settings.

Copyright © Peardrop Design 2020.
In this article, we’ll be explaining all the terms that you need to know about airports and airline travel in general. If you’re a frequent traveler, chances are that you might not even know all of these terms. Plus, a lot of them are used in the wrong context, and over here, we’ll be breaking down their correct usage. In this airport terminology guide, we’ll cover everything there is to know – from booking flights to airplane-specific terms. Booking Flights (Purchasing Airline Tickets) A domestic flight means a flight that starts and ends in the same country, with no stops in other countries along the way. Domestic flights are usually much quicker and they’ll have fewer security checks along the way. An international flight means a flight that takes off from one country and lands in another one. These flights are more complicated because passengers need to go through Customs and Immigration in the arrival country. A direct flight refers to a flight that flies directly from one destination to another one, with no connections in the middle. A direct flight could include a refueling stop if it’s too long, but passengers won’t be able to exit the plane during this time. Direct flights are usually faster, but they’re more expensive than connecting flights. A non-stop flight is very similar to a direct flight – it flies directly from one destination to another one with no connections in the middle. The only difference between a direct flight and a non-stop one is that non-stop flights won’t even stop to refuel. The airplane will take off from the starting airport and land only at the final destination. A connecting flight refers to a flight that involves at least three airports. To get from the starting airport to the destination one, the airplane will stop in one or more airports somewhere along the way. Passengers will need to get off the plane there and wait for several hours for the next one. Connecting flights may include flights from multiple airlines as well. Usually, connecting flights take longer, but they’re also cheaper. Layover is a term that’s used for connecting flights. It refers to any of the connections along the way. As an example, you could say that you’re flying from California to New York, and you’ll have a layover in Dallas (where the plane will be stopping in the middle). During layovers, passengers need to exit the aircraft and wait in the transit area for the next flight. Layovers may last anywhere from 30 minutes to several hours. A stopover is a very similar term to a layover – it refers to the connections in connecting flights. The only difference is that stopovers are longer. There is no specific time limit when a layover becomes a stopover, but usually, airlines refer to stopovers when they’re talking about overnight connections (you have to spend the night in the airport waiting for the next flight). A transfer is another similar term to a layover because it also refers to the connections in connecting flights (when you’re transferring from one plane to another). The only difference is that “transfer” is usually used when talking about very short layovers, usually 2 hours or less. That said, this isn’t followed by everyone, and some people use “transfer” as a synonym to “layover”. Transit is a very similar term to a transfer because it also refers to short connections in connecting flights. The only difference is that when you’re transiting, you’re exiting and entering the same airplane, but when transferring, you’re entering a different airplane. 
For transits, usually, the same airline ticket is used.

Split ticket refers to connecting flights where each part of the flight is bought separately on different tickets. Sometimes buying a combined ticket from a third-party provider (or an airline), which has all of the flights in a single booking, is more expensive than buying a ticket for each of these flights yourself.

An interline agreement is another term that's related to connecting flights. Airlines group together and form interline agreements, which allow each participating airline to sell tickets for other airlines in their connecting flights. For example, if you're buying a connecting flight from American Airlines, and a part of this connecting flight is operated by United, then United probably has an interline agreement with American Airlines. This is better for passengers because they can purchase everything on a single ticket. Interline agreements also apply to checked luggage. If two airlines have interline agreements, then on connecting flights your checked luggage will automatically be transferred to the other airline during the layover, which means that you won't have to do it yourself.

Onward Ticket / Onward Flight

An onward ticket means a booking with two or more international flights. Some countries require all incoming travelers to have onward flights, which basically serves as proof that they intend to leave the country eventually. For example, if you're flying from New York to London and after a week you'd be returning from London to New York, it would be considered an onward flight. You could also not return to New York and fly anywhere else, as long as it isn't in England, and it would still be considered an onward flight.

Outbound / Outward Flight

In bookings that include a return flight, the outward (or outbound) flight is the first flight. For example, if you're flying from New York to Paris and after two weeks returning the same way, the outward (outbound) flight is from New York to Paris.

Inbound Flight / Return Flight

The inbound (also called return) flight is the second flight in bookings that include a return flight. For example, if you're flying from New York to Paris and after two weeks returning the same way, the inbound (return) flight is from Paris to New York.

A Leg of the Flight

When we're talking about "legs of flights", we're talking about specific flights within connecting flights. For example, for a flight from Barcelona to New York with a connection in London, the first leg of the flight would be Barcelona to London, and the second leg from London to New York.

Long-haul flights just mean very long flights, usually 8 hours or more. All trans-Atlantic (crossing the Atlantic Ocean) and trans-Pacific (crossing the Pacific Ocean) flights are considered long-haul flights.

Checking in online means finalizing your booking through the airline's website or app before arriving at the airport. For every booking that you purchase, you'll need to check in online (or at the airport), and during this process, you reserve a specific seat on the airplane and get a boarding pass, which you'll need to print before arriving at the airport. Online check-in usually opens anywhere from 24 hours to several weeks before the flight, and it's cheaper to check in online than at the airport.

Boarding passes are essentially printed (or electronic) airline tickets. Passengers can get them after checking in online or checking in at the airport.
They show the passenger's personal details and flight details, including the flight number and the correct seat.

Hidden City Ticketing

Hidden city ticketing refers to purchasing a connecting flight and intentionally missing the last leg of the flight. Sometimes, purchasing a connecting flight and only flying the first part of the flight is cheaper than purchasing a direct flight.

Frequent Flyer Points / Miles

People who participate in airline loyalty programs accumulate frequent flyer miles (or points) for every mile flown. These points can be redeemed for new bookings, upgrades, and other extras.

The flight itinerary refers to the whole process of getting from one point to another. This includes all flights you'll be taking, any taxis, trains, buses, or airport shuttles, and hotel bookings.

Each passenger gets a unique booking number whenever they purchase a new flight, and it's usually sent over email. It's used for checking in and anywhere else the airline needs to identify your specific booking.

Each flight gets assigned a different flight number, which can be found in your booking or your boarding pass. It's used to identify which terminal and gate each flight is departing from or arriving at. Flight numbers and their corresponding gates are usually displayed on screens inside airports.

When booking a flight, airlines will sometimes offer travel insurance. Some countries require all travelers to have it, and some don't. Even if it isn't required, you should get travel insurance, which will do many things but most importantly cover any medical expenses if anything goes wrong during your vacation.

Airlines usually offer passengers the option to pre-select their seats when checking in for an additional fee. This will let you choose which specific seat you want to get. All other passengers will get assigned random seats.

Baggage / Luggage

When talking about air travel, baggage or luggage means all the bags that passengers are taking with them on the flight. This may include suitcases, backpacks, trunks, totes, duffel bags, purses, instrument cases, sporting equipment, and anything that's within the right size restrictions.

Tip: Looking for new luggage? Check out these 11 most durable luggage brands.

Luggage allowance refers to the size and weight requirements for luggage. Each airline has different size and weight restrictions, and they're usually different for different types of luggage (checked luggage, carry-ons, and underseat luggage).

Checked luggage is the largest and heaviest type of luggage that passengers can bring; it is handed over to the airline employees at the airport and then stored on the plane in the cargo area. Usually, it's a paid service and passengers need to pay $20-50 for each checked bag. Although the size and weight restrictions differ between airlines, usually checked luggage needs to be under 62 linear inches (height + depth + width) and under 50 or 70 lbs.

Hand luggage is all luggage that passengers are allowed to bring on the plane, including carry-ons and underseat luggage.

A carry-on is a larger type of hand luggage, and airlines usually allow each passenger to bring one free of charge. Carry-ons need to be stored in the overhead compartments on airplanes, and most commonly, they need to be under 22 x 14 x 9 inches in size.

Tip: Looking for a new carry-on? I've been using the Travelpro Maxlite 5 carry-on suitcase for many years now, and it's still going strong.
It's super lightweight, spacious, and I love its smooth-rolling spinner wheels. I couldn't recommend it enough!

Personal Item / Underseat Luggage

Personal items (also called underseat luggage) are a smaller type of hand luggage, and each passenger is usually allowed to bring one free of charge. Personal items need to be stored under the seat in front of each passenger, which means that they're the most accessible type of luggage. The size and weight restrictions differ heavily between airlines, but most commonly they need to be under 16 x 12 x 6 inches in size.

Oversized / Overweight Luggage

Oversized luggage refers to luggage over the size limits, and overweight luggage to luggage over the weight limits. Oversized/overweight hand luggage usually needs to be checked in for additional check-in fees, and oversized/overweight checked luggage is sometimes allowed, but for very expensive fees, ranging between $100 and $300 for each bag.

Due to potential security threats, the 3-1-1 rule limits the amount of liquids each passenger is allowed to bring on the plane in their hand luggage. The 3-1-1 rule stands for "3.4 oz, 1 quart-sized bag, 1 person", and it basically means the following: In hand luggage, each passenger has to store all liquids and gels in bottles no larger than 3.4 oz (100 ml), all of them must be stored in 1 quart-sized, transparent bag, and each passenger can have only 1 bag. This bag of liquids, also called a "toiletry pouch", needs to be taken out of the bag for separate screening when going through security.

Tip: Instead of getting a new Ziploc for your toiletries for every new flight, get a dedicated, transparent toiletry pouch.

When talking about baggage tags, people are usually referring to two things. The first one is personal baggage tags, which contain personal information about the passenger. Anyone can choose to attach them to their luggage in case it gets lost. The other one is luggage labels (or luggage stickers), which airlines attach to all checked bags whenever they're checked in. These labels include information about who the bag belongs to and where it is heading.

Baggage handling refers to moving checked luggage from the airport check-in desks to the airplane, and, when the plane lands, unloading luggage and getting it to the baggage claim area. It's done by baggage handlers, who move it on various luggage conveyor belts.

Navigating the Airport

Large airports are usually split into multiple terminals, sometimes even upwards of 4-6 different terminals. Each terminal has all the facilities needed to operate individually. You can find out the right terminal for your flight by checking your booking confirmation or looking up the flight number on the airport's website.

Gate number refers to the exact location within the airport where your flight is departing from. Each flight departs and arrives at a different gate. Gates are numbered, and usually each airport terminal contains about 20-100 gates in total.

Pier / Concourse / Satellite

Piers, concourses, and satellites are parts of airport terminals, and each one houses about 5-20 different airport gates. It's just a way for airports to split their terminals into smaller pieces, so it's easier for passengers to find the right gates. Usually, each airport terminal has about 2-10 different piers/concourses/satellites, and they're numbered with letters, such as A, B, C, and so on.
When you find out which gate your flight departs from, you need to find out which pier/concourse/satellite it’s located in, and follow the directions within the airport. You’ll usually find check-in desks right after you enter the airport. Passengers who haven’t checked-in online need to check in at the check-in desks. They’re also used for checked luggage – passengers who have checked luggage need to go to the check-in desks and hand it over to the airline employees. Check-in desks usually open 2-4 hours before the flight departure. Luggage Drop-Off Points Some airports also have dedicated luggage drop-off points, which are useful for people who have checked luggage, but who’ve already checked in online. That way they don’t need to wait in the long lines at the check-in desks. Security is the part of the airport where all passengers are screened for dangerous and prohibited goods. Passengers go to security once they’ve gotten their boarding passes and dropped off their checked luggage. Only passengers with valid boarding passes are let through. During security, passengers need to go through screening machines and pass their luggage through x-ray scanners. After going through security, passengers enter the international, duty-free area of the airport. Baggage claim is the area of the airport where passengers can receive their checked luggage after landing. Conveyor Belt / Baggage Carousel Checked luggage is transferred through airports on a giant maze of conveyor belts, which removes the need for employees to carry it by hand. When talking about conveyor belts and baggage carousels specifically, usually airports are talking about the baggage claim area. Over there, all checked bags from a single flight are put on a single, spinning carousel, and the passengers can pick their own bags from them. Arrivals is an area in the airport accessible by the general public, where passengers arrive after leaving planes that recently landed, and where other people can come and meet them. Departures is the area of the airport which deals with outgoing flights. It contains check-in desks, baggage drop-off desks, security, gates, and the transit area. When you arrive at an airport for an upcoming flight, you need to go to departures. Lost Baggage is the part of the airport that deals with lost, damaged, delayed, and missing luggage. Customs and Immigration Customs and Immigration is a part of the airport, where passengers arriving from international flights are screened. The customs officers look for any goods that are prohibited from entering the country, illegal items, and any goods that the customer might have to pay a tax on. Airport lounges are luxury areas of the airport, and they’re only accessible by people participating in frequent flyer programs or for high entrance fees. Passengers can spend their time there waiting for upcoming flights. Airport lounges are usually equipped with showers, fine dining, sometimes even separate rooms for sleeping, massage chairs, and similar extras. Landside is the part of the airport that’s accessible by the general public. It includes everything up to security, including check-in desks, baggage drop-off points, ticket counters, info desks, and arrivals. Airside/ Transit Area / Secure Area Airside (other called transit or secure area) is the international, duty-free area of the airport. Passengers need to go through security to enter this area, which is why it’s also sometimes called a secure area. 
Often on connecting flights, passengers won't need to exit the airside transit area, because exiting would mean that they'd need to get additional paperwork for entering the layover country.

When talking about duty-free items, we're talking about items that are purchased from the duty-free shops in the airside transit area of the airport. They're called "duty-free" because this area is considered international, so no additional taxes have to be paid to governments, which makes them slightly cheaper. You can bring duty-free items on board the flight and they're excluded from the 3-1-1 rule.

An international airport has all the facilities needed for arriving and departing international flights. This usually just means that the airport has Customs and Immigration facilities and that their staff (sometimes) are trained to speak multiple languages.

A domestic airport is an airport that only accepts domestic flights. It doesn't have any customs and immigration facilities.

Short-checking baggage is related to connecting flights. Most commonly on connecting flights with layovers, checked luggage is automatically routed to the final destination without the passenger needing to do anything. But sometimes, especially on very long layovers, the passenger might want to access their checked luggage during the layover, and that's where short-checking your luggage comes in. When checking in your luggage, you can ask the employee at the check-in desk to short-check your bag, which means that you will receive it when you land at the layover airport.

Checking Baggage to Final Destination

Checking baggage to the final destination is a term that's used when talking about connecting flights. It means that checked luggage will automatically be sent over to the final destination and you won't be able to access it during the layover.

Rechecking baggage is also related to connecting flights and layovers. Sometimes, when you land in the layover country, you'll have to pick up your checked luggage from a carousel, go through customs, and recheck it for the next part of the flight at the check-in counters. This whole process is called rechecking luggage.

Moving Sidewalk (Moving Walkway)

Moving sidewalks, also called moving walkways, are used in large airports to reduce the time it takes for passengers to reach their gates. Because airports are so large, it often takes 20-30 minutes to get to your gate, which is why airports use moving sidewalks, which are essentially long, flat escalators or extra-long treadmills.

TSA stands for Transportation Security Administration, the main airport security agency in airports within the United States. In the US, "TSA" is often used as a synonym for "security", because it's the agency in control of security screening.

TSA PreCheck / Global Entry

TSA PreCheck and Global Entry are both paid programs used by frequent travelers. By participating, passengers can wait in shorter, expedited lines at security, and take off fewer items when going through the scanners.

FAA stands for Federal Aviation Administration, the main civil aviation regulator in the United States.

IATA stands for International Air Transport Association, the main trade association for the world's airlines. Its member airlines account for about 80% of total flights worldwide.

Escort / Gate Pass

An escort or gate pass is a special document that gives someone access to the secure airside area of the airport in order to accompany a minor, the elderly, or a person with special needs.
They're usually issued by airlines or airports.

Import Tax (Customs Duty, Tariff)

Import tax, also called customs duty or import tariff, is the tax that passengers sometimes need to pay when importing duty-free items. When passengers go through Customs and Immigration, the officers look at all the items each passenger is bringing into the country. If they're over specific limits (different for each country; for example, 10 bottles of perfume or strong spirits), the officers will ask the passenger to declare them and pay customs tax (usually 5-30%).

An airport shuttle is usually a taxi, minivan, or bus that offers a shared ride from the airport to the nearest city center. They're usually cheaper than hiring a taxi or Uber, but more expensive than using public transport.

The runway is a large stretch of tarmac where airplanes take off and land.

Airspace means all the air directly above a certain country. You might hear the pilot say "We're now entering China's airspace", which just means that you're flying directly above China.

Turbulence means a sudden shift in the airflow, which makes the airplane feel like it's being shaken around. It's completely normal, and when the pilot announces that some turbulence is to be expected, the seatbelt sign will turn on, and all passengers will have to fasten their seatbelts.

Emergency Exit Seats

Emergency exit seats refer to the row of seats directly next to the emergency exits. Passengers usually prefer these seats, because they offer much more legroom.

Take-off refers to the airplane taking off from the tarmac and starting to fly. When the pilot announces "prepare for take-off", expect more shaking than usual, and everyone must be seated during take-off.

Boarding refers to passengers boarding the airplane.

Overhead compartments refer to the enclosed storage compartments directly above passenger heads, where carry-ons need to be stored. They must remain closed during take-off, landing, and turbulence, and passengers are able to access them once again when the seatbelt sign turns off.

In-Flight Entertainment refers to the entertainment systems on airplanes. Usually, it's just a built-in screen at the back of each seat, where you can watch movies and TV shows, read the news, listen to music, and so on.

First / Business Class

Passengers usually are split into multiple classes, with the lowest class being economy, then premium economy, then business class, and then first class. Each class above economy gets upgrades, such as more legroom (even fully flat beds), better entertainment systems, finer dining, and so on.

Cargo hold refers to the area of the airplane below the main deck, where all the checked luggage is stored.

Cabin refers to the area of the airplane accessible by the passengers, where they're seated and their hand luggage is stored.

Cockpit refers to the pilot's cabin at the front of the aircraft.
Medina County is immediately west of Bexar County in southwest Texas. Hondo, the county seat, is located near the geographic center of the county at 29°17' north latitude and 99°02' west longitude, 100 miles from the Mexican border at Eagle Pass. The Medina River, from which the county derives its name, traverses the northeastern portion of the county. The western part is drained by the Frio River. Medina County covers 1,331 square miles with elevations ranging from 1,995 feet in the northern Hill Country to as low as 635 feet in the southern region. The county is divided from east to west by the Balcones Escarpment, which separates the Edwards Plateau and Hill Country to the north from the Rio Grande Plains to the south. The climate is subtropical and subhumid; the summers are hot and dry. Annual rainfall averages 28.43 inches; average relative humidity is 81 percent at 6 A.M. and 49 percent at 6 P.M. The temperature averages a low of 42° F in the winter and a high of 96° in the summer. The annual growing season is 263 days. The northern Hill Country region has black waxy and limestone soils that support grasses, brush, junipers, mesquite, shinnery oaks, and live oaks. The larger southern region has sandy loam and clay soils that support bluestem, buffalo, and Arizona cottontop grasses, as well as post oak, live oak, and mesquite. Cypress and pecan trees are commonly found on the banks of rivers and creeks. Approximately 45 percent of the land in the county is considered prime farmland. Medina Lake Reservoir, completed in 1913 in the northeastern part of the county, furnishes impounded Medina River water for an extensive irrigation system throughout the eastern half of the county. Other man-made surface reservoirs have been built on Chacon, Parkers, Squirrel, Live Oak, and Elm creeks. Ranchers keep local stock tanks for water. Most subsurface or ground waters in Medina County are artesian; two major subsurface water belts are the Edwards Aquifer and the Carrizo Sand Aquifer. The county can be divided from north to south into three geological sections, the Lower Cretaceous of the Edwards Plateau, Lampasas Cut Plain, and the Comanche Plateau; the Upper Cretaceous of the Blackland Belt; and the older Tertiary of the Gulf Coast Plain. Mineral resources within the county include oil, gas, clay, sand, and gravel. High-quality clays for the production of bricks and tile are found in the D'Hanis area of western Medina County. Limestone, readily available and of good quality, is used extensively for buildings and hand-carved tombstones. Crushed limestone, flintstone, igneous pebbles, caliche, and clay are found in the county and are used widely as road materials. Bat guano is commercially mined in the limestone hills north of Hondo and marketed as a high-quality natural fertilizer. The guano mined at Ney's Cave, claimed to be one of the largest bat habitats in the world, was used in the manufacture of gun powder during the Civil War. Medina County is in an area that has been the site of human habitation for many thousands of years. Evidence of early man has been discovered at a site known as Scorpion Cave on the Medina River in the northeastern part of the county. Archeologists believe that ancestors of either Coahuiltecan or Tonkawa Indians occupied this cave continuously for several thousand years before the arrival of the first Europeans. 
The first Spaniard to set foot in the region was probably Alonso De León, governor of Coahuila, who passed through the area in 1689 en route to East Texas, and named the Medina River and Hondo and Seco creeks. Two years later Domingo Terán de los Ríos, the first provincial governor of Texas, tracked across southern Medina County, laying the foundation for El Camino Real (Old San Antonio Road). The Upper Presidio Road, as the Camino Real was known in 1807, purposely skirted the Indian strongholds of the Hill Country beyond the Balcones Escarpment. Throughout the 1700s the area was frequented by roving bands of Lipan Apaches and Comanches, whose seasonal raiding parties traveled south from the plains area of North and West Texas and New Mexico on their way to Mexico. From this vantage point the Apache-Comanche Indians would attack San Antonio with impunity. The Republic of Texas was convinced that if this large block of land were settled it would provide a protective zone against any invasion forces approaching San Antonio from the south and west. They negotiated an empresarial contract with Frenchman Henri Castro on February 15, 1842, to settle the area. One of Castro's land grants began four miles west of the Medina River. He purchased the sixteen leagues between his granted concessions and the river from John McMullen of San Antonio. The Old San Antonio Road to Laredo and the main road from San Antonio to Eagle Pass both crossed the land grant. Castro, with the assistance of German wine merchant Ludwig Huth and his son Louis August Ferdinand Huth, arranged the transport of German and French-speaking farmers from the Alsace region of northeastern France, brought them overland from the Texas coast to San Antonio, and on September 2, 1844, set out with them in the accompaniment of Texas Ranger John Coffee (Jack) Hays and five of his rangers to decide upon a site for settlement. Castroville, founded on September 3, 1844, was the westernmost settlement in Texas. It received the first post office in the county on January 12, 1847. In a relatively short time the settlements of Quihi (1845), Vandenburg (1846), New Fountain (1846), and Old D'Hanis (1848) were established. The layout of each of these settlements was similar to that of Castroville, in a pattern reminiscent of their European villages. The settlements were laid out in town lots surrounded by outlying twenty and forty acre farming plots. Settlers lived in the protective environs of their towns and farmed their nearby fields. The immigrants brought with them their unique culture and a distinctive architecture. By the mid-1850s buildings were being made of rough-cut native limestone, sandstone, or some combination of stone and timber; lime plaster was used to coat the exterior walls and adobe the interior walls. The colors found in the stone ranged from an offwhite, common in the Castroville area, to a rich blend of ochres and gold characteristic of the New Fountain and Quihi communities. Houses were designed with a characteristic rectangular shape, short in the front and long at the rear roofline, common to the rural structures of their homelands. Most homes and buildings had fireplaces built with internal angular flue systems. Medina County was separated from Bexar County by the legislature on February 12, 1848, and enlarged on February 1, 1850, again gaining lands from Bexar County. At this time the population of Medina County was estimated to be predominantly Catholic at a ratio of five out of every six people. 
The first church in the county, the Catholic Church of St. Louis Parish in Castroville, was completed in November 1846. The Lutherans organized churches at Castroville and Quihi in 1852 and 1854. The Catholic church organized a school in Castroville in 1845; the Protestants did likewise in 1854. The first public school in Medina County was also established in Castroville in 1854. By 1858 the county had five schools for 453 pupils and five churches, three Protestant and two Catholic. A short-lived Mormon community was established in northeastern Medina County in 1854. By 1858 stock raising and the cultivation of corn were the chief agricultural pursuits in the county. Much of the labor needed to clear the land for homes and farms was done by Mexican laborers. Statistics taken in that year show 10,000 acres of corn planted and 100 acres of wheat on 240 farms; there were 11,000 cattle, principally in the Castroville, D'Hanis, Quihi, and Vandenburg areas; sheep were raised principally in the northern hilly and rocky areas. Peach trees were abundant; cypress and pecan grew along streams and rivers; mesquite, live oak, post oak, and cedar were prevalent trees in the prairies. Castroville, with a population of 366, was the twelfth-largest town in Texas and an important commercial center by 1850. Fort Lincoln had been erected in 1849 near Old D'Hanis to furnish protection for the new settlements and commercial traffic between San Antonio and Mexico. Most settlers operated subsistence farms while they learned stock raising, which many realized was best suited to the area. The typical diet consisted of corn-meal mush, garden vegetables, and wild game. In 1850 there were only twenty-eight slaves in the estimated 909 citizens of Medina County. In 1858 the estimated population of 1,300 included 104 slaves. In 1860 there were 108 slaves. Two conditions in Medina County served as a deterrent to slavery; its proximity to Mexico offered sanctuary for runaway slaves, and the Unionist sentiment disfavoring the institution of slavery was general among the European settlers. In February 1861 the vote was 140 for and 207 against secession. August Santleben, a Union sympathizer, was one of many Medina County citizens who fled to Mexico to avoid recriminations at the hands of Confederate allies like Charles DeMontel, who as provost marshal was responsible for apprehending those who attempted to escape Confederate service. The value of Medina County land during the Civil War dropped by almost 50 percent. Education and schools suffered during the war as funding plummeted. However, communities like Castroville, situated on the commercial routes to Mexico, prospered. After the war a number of German immigrants arrived. By 1870 a majority of the 2,078 people in the county were of German or French origin; there were ninety-two Blacks, forty Mississippians, and thirty-three Mexicans. The arrival of barbed wire and the railroads during the 1880s was a significant turning point for Medina County. Cattle raising had more than doubled during the 1870s. Property values tripled during the same period. Barbed wire effectively ended the practice of free-range cattle ranching; disputes over grazing access led to many conflicts. Livestock raising was the dominant agricultural activity in 1882; large sections of land supported 33,000 cattle, 33,000 sheep, 8,000 horses and mules, and 4,000 hogs. 
The Galveston, Harrisburg and San Antonio Railway and the International and Great Northern Railroad extended their lines west and south through Medina County in 1881 and 1882, respectively. The towns of Hondo, La Coste, Dunlay, and New D'Hanis were established along the GH&SA; the towns of Devine and Natalia were established along the IG&N. The citizens of Castroville, after having been given the initial opportunity to have the GH&SA pass through their township, voted against the issuance of bonds necessary to support such a route. The rapid commercial and population growth of the newly established railroad towns, particularly at Hondo and Devine, significantly altered the future demographic makeup of the county. The number of county schools between the years 1882 and 1894 increased from four to thirty-six. The population increased from 4,545 in 1880 to almost 8,000 by 1900. After several unsuccessful attempts to move the county seat from Castroville to Hondo during the 1880s, the change was finally approved in 1892 by a vote of 767 to 687. By then Hondo and the other railroad communities had become the most convenient and economically accessible marketing centers. Changes in demographic influence were evident in the embryonic newspaper publishing industry as well. The county's first newspaper, the Castroville Era, began operations in 1876. This newspaper was changed to the Quill in 1879 and was sold to a group in Hondo in 1884, when it was renamed the Medina County News. Without a newspaper of its own and apprehensive of Hondo's attempts to gain the county seat, Castroville began publishing the Anvil in 1886. By 1903 the Anvil had been consolidated with Hondo's Herald to become the Hondo Anvil Herald. The developing railroad community of Devine was publishing the Devine Wide-Awake in 1892 and the Devine News in 1898. By 1900 more acreage was devoted to the production of cotton than corn (22,293 acres for cotton to 16,385 for corn). However, by 1905 the boll weevil had come in from Mexico and eventually devastated the cotton industry. In 1920, 44,061 acres were devoted to corn, and 32,196 acres were used for cotton production. By 1940 only 5,986 acres were devoted to the production of cotton. The completion of the Medina Dam in 1913, at that time the fourth-largest dam project in the United States, provided water sufficient to irrigate an estimated 60,000 acres. Six million dollars had been raised through the sale of bonds to British subscribers to finance the project. The onset of World War I cut off the flow of British capital, significantly reducing the sale of farmland dependent upon the irrigation system. The irrigation project was placed in receivership by the federal courts in 1914 and remained in this suspended condition until 1924, when it was ordered to be sold at public auction in Hondo. Dust bowl victims from Oklahoma and Kansas made up a large portion of the prospective buyers of farmland. Many of the landowners in the irrigation project were beset with heavy mortgages, but they were ultimately rescued by successful application to the Reconstruction Finance Corporation of the United States in 1934 that resulted in a reduction of the farmers' bonded indebtedness from $2.5 million to $250,000. Row crop tractors replaced the early hay burners and horses and mules that had been used into the 1930s. Pull-type combines replaced the labor-intensive threshers and reapers. 
Until the advent of the state and federal highway systems the railroads were the principal means of transporting agricultural products and livestock; they also offered passenger service until the 1940s. The use of trucks to transport products to market increased in popularity, leading to the increased production of truck crops, such as spinach, sweet potatoes, cabbage, beans, turnips, tomatoes, Irish potatoes, and strawberries. Broom corn was one of Medina County's most lucrative cash crops in the 1930s and 1940s. By 1945 farmers were producing broom and Indian corn on 42,774 acres, sorghum on 16,398 acres, oats on 14,549 acres, and nuts (principally pecans) on 6,272 acres, and honey was produced on sixty-seven farms. The honey produced from the numerous guajillo blossoms common to the southern regions of the county was reputed to be of excellent quality. More than half the farms (629 of 1,100) were operated by tenant farmers in 1945 (see Farm Tenancy). Extensive ranching operations in the county sustained 61,304 cattle, 34,191 sheep, 26,182 goats, and 6,882 hogs in 1950. Poultry farms the same year had 105,652 chickens. By 1982 Medina County was one of the most prolific producers of Spanish peanuts in southwest Texas; Devine was recognized as the largest shipping point. That year 84 percent of the county was used for farming or ranching, and 21 percent of the farmland was under cultivation. Crops grown included sorghum, corn, grasses, wheat, carrots, watermelons, and pecans. Cattle, sheep, and hogs were the principal livestock. During the early decades of the twentieth century the population of Medina County fluctuated markedly. In 1910 the population was 13,415; in 1920 it fell to 11,679; and in 1930 it rebounded to 19,898. During the years of the Great Depression it fell again, and by 1940 it stood at 17,733. Throughout this period the Mexican population was significant, due to the lure of jobs in the cotton and corn fields, in railyards, at the lignite coal mines near Natalia, at the Medina Dam construction site, on ranches, and at the D'Hanis Brick and Tile Company and the Seco Pressed Brick Company around D'Hanis. Between 1900 and 1910 the number of Mexicans in the county jumped from 842 to 3,147, almost one quarter of the county's citizenry. By 1930 Mexicans numbered 6,172. This trend, however, was dramatically reversed by the effects of the depression and by advances in agricultural mechanization, and by 1940 the number of Mexicans in the county had dropped to 1,304, less than 10 percent of the population. During the 1920s and 1930s a number of roads were built or upgraded. In 1921 the Old San Antonio Road was graded and designated State Highway 2; later it was widened and improved to become U.S. Highway 81, which served as the main north-south route until Interstate 35 was completed in 1964. State Highway 3, completed in 1922, was improved through Castroville, Dunlay, Hondo, and D'Hanis; it was later designated U.S. Highway 90 and serves as the main east-west route. The opening of the Army Aviation Navigation School in Hondo in the late summer of 1942, the largest of its kind in the world at the time, provided an economic boom for Hondo and the rest of the county. As many as 3,000 people were employed by the H. B. Zachry Company of San Antonio during airfield construction; over 5,300 military personnel were stationed at the base by November 1942. 
In 1950 all the common school districts in the county consolidated into seven independent districts: Devine, D'Hanis, Hondo, Natalia, Castroville, LaCoste, and Yancey. In 1960 the LaCoste and Castroville school districts combined to form the Medina Valley Independent School District. In 1960, 14 percent of adults twenty-five years and older in Medina County had completed high school, and 3 percent had completed four years of college. In 1984 the Devine School District, organized in 1902, had 1,450 pupils and 106 teachers; both Spanish and English were taught. The D'Hanis School District, formed in 1909, had 250 pupils and 21 teachers. The Hondo School District, organized in 1883, had three campuses, Meyer Elementary, McDowell Junior High, and Hondo High School, with 1,724 pupils and 111 teachers. The Medina Valley School District had four campuses split between Castroville and LaCoste for 1,700 pupils and 108 teachers. The Natalia Independent School District had three campuses for 725 students and 55 teachers. Voter registration in 1982 showed that 94 percent of the county voters were Democrats. Election returns for presidential candidates since 1932 reveal a Medina County vote for the eventual victor in all but two races; a majority of voters selected Dewey in 1944 and Hubert Humphrey in 1968. The Republican candidate for president received a majority of the votes in Medina County in every election from 1980 through 2004. Medina County industries with the most employment in 1982 were agribusiness, tourism, general construction, and the manufacture of plumbing fixtures and aircraft engines and their parts. Crude oil production in 1984 amounted to 291,982 barrels; gas production in the same year amounted to 123,358,000 cubic feet. The four major oil and gas fields in the county are the Taylor-Ina field, the Adams field, the Bear Creek field, and the Chacon Lake field. The Taylor-Ina field produced 198,292 barrels of oil in 1984. Federal expenditures in the county in 1983 were $43,378,000, including $10,936,000 by the United States Department of Defense. After World War II the population of Medina County rose steadily, to 17,013 in 1950, 18,904 in 1960, 20,249 in 1970, 23,164 in 1980, and 27,312 in 1990. The largest minorities in 1990 were Hispanic (44.4 percent), Native American (0.4 percent), and African-American (0.3 percent). In 2014 the U.S. Census counted 47,894 people living in Medina County; about 45.3 percent were Anglo, 50.6 percent Hispanic, and 2.9 percent African American. Of residents twenty-five and older, 72 percent had graduated from high school and 13 percent had college degrees. In the early twenty-first century agriculture and tourism were important elements of the local economy; many residents commuted to jobs in San Antonio. In 2002 the county had 1,951 farms and ranches covering 804,941 acres, 64 percent of which were devoted to pasture, 29 percent to crops, and 5 percent to woodlands. That year farmers and ranchers in the area earned $60,742,000; livestock sales accounted for $37,571,000 of the total. Most of the area's agricultural income derived from cattle, but harvests of corn, grains, peanuts, hay, and vegetables also contributed. The largest community in Medina County is Hondo (population 9,080), the county seat. Other sizable towns include Devine (4,622), Castroville (2,925), Natalia (1,506), and La Coste (1,179). Medina County offers visitors a wide range of recreational opportunities. 
Hunting and fishing are available throughout the county, and Medina Lake in northeastern Medina County is noted for its large numbers of large yellow catfish, black bass, white bass, walleyes, and bluegills. Hunting is available mostly through leasing arrangements with private land owners. The game most likely to be hunted in the county are white-tailed deer, wild turkey, and javelina, although leases are available for hunters interested in sika deer, axis deer, and mouflon sheep.
Figure. Peripheral regulators and important brain areas of energy homeostasis. Long-term regulators are the adipose tissue–derived food intake–inhibiting hormones leptin and adiponectin, whereas hormones produced in the gastrointestinal (GI) tract and pancreas are short-term food intake–inhibiting (peptide tyrosine-tyrosine [PYY], glucagon-like peptide 1 [GLP-1], oxyntomodulin [OXM], cholecystokinin [CCK], pancreatic polypeptide [PP], amylin) or food intake–stimulating (ghrelin) signals. Solid lines connect to the hypothalamus (pink); dashed lines, to the hindbrain (blue, solitary tract nucleus). Roth CL, Reinehr T. Roles of Gastrointestinal and Adipose Tissue Peptides in Childhood Obesity and Changes After Weight Loss Due to Lifestyle Intervention. Arch Pediatr Adolesc Med. 2010;164(2):131–138. doi:10.1001/archpediatrics.2009.265. Childhood obesity is a global epidemic and is associated with an increased risk of hypertension, diabetes mellitus, and coronary heart disease, in addition to psychological disorders. Interventions such as bariatric surgery are highly invasive, and lifestyle modifications are often unsuccessful because of disturbed perceptions of satiety. New signaling peptides discovered in recent years that are produced in peripheral tissues such as the gut, adipose tissue, and pancreas communicate with brain centers of energy homeostasis, such as the hypothalamus and hindbrain. This review discusses the major known gut- and adipose tissue–derived hormones involved in the regulation of food intake and energy homeostasis, their serum levels in childhood obesity before and after weight loss, and their relationship to the consequences of obesity. Since most of the changes in gastrointestinal hormones and adipokines normalize with weight loss, pharmacological interventions based on these hormones will likely not solve the obesity epidemic in childhood. However, a better understanding of the pathways of body weight– and food intake–regulating gut- and adipose tissue–derived hormones will help to find new strategies to treat obesity and its consequences. Research on energy homeostasis regulation in children is increasingly important, as obesity is the most important risk factor for the development of the metabolic syndrome and type 2 diabetes mellitus (T2DM) in youth, and the rise in childhood obesity and the rise in childhood T2DM are nearly statistically parallel.1 In 2006, the prevalence of obesity in US children reached 17%, and if current trends continue, childhood obesity may reach 20% by 2010.2 Pediatric endocrinology is experiencing a wave of newly identified signaling peptides illuminating the gut-brain axis regulating the body's energy balance, more thoroughly described as the gut–central and peripheral nervous systems–accessory organ (eg, pancreas, liver) axis. Gut hormones (enterokines) and adipose tissue–derived signaling molecules (adipokines) interact with brain centers in complex, overlapping mechanisms of energy intake and storage. These mechanisms evolved during long periods of hunger in the evolution of man to protect the species from extinction. However, today's modern sedentary lifestyle has turned this evolutionary advantage of storing energy as fat into a high risk of cardiovascular morbidity and mortality. 
To complicate matters, although body weight is precisely regulated, long-term nutrient excess may change set points of the body energy balance including the gut-brain and the adipose tissue–brain axes. In cases of obesity, this appears to make it even more difficult to reduce overweight, as active species-protecting defense mechanisms maintain an elevated level of body fat. Understanding the changes that occur in the particular regulators of homeostasis—hormones and peptides—relative to the timing and method of weight reduction in overweight children is fundamental to understanding pediatric obesity. We examined these homeostatic regulators before and after weight reduction in a 1-year lifestyle intervention “Obeldicks.”3,4 Briefly, this program is based on physical exercise (1 year), nutrition education and behavior therapy for children and parents separately (first 3 months), and individual psychological care of the child and his or her family (months 4-9). The exercise therapy consists of sports, instruction in physical exercise as part of everyday life, and reduction of the amount of time spent watching television. The nutritional course is based on the prevention concept of the “optimized mixed diet,” which is both fat and sugar reduced containing 30% energy intake fat, 15% energy intake proteins, and 55% energy intake carbohydrates including 5% energy intake sugar.4 This lifestyle intervention was effective to reduce overweight over a period of at least 3 years after intervention and improved cardiovascular risk factors such as dyslipidemia, impaired glucose tolerance, and hypertension.3,4 Furthermore, the intima-media thickness5 as a predictor of early cardiovascular changes was reduced in children who participated in this intervention. All these findings proved the clinical relevance of the achieved weight loss. This article will discuss the major known gut- and adipose tissue–derived peptides involved in the regulation of food intake and energy homeostasis, including changes in some of these signaling molecules observed during weight gain and loss in obese children participating in lifestyle interventions, and describe clues that may be helpful in both short-term and long-term management and treatment of childhood obesity. The discovery of the adipose tissue–derived hormone leptin in 1994 elucidated an important negative feedback mechanism of energy balance and how information about energy stores is conveyed to the central nervous system. This and other feedback mechanisms of energy metabolism have been intensively studied in rodent models. They demonstrate that peptides produced in the periphery, such as adipose tissue hormones (leptin, resistin, adiponectin, visfatin, retinol-binding protein 4) and appetite-restraining hormones from the gastrointestinal tract (peptide tyrosine-tyrosine [PYY], glucagon-like peptide 1 [GLP-1], oxyntomodulin, cholecystokinin) and pancreas (insulin, pancreatic polypeptide [PP], amylin), as well as the hunger-mediating hormone ghrelin, are important afferent signals that bind to receptors in the hypothalamus and hindbrain (Figure).6-9 Ghrelin is a peptide containing 28 amino acids and was identified in 1999 as a ligand for the secretion of growth hormone secretagogue receptor10 (Table). Ghrelin is produced principally by the stomach and, to a lesser extent, the duodenum32 and is the only known circulating orexigen. 
Endogenous levels of ghrelin increase before meals and decrease after food intake, suggesting its role in both meal initiation and weight gain.7,33,34 It has been postulated that both the hyperphagia and potentially the growth hormone deficiency in Prader-Willi syndrome may be related to ghrelin dysregulation, as high circulating levels of ghrelin have been observed in this disorder.35 Less clear is why serum ghrelin levels are decreased in nonsyndromic simple obesity, which may be due to overfeeding and a consequence of metabolic changes associated with obesity, such as insulin resistance.12,13,36-38 Important clues about why some children and not others are successful in maintaining long-term weight loss have been shown in recent studies. Specifically, increased ghrelin levels during weight reduction are considered to be a compensatory mechanism responsible for making weight reduction unsustainable.39 Krohn et al11 showed that the increase of ghrelin levels after weight loss in obese children is correlated with an increase in insulin sensitivity. In a Spanish study of obese children on a calorie-restricted diet, ghrelin levels increased significantly after 3 months of successful weight reduction.13 In the Obeldicks lifestyle intervention, we found no significant changes in ghrelin levels in the children who achieved substantial weight reduction.12 A slow reduction of weight that does not cause an immediate compensatory increase of ghrelin may help stabilize and maintain a lower body weight and prevent a fast regain of weight due to an increase of ghrelin. These are encouraging results because ghrelin is the only known circulating appetite stimulant.7,33,34 Obestatin is a recently identified peptide derived from the same gene (preproghrelin) as ghrelin and has the opposite effect on weight status, inhibiting food intake and gastrointestinal motility. Obestatin is postulated to antagonize ghrelin's actions on homeostasis and gastrointestinal function.40 Because obestatin and ghrelin are both derived from the same gene,40 one study hypothesized a possible cause of obesity to be an imbalance of circulating obestatin and ghrelin levels.41 Preprandial ghrelin to obestatin ratios were elevated in obese subjects compared with controls, suggesting that a higher ratio may be involved in the etiology and pathophysiology of obesity; however, results from other studies are not conclusive.42-44 A short-term summer camp weight reduction study of 46 obese children demonstrated increased ghrelin and obestatin levels and ghrelin to obestatin ratios after weight reduction.42 In Obeldicks, obestatin levels increased significantly after weight reduction while ghrelin levels did not change significantly, a pattern that may be important to stabilize the lower body weight and prevent recurrence of weight gain.38 Peptide YY is a 36–amino acid peptide originally isolated and characterized in 198045 (Table). There are 2 endogenous forms, PYY1-36 and PYY3-36, abundant in humans. 
Peptide YY is a gut-derived hormone released postprandially by the L cells of the lower intestine that inhibits gastric acid secretion and motility through neural pathways.46-50 Peptide YY belongs to the family of peptides that includes neuropeptide Y and PP, which mediate their effects via G protein–coupled neuropeptide Y2, Y4, Y5, and Y6 receptors and display different tissue distributions and functions.51 PYY1-36 binds to all known Y receptor subtypes, whereas PYY3-36 shows affinity for the Y1 and Y5 receptor subtypes and high affinity for the inhibitory Y2 receptor subtype. PYY3-36 binding to the Y2 receptor subtype inhibits the orexigenic neuropeptide Y in the hypothalamus, causing short-term inhibition of food intake, especially high-fat meals.8,52-54 Studies in rodents identified the hypothalamus, vagus, and brainstem regions as sites of action.55 Functional magnetic resonance imaging of normal-weight humans infused with PYY3-36 to circulating concentrations similar to those observed postprandially showed modulated neuronal activity within the hypothalamus, brainstem, and midbrain regions involved in food reward processing.56 This suggests that PYY3-36 affects feeding by action on homeostatic and hedonic brain circuits. Peptide YY may also affect energy expenditure.57 Among other studies confirming this effect,58,59 peripheral infusion of PYY3-36 in humans showed increased energy expenditure and fat oxidation rates.60 In obese children, levels of the anorexigenic hormone PYY are low. After efficient weight loss in Obeldicks, PYY levels significantly increased, reaching levels comparable with normal-weight individuals.14 Once effective weight loss has been achieved, the anorectic effect of PYY may help stabilize weight and thereby prevent later weight gain in patients whose PYY levels increased to normal levels. Glucagon-like peptide 1 is a gut hormone synthesized from enteroendocrine L cells of the small and large intestine and secreted in 2 major molecular forms, GLP-17-36 amide and GLP-17-37, with equipotent biological activity (Table). Glucagon-like peptide 1 binds receptors in key appetite-related sites in the hypothalamus (eg, arcuate and dorsomedial nuclei) and the brainstem (specifically the nucleus of the solitary tract).15,61,62 It is the most potent insulin-stimulating hormone known to date, it suppresses glucagon secretion, and it inhibits gastric emptying and acid secretion. In obese children, an attenuated GLP-1 response may contribute to impaired insulin response, leading to T2DM.61-63 Glucagon-like peptide 1 may also reduce energy intake and enhance satiety, likely through the aforementioned delay of gastric emptying and specific GLP-1 receptors in the central nervous system. Its role in childhood obesity is poorly understood, with contradictory post–weight loss level changes reported in the literature.15-18,61-68 Obese children participating in Obeldicks showed significant decreases in GLP-1 levels. At baseline, GLP-1 levels did not differ significantly between obese and lean children. The glucose levels remained static and the decreases in GLP-1 levels were significantly correlated with decreases in insulin levels and insulin resistance index scores.16 Oxyntomodulin is a 37–amino acid peptide that, like GLP-1, is a product of the preproglucagon gene (Table). 
It is released into the circulation system postprandially and when administered either centrally or peripherally in rodent models or peripherally in humans reduces food intake.19 Oxyntomodulin is equally effective as GLP-1 at inhibiting food intake even though it is thought to do so through a different pathway. Cholecystokinin was the first gut hormone implicated in the control of appetite by reducing food intake.21 Cholecystokinin is a meal termination signal released postprandially from the gastrointestinal tract (mostly upper small intestine), reducing both meal size and meal duration. After eating, cholecystokinin levels remain elevated up to 5 hours and stimulate gall bladder contraction, pancreatic enzyme release, and intestinal motility, which in turn affect gastric emptying. The alimentary cholecystokinin receptor, CCKA, is present on the vagus nerve, enteric neurons, brainstem, and dorsomedial nucleus of the hypothalamus. Cholecystokinin likely mediates its effect on appetite regulation by crossing the blood-brain barrier where it acts on receptors in the dorsomedial nucleus of the hypothalamus to reduce levels of neuropeptide Y, a potent appetite-stimulating peptide. However, studies in children are missing so far. Insulin plays an extremely important role in energy homeostasis (Table). Insulin receptors are expressed in different hypothalamic nuclei. After passing the blood-brain barrier, insulin exerts appetite-inhibiting effects. In rodents, it was proven that insulin administered intracerebroventricularly inhibits food intake by activation of the insulin receptor substrate–phosphatidylinositol 3-kinase (IRS-PI3K) pathway in ventromedial neurons of the hypothalamus.69,70 Numerous insulin knockout models show that decreased central insulin effect leads to the obese phenotype. Most recent research results show that central insulin resistance can be caused by hypothalamic inflammation due to nutrient excess,71 causing reduced IRS-PI3K signaling, which thereby contributes to increased appetite and the maintenance of elevated body weight.72 In childhood obesity, increased blood insulin levels indicate peripheral and central insulin resistance. 
Successful reduction of overweight leads to reduction of hyperinsulinemia and improved insulin sensitivity.22,23 Pancreatic polypeptide is a 36–amino acid peptide produced under vagal control by peripheral cells of the endocrine pancreatic islets, and to a lesser extent in the exocrine pancreas, colon, and rectum, in response to a meal and insulin-induced hypoglycemia (Table).47,73 Administration of pharmacological doses of PP in humans decreases food intake for 24 hours74 and inhibits the gastric emptying rate, exocrine pancreatic secretion, and gallbladder motility.75 Changes in PP levels and their relation to the anorexigenic hormones insulin and leptin were studied in Obeldicks.25 At baseline, obese subjects had lower PP concentrations compared with lean controls.25 Following lifestyle intervention, PP concentrations significantly increased and tended to normalize in the children who achieved substantial weight loss in comparison with children who did not lose weight.25 Yet, the changes in PP concentrations did not significantly correlate to changes of insulin and leptin concentrations.25 Amylin is a 37–amino acid polypeptide synthesized and released together with insulin by the pancreatic beta cells in response to nutritional input and contributes to glycemic and appetite control (Table).26,76 Amylin is a satiety peptide, causing a reduction of the meal size and inhibition of gastric emptying. In addition, amyloid depositions that have been detected in pancreatic islets of T2DM play a central role in the development of beta-cell failure in T2DM.27 In obese children, amylin levels were significantly higher as compared with lean controls.26 Substantial weight loss in Obeldicks led to a significant decrease of amylin concentrations.26 Moreover, the increase of amylin levels in childhood was related to hypersecretion of insulin.26 Adipose tissue is not only important in energy storage, but it is also very active in producing hormones and cytokines (adipokines), which play a role in the pathogenesis of obesity-associated illnesses. Proinflammatory adipokines produced in adipose tissue are leptin, resistin, plasminogen activator inhibitor-1, interleukin 6, and tumor necrosis factor α. Several studies demonstrated higher levels of inflammatory markers in obese children than in normal-weight children77-81 and some of these could be normalized by lifestyle intervention.77 Adipokines are a possible link between insulin resistance and adiposity. The production of interleukin 6 leads to the increase of C-reactive protein, which represents a cardiovascular risk factor.82 Leptin is a 167–amino acid peptide formed in adipose tissue, forwarding information regarding energy supply and peripheral energy storage in adipose tissue to the brain, specifically the hypothalamus (Table). Leptin production is stimulated by insulin and glucocorticoids. The leptin receptor is a single-transmembrane domain receptor of the cytokine receptor family, which activates Januskinase2 in the signal transduction pathway and like insulin, activates IRS-PI3K in neurons of the ventromedial hypothalamus, whereby it induces central appetite inhibition and stimulates energy expenditure by increasing the central sympathetic tone. 
A high density of leptin receptors was found in the hypothalamic arcuate nucleus and the ventromedial hypothalamus nucleus.20,83 Leptin has a soluble receptor that represents the main binding site for leptin in blood and may be a negative regulator of free leptin.84 Leptin's homeostatic effect is anorexigenic, invoking satiety and ceasing nutritional intake. Leptin levels circulating in the blood correlate with the amount of adiposal tissue mass. Although it would seem that increased levels of leptin in overweight individuals would lead to appetite suppression and lower food intake, this does not occur because increased fat mass also leads to leptin resistance and decreased leptin signaling in the brain.72 This resistance then may lead to ineffective appetite inhibition and changes the set point of energy homeostasis, resulting in a defense of a higher level of body fat. Leptin deficiency in obese individuals is very rare and is caused by a homozygous ob gene mutation. Only a few patients and families have been reported in the literature. In these individuals, leptin levels are low and leptin therapy has been proven to be a causal treatment of obesity.24 In most obese individuals, however, serum leptin levels are upregulated because of the increased fat mass and leptin resistance, and leptin levels fall after successful weight loss,28,29 while decreased serum soluble receptor concentrations in obese children increase after weight loss.29 Adiponectin is synthesized and secreted exclusively by adipose tissue (Table). It exerts anti-inflammatory effects and appetite-restraining effects and counters insulin resistance, thereby offering protective mechanisms against the development of both T2DM and cardiovascular disease.82 Adiponectin also affects thermogenesis and adiponectin receptors are expressed in various peripheral tissues, including muscle, liver, and hypothalamus.85,86 The central appetite-adjusting mechanisms of adiponectin are not yet fully understood. Interestingly, adiponectin levels are reduced in states of obesity and T2DM.30,31 Adiponectin has anti-inflammatory properties and negatively correlates with cytokine levels and insulin resistance. Low adiponectin levels might play a role in the development of the metabolic syndrome and cardiovascular disease.30,31 In Obeldicks, adiponectin levels significantly increased and insulin resistance significantly improved in a parallel manner in the children who lost weight.30 In an even more recent study, Jeffery et al31 also studied this negative correlation of adiponectin levels in obese children and its role in mediating cardiovascular disease in children. They found clear links between adiponectin and features of the metabolic syndrome. Resistin is another hormone secreted by adipose tissue and is involved in insulin sensitivity. 
It has been shown to modulate both glucose tolerance and lipid metabolism in vivo and in vitro.87 Although some data are contradictory, it seems that insulin might inhibit resistin secretion; however, recent animal models show that insulin is not the major regulator of resistin.88 Two longitudinal adult analyses reported serum resistin changes to be positively correlated with changes in fat mass or weight loss,89,90 yet other adult studies reported no correlations.91-93 Resistin serum levels were studied in obese children after 1 year of weight loss.94 Girls demonstrated higher resistin concentrations than boys, but there were no differences of resistin levels between lean and obese children, and there were no significant changes after weight loss.94 Retinol-binding protein 4 (RBP4) is a recently identified adipokine secreted primarily from adipose tissue with some secretion by the liver. It is a proposed link between obesity and insulin resistance.95 In normal mice, elevated RBP4 levels caused insulin resistance in muscle and increased hepatic gluconeogenesis, whereas RBP4 gene knockout mice had increased insulin sensitivity.95 In adults96,97 and children79,98,99 with obesity and T2DM, elevated RBP4 levels have been correlated with insulin resistance. Two recent studies showed that lifestyle intervention almost reversed elevated RBP4 levels in obese children.79,99 In our lifestyle intervention,99 children with substantial weight loss demonstrated a significant decrease of RBP4 levels in a parallel manner to blood pressure and triglycerides and insulin levels. These data suggest a link between RBP4, obesity, and markers of the metabolic syndrome. Visfatin/NAMPT is a recently identified adipocytokine from visceral fat that was found in higher concentrations in obese than in nonobese children.100 Visfatin, originally named pre–B cell colony-enhancing factor, is from the same gene that encodes nicotinamide 5-phosphoribosyl-1-pyrophosphate transferase (NAMPT), an enzyme important in mammalian nicotinamide adenine dinucleotide (NAD+) biosynthesis.101 The relationship between visfatin/NAMPT and the parameters of glucose metabolism and insulin resistance is uncertain because of contradicting data102-107 potentially attributed to differences in immunoassay specificity.108 Recent evidence indicates haplodeficiency and chemical inhibition of NAMPT may cause defects in NAD+ biosynthesis.101 Alterations in NAD levels could alter activities of important enzymes in metabolic pathways such as glycolysis or fatty acid oxidation in pancreatic beta cells.101,109 After the appetite-inhibiting effect of leptin had been discovered, it was hoped that administration of leptin might be a cure for obesity. The attempts were disappointing, mostly because simple obesity results in leptin resistance. Only in very rare patients with congenital leptin deficiency does leptin treatment lead to a strong long-term reduction of overweight.24 Some of the gut hormones, such as GLP-1 and CCK, have a very short half-life of a few minutes in the circulation because of rapid degradation, thus limiting their use as antiobesity drugs. 
However, exendin-4 is a long-acting GLP-1 receptor agonist that has recently been approved by the US Food and Drug Administration for the treatment of T2DM and has also been associated with weight loss.110 Preliminary data from rat models suggest that oxyntomodulin may be useful in treating obesity.111 One such rodent study suggested that oxyntomodulin exerts its anorectic effect through the GLP-1 receptor, as it was ineffective in GLP-1 receptor knockout mice.112 In a recent 4-day human study, oxyntomodulin not only promoted weight loss but increased energy expenditure by more than 25%.113 GLP-1 and the amylin analogue pramlintide appear to decrease weight in patients with T2DM, which is an important secondary goal in treating these patients.114 Pharmaceutical studies have recently been performed to explore amylin's therapeutic potential for treating both obesity and diabetes.114-116 Traditional pharmacotherapies to treat T2DM often exacerbate obesity, undermining any benefits of improved glycemic control as well as patients' compliance with the treatment. The addition of leptin after amylin pretreatment elicited even greater weight loss compared with both monotherapy conditions, providing both nonclinical and clinical evidence that integrated neurohormonal approaches to obesity pharmacotherapy may facilitate more successful weight loss by emulating naturally occurring synergies.116 Administration of PYY in obese humans has been reported to reduce food intake in the short-term.117 Several gut hormone–based treatments for obesity are under investigation in phase 2 and 3 clinical trials, and many more are in the pipeline.115 These gut peptides need to be injected. Orally active inhibitors of the incretin-degrading enzyme dipeptidyl peptidase-IV offer an alternative.118 Probably the most important conclusion to be made from the data presented herein is that the pathologies affecting energy homeostasis in obese children have at least as much to do with the endocrine phenomena that are involved in the communication between peripheral tissues (gut, adipose tissue) and the brain as they have to do with genetics and sociocultural and lifestyle factors. This implies that the solutions to this serious, ever-escalating threat to both the life span and life quality of our children are also embedded in a better understanding of the endocrine status of the obese child. The majority of the changes of gut hormones and adipokines observed in obese children are reversible after weight loss, and therefore, pharmacological interventions based on these hormones will likely not solve the obesity epidemic in childhood. However, successful solutions have much to do with the length and thoroughness of interventions; there are no simple, quick-fix pharmaceutical solutions to sustainable weight loss. Understanding the pathways of body weight– and food intake–regulating gut- and adipose tissue–derived hormones will help to find better answers that can effectively combat both childhood obesity as well as the plethora of pathologies that it either causes or exacerbates. Correspondence: Christian L. Roth, MD, Division of Endocrinology, Seattle Children's Hospital Research Institute, 1900 Ninth Ave, Seattle, WA 98101 (email@example.com). Accepted for Publication: October 6, 2009. Author Contributions:Study concept and design: Roth. Analysis and interpretation of data: Roth and Reinehr. Drafting of the manuscript: Roth. Critical revision of the manuscript for important intellectual content: Roth and Reinehr. 
Statistical analysis: Roth. Obtained funding: Reinehr. Administrative, technical, and material support: Roth. Study supervision: Roth and Reinehr. Financial Disclosure: None reported.
Educators' guide for the Assessment of Basic Language and Learning Skills. ICT offers classroom training with individual help sessions for hands-on learning; classes are available for all levels, morning and evening, to fit your schedule. Learn English with free videos and materials from BBC Learning English; the site helps you improve your pronunciation, grammar, and vocabulary. The Learn English Network has everything you need to become a confident English learner and one of the friendliest communities on the web. Welcome to Learning English: lots of free online activities help teenagers and adults practise their English; choose the skill you want to practise and the level that's right for you. Welcome to LearnEnglish: learn English online and improve your skills through high-quality courses and resources, all designed for adult language learners and created by the British Council, the world's English teaching experts. Study English for free, your connection to the world: Learning English Online offers grammar, vocabulary, exercises, tests, and games. You will find a lot of information about the English language on this site; you can learn English words, practise grammar, review basic rules, prepare for exams, do tests, or just have fun playing games. Learn American English with language lessons from Voice of America: VOA Learning English teaches vocabulary, listening, and comprehension through daily news. To thank you for reaching 300k subscribers, here is a 30-minute video to master all the basics of the English language. One course includes eight lessons that help beginners develop basic skills in the English language. When you start learning English, you should definitely learn the alphabet and the sounds of English; useful information about English pronunciation for beginners covers topics such as the English alphabet and the definite article. Welcome to LearnEnglish Teens. Covid-19 learning support: LearnEnglish Teens wants to support students who can't go to school at the moment, and you can find tips and advice for learning at home on its support page. LearnEnglish Teens is brought to you by the British Council. Intermediate-level articles are for individuals with a limited knowledge of American English; stories are often between 500 and 1,000 words in length and may include audio from newsmakers. You can get unlimited access to helpful learning materials and activities when you subscribe to LearnEnglish Select, a flexible, online way of learning English: study at your own pace, access lessons whenever and wherever you want, for as long as you like, and receive exclusive new content every month. Learn how to speak English with the BBC: every day there is a new video to help you learn the English language, plus regular 'extra' videos across the week. Learning. Teaching. Using English! UsingEnglish.com provides a large and growing collection of English as a Second Language (ESL) tools and resources for students, teachers, learners, and academics, covering the full spectrum of ESL, EFL, ESOL, and EAP subject areas. 
UsingEnglish.com was established in 2002 and is a general English language site specialising in English as a Second Language, with separate sections for students and teachers. Preparing to learn English online is simple: you need a stable internet connection, a computer, and the free video-conferencing tool Zoom. How long will it take to improve your language skills? It depends; as is the case when learning any new skill, your progress depends on your efforts. Learning English is what people do when they want to use the English language. In language learning, we often talk about language skills and language systems: language skills include speaking, listening, reading, and writing, while language systems include vocabulary, grammar, and pronunciation. A lot of people learn English at school, where English is a common subject. Among the reasons why learning English is a good decision: you will discover that English is easy to learn. Most people think that learning a language is very difficult, but in many cases it is easier for certain people to learn English because English is related to their native language, for example for many people from Europe. Business English basics cover topics such as computers and computing (the parts of a computer), days and dates and making arrangements (useful for organising your diary), English for meetings (the structure of a typical business meeting, roles in formal meetings and who does what, and common phrases to use in meetings), and jobs, work, and professions, including common job interview language. Among the best podcasts for English learners of all levels, Podcasts in English is remarkable if only for its sheer variety: the website offers a wealth of podcasts for every level, the episodes are quite short (normally under 5 minutes), perfect for those pressed for time, and they address entertaining topics. Under the topic of learning and teaching, there are short- and long-term solutions for those who want to speak more quickly and smoothly, also useful for teachers planning classroom fluency practice. Learners of English report: "Through my own studies and the help of italki's tutors, I was able to receive a score of 7.5 on my first IELTS test" (Andy); "I was a little bit skeptical in the beginning. I was used to studying with books and I was missing this. However, I quickly realized that it was the best tool I have ever found" (Lindalva). Learn 3,000 words with News in Levels: News in Levels is designed to teach you 3,000 words in English. More than 80,000 English ESL worksheets, activities, and video lessons are available for distance learning, home learning, and printable use in the physical classroom. Look up the meanings of words, abbreviations, phrases, and idioms in a free English dictionary. Keep a language log or journal: either buy a notebook or dedicate a file on your cell phone or computer to your English learning experience, and every time you learn a new, important word, add it to your log, along with a definition and an example, if possible. 
You can also write about other experiences in English if you want more writing practice. "The world has a new mania — a mania for learning English," said Jay Walker on the TED stage in 2009. English is accepted as a shared language of science, a language of global business, and the language of the Internet, with at least 1.5 billion students learning it worldwide; so the TED distribution team wondered: what if students could l… Learning English level 4 is currently the hardest level; choose the lesson you want by clicking on the link. Lessons include "A beautiful sunrise" (English reading and writing practice, May 20, 2020), "A school dinner lady" (reading and writing English lesson, Apr 04, 2020), "At the Airport" (conversation, May 16, 2020), and "Autumn and fall: what's the difference" (English lesson, Apr 14). Learn English and get access to plenty of useful knowledge: read some of the greatest English literary works in the original and rediscover them in new ways, find the necessary literature on any subject, and understand articles written at the best universities in the world. Unlike other languages, English can be practiced anywhere in the world. Learn English for free: one learning portal offers a complete course with over 60 conversational situations to listen to and many exercises to help you practice your English, from introducing yourself to asking someone how they are doing. Another free site for English learners offers free English vocabulary sheets, grammar sheets, exercises, and lessons; thousands of English penpals are waiting to help you learn English, and there is an English forum too. From the news (26 March 2013): mentors support children with limited English, but 'gaps' in specialist skills remain, and rising numbers of school children with limited English put pressure on schools. The included subject matter and activities have been written and designed by TEFL (Teaching English as a Foreign Language) experts. The difficulty ranges from pre-intermediate to advanced level English according to the CEFR (Council of Europe language level) scale; this is technically classified as an ESP (English for Specific Purposes) course, teaching hundreds of relevant vocabulary terms. There is also an intermediate English course for children which teaches and reinforces complex grammar, vocabulary, and sentence forms. Among the best YouTube channels for learning English are: 1) British Council LearnEnglish, the official YouTube channel of the British Council, with professionally produced videos; and 2) Anglo-Link, run by Minoo Short of the UK, with lessons that are several minutes long and tightly focused. Among the best learning-English podcasts: linguistic podcasts are at the forefront of today's educational landscape, where digital advancements have facilitated contextual learning, enabling users to use technology to learn dialect, phonetics, semantics, and more. Learning English Lesson One is the seventh studio album by the band Die Toten Hosen and their first album with exclusively English lyrics; it was produced over the course of a year in London, New York, Rio de Janeiro, and Cologne under the direction of Jon Caffery and was first released on 11 
November 1991. Learn how to speak English quickly with a complete English online course: whether you're learning as a beginner or at a more advanced level, a course covering everything from English pronunciation and grammar to English expressions will help you move past the basics and become fluent in English. In the self-study section of one site you can find materials for learning English and review the words, phrases, and structures from any page with the Expemo app. Learning English also makes it much easier to travel anywhere you want: aeroplane announcements, train timetables, emergency information, and street signs are often translated into English, particularly in countries that use a different type of alphabet. English songs with subtitles and lyrics let you read as you listen (singer: Jonathan Taylor); these are special ESL songs to help in learning English. VOA's Learning English broadcasts use a limited vocabulary and are read at a slower pace than VOA's other English broadcasts; the service was previously known as Special English. Online courses are a great way to learn English or improve your existing English language skills, no matter what your goals are: if you need to improve your ability to speak English, you can take courses in English pronunciation and grammar or in more specialized areas like business English and email writing, and native speakers looking to take their skills to the next level can take further courses. LEO's English–German dictionary lets you look up the translation of 'learning', with inflection tables for the various cases and tenses, pronunciation, relevant discussions, and a free vocabulary trainer. Other guides cover how to learn English and organize your learning for maximum results, tips and ideas on the best ways to learn English faster, six ideas to help beginners learn English faster, the four language skills (listening, speaking, reading, and writing) and why you need them, and oral fluency in English. To pick up a British accent quickly, immerse yourself in British culture: surround yourself with people who speak British English, and soon you'll find yourself naturally able to speak with those variations; anything with a British speaker will work, so try listening to the BBC, which provides free radio and television newscasts. Welcome to Learn English Now: the ability to speak English will be a great blessing in your life, as English skills can improve your daily life, help you pursue educational opportunities, lead to better employment, and expand your circles of friends and acquaintances. EnglishConnect is made up of several English courses, and Learn English Now is for novice speakers without internet access. Learning English is also a great way to boost your career and increase your chances of getting a job, especially in the business environment, whether in a multicultural company or working abroad. Being an English language learner is a great way to boost your language capacity, setting you on a path to becoming bilingual or more. Learning English has never been so fun, or so addictive: download Duolingo. Quiz your English is the best app for exam preparation; if you're preparing for an exam, Quiz your English will be your new favourite app.
It's designed by Cambridge Assessment English and there are levels which are specifically for the Cambridge B2 First and IELTS. You can expect to see the kind of vocabulary and. Erfahren Sie mehr über Veröffentlichungen, Rezensionen, Mitwirkenden und Lieder von Die Toten Hosen - Learning English, Lesson One auf Discogs. Lesen Sie Rezensionen und informieren Sie sich über beteiligte Personen. Vervollständigen Sie Ihre Die Toten Hosen-Sammlung Intermediate:- How long have you been learning English? Advanced:- Why are you learning English? 39 discussions 18.2K comments Most recent: New members - Introductions by Teach 1:20PM . Let's Practise and Learn For the things we have to learn before we can do them, we learn by doing them. Good old Aristotle. 633 discussions 23.2K comments Most recent: How are you feeling today? by. Every Monday, we share images and quotes aimed to inspire and motivate you along your English learning journey. We all need a little pick-me-up now and then, especially those who are trying to learn English!And since we've been sharing inspirational messages for so long, we've collected a few of your favorites for days when you need extra special English learning motivation .TEFL can occur either within the state school system or more privately, at a language school or with a tutor Englishlin Advance your English skills with one-to-one training in pronunciation, grammar and more. Study pronunciation to excel in presentations, interviews and networking The fastest, easiest, and most fun way to learn English and American culture. Start speaking English in minutes with audio and video lessons, audio dictionary, and learning community! Hallo, Pooh, you're just in time for a little smackerel of something. Sign In Learn English. Thousands of lessons. No credit card needed. Join Now. Or sign up using Facebook Continue with Facebook By clicking. Learn English for free with 1680 video lessons by experienced native-speaker teachers. Classes cover English grammar, vocabulary, pronunciation, IELTS, TOEFL, and more. Join millions of ESL students worldwide who are improving their English every day with engVid Learn English Online has been providing support to ESL and EFL learners and teachers for free since 1999. We do what we do for the love of English Teaching beginners can be a daunting prospect, especially when it's a monolingual group and you know nothing of their language, or it's a multilingual group and the only common language is the English you've been tasked with teaching them. Nevertheless, not only is it possible to teach beginners only through English, but it can also be one of the most rewarding levels to teach. To help. Babbel is the new way to learn a foreign language. The comprehensive learning system combines effective education methods with state-of-the-art technology. Interactive online courses will improve your grammar, vocabulary and pronunciation skills in no time. You'll make fast progress and have fun doing it Englisch lernen Mit dem Spotlight-Magazin, Spotlight Audio sowie dem Übungsheft Spotlight Plus trainieren Sie Ihre Englischkenntnisse ganz nebenbei.. Mit jeder Ausgabe erwarten Sie fundiert recherchierte Artikel, die spannende Einblicke in die englischsprachige Welt erlauben - und gleichzeitig mit Übungen und Lernhilfen Ihre Sprachkenntnisse verbessern Learn English online with a humorous story, personalized lessons, and daily study reminders. 
Special offer: One month free Gymglish - curso de inglés por Internet Aprende inglés a través de una historia humorística, lecciones personalisadas y recordatorios cotidianos por email. Oferta especial: Un mes gratis Gymglish - Cours d'anglais sur Internet Apprendre l'anglais à travers une. National Geographic Learning and English Language Teaching. National Geographic Learning's mission is to bring the world to the classroom and the classroom to life. With our English language programs, students learn about their world by experiencing it. Through our partnerships with National Geographic and TED, they develop the language and skills they need to be successful global citizens. English Course Outline; Learn Japanese: Online Language Course for All Levels; Master pronunciation on our iOS app with Slowdown audio; New articles. Meet new people and make friends while learning a new language; Become fluent in a new language with just 10 minutes a day; English Course Outline ; Try This Free German Language-Learning App; Japanese Alphabet: Learn Hiragana, Katakana & Kanji. In this lesson, learners will practise the alphabet, the letter names and writing upper-case and lower-case letters through a song and various games. For extension, they will watch a story about animals for every letter of the alphabet and work as a class to produce an alphabet wall frieze or a picture dictionary learning english 16 Question strips adapted from the above Pair Work activity. These questions can be used with students seated in pairs or in small groups, or with students standing So learning English isn't just a question of learning the rules - it's about learning the many exceptions to the rules. The numerous exceptions make it difficult to apply existing knowledge and use the same principle with a new word, so it's harder to make quick progress. The order of the words . Native English-speakers intuitively know what order to put words in, but this is hard to. TeachingEnglish is brought to you by the British Council, the world's English teaching experts. If you want help planning your lessons, you've come to the right place! We have hundreds of high-quality resources to help you in the classroom as well as articles, videos, publications and courses to help you with your continuing professional development as a teacher or teacher educator. Getting. Learning English Will Help Open Your Mind We believe that we are all brought up to see the world in one way. That's a good thing, but at a certain point, we need to expand our horizons. Learning English will help you understand the world through a different language Englisch-Deutsch-Übersetzungen für learning im Online-Wörterbuch dict.cc (Deutschwörterbuch)
<urn:uuid:8b29de26-9f39-4b39-8c71-6f9180621d8e>
CC-MAIN-2022-33
https://erstverstecken.com/english/learn-onlinexjc-z1536892
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572212.96/warc/CC-MAIN-20220815205848-20220815235848-00499.warc.gz
en
0.889381
4,398
2.625
3
Stable isotopes of oxygen and hydrogen in water have become state of the art parameters in hydrological studies to delineate water sources (Yurtsever and Gat, 1981; Joussaume et al., 1984; Rozanski et al., 1993; Bowen and Revenaugh, 2003; Terzer et al., 2013; Stumpp et al., 2014; Galewsky et al., 2016) and investigate processes, such as evaporation (Craig and Gordon, 1965; Gibson et al., 2016) and transpiration (Dongmann et al., 1974; Helliker and Ehleringer, 2000). Choosing the oceanic reservoir as a starting point of the hydrologic cycle, evaporation of isotopically relatively uniform ocean water enriches the resulting water vapour in light isotopes. Condensation of this water vapour in clouds during the formation of rain is assumed to be an isotopic equilibrium process, which depends only on temperature. Consequently, the isotopic composition of precipitation is subject to several effects that mostly depend on the temperature during condensation. They include the altitude effect, the latitude effect, and seasonality. It also correlates with travel distance, and thus continentality, because of Rayleigh fractionation during subsequent rainout events (Clark and Fritz, 1997; Gat et al., 2000). On a global scale, these effects average the isotope abundance of oxygen and hydrogen in precipitation to a linear relationship, which has been termed the global meteoric water line (GMWL) and defined by Craig (1961) as:

δ2H = 8 δ18O + 10‰ (1)

To reduce the number of effects that have to be taken into account during the evolution from ocean water to precipitation, Dansgaard (1964) introduced the concept of a deuterium excess value (d), which ideally should not be altered by isotope equilibrium effects. It is defined as

d = δ2H – 8 δ18O (2)

Evaporation as a non-equilibrium process is generally assumed to be the major modifier of d-values, allowing direct comparison of oceanic water vapour and moisture precipitating from clouds (Merlivat and Jouzel, 1979). On large scales and assuming a closed water cycle, d-values of average oceanic water vapour are mainly controlled by relative humidity (RH) and sea surface temperature (SST). Precipitation along the GMWL is generated for RH and SST values of 85% and 25 °C, respectively (Clark and Fritz, 1997). Based on the closure assumption of the water cycle, simple linear relationships between d and RH, and between d and SST, were introduced to use source values of d as a 'fingerprint' to trace water in the atmosphere up to its point of rainout (Rindsberger et al., 1983; Johnsen et al., 1989; Pfahl and Sodemann, 2014). One example of these relationships expresses d at the location of evaporation as a linear function of RH. Note that after the condensation of water vapour in clouds, rain droplets can be subject to additional evaporation during their fall through a warm and dry air column, which shifts their d-value away from its original vapour composition (Friedman et al., 1962; Stewart, 1975). This effect is more pronounced for small rain amounts and can be accounted for if precipitation amount weighted d-values are considered (Lee and Fung, 2008). These theoretical basics enable applications to estimate contributions of recycled inland water to precipitation (Froehlich et al., 2008; Aemisegger et al., 2014; Parkes et al., 2017). They may also discern and compare meteorological patterns (Liotta et al., 2006; Guan et al., 2013), and enable the paleoclimate interpretation of ice cores (Jouzel et al., 1982; Steffensen et al., 2008).
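To make the two defining equations above concrete, the following minimal sketch (in Python, with invented example values; not part of the original study) computes the deuterium excess of a precipitation sample and its offset from the GMWL.

```python
# Minimal sketch: deuterium excess (Dansgaard, 1964) and offset from the
# global meteoric water line (Craig, 1961). Delta values are in per mil
# versus VSMOW; the sample values below are invented for illustration.

def deuterium_excess(delta2h: float, delta18o: float) -> float:
    """d = delta2H - 8 * delta18O (per mil)."""
    return delta2h - 8.0 * delta18o

def gmwl_delta2h(delta18o: float) -> float:
    """delta2H predicted by the GMWL: delta2H = 8 * delta18O + 10 (per mil)."""
    return 8.0 * delta18o + 10.0

if __name__ == "__main__":
    d18o, d2h = -12.0, -82.0            # example precipitation sample
    d = deuterium_excess(d2h, d18o)     # 14.0 per mil
    offset = d2h - gmwl_delta2h(d18o)   # vertical distance from the GMWL: 4.0 per mil
    print(f"d-excess = {d:.1f} per mil, offset from GMWL = {offset:.1f} per mil")
```

In this toy example the sample plots 4‰ above the GMWL, which is exactly its d-excess minus the global average of 10‰.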
Some of the highest natural deuterium excess values in precipitation have been recorded in the Mediterranean, with maximum long-term mean values of 22‰ (Gat and Carmi, 1970). High-d water vapour is produced when cold, dry air from the surrounding continents interacts with the warm Mediterranean seawater and enhances isotope fractionation during evaporation (Gat and Carmi, 1970; Gat et al., 1996). This high-d signal is transported eastward, mainly by the prevailing Westerlies. Extending beyond the countries in direct contact with the Mediterranean, several studies from countries further away refer to Mediterranean moisture as an explanation of unusually high d-values in precipitation and surface waters. These include studies from Syria (Kattan, 1997; Al Charideh and Zakhem, 2010), Jordan (Bajjali, 1990), Saudi Arabia (Alyamani, 2001; Michelsen et al., 2015), Iraq (Hamamin and Ali, 2013; Ali et al., 2015), Iran (Osati et al., 2014; Parizi and Samani, 2014), Pakistan (Hussain et al., 2015), Tajikistan (Liu et al., 2015), northern India (Jeelani et al., 2013, 2017), and western China (Yao et al., 2013; Wang et al., 2015). Moreover, the moisture source of precipitation in Central Asia has been discussed by several studies, which identified a number of air travel pathways and named Mediterranean moisture as one of the potential main sources for precipitation in the region (Aizen et al., 1995; Kreutz et al., 2003; Tian et al., 2007), while other studies identified the Indian Ocean as another major moisture source (Aizen et al., 1996; Karim and Veizer, 2002; Jeelani et al., 2017). Additional proposed pathways include polar air masses (Tian et al., 2007) and more continental Westerlies (Aizen et al., 1996; Meier et al., 2013; Pang et al., 2014). In several of these studies, the notion of Mediterranean moisture as the only explanation for high d-values appears quite speculative and is often not discussed in detail. Our study challenges this rather simple notion and aims for a detailed investigation of air and moisture pathways to the Western Pamir Mountains in Tajikistan. The central objective of this work is to identify, and attempt to quantify, the influence of Mediterranean moisture and its isotope composition on precipitation in our study region, to test the hypothesis that the Mediterranean is a major moisture source to this area. The Pamir Mountains (or the Pamirs) are an excellent choice for this task because they lie directly in the proposed pathway of Mediterranean moisture and occupy a central geographic position in the Asian continent.

Study region and concept

The Pamir Mountains are a high mountain region in Central Asia, mainly within Tajikistan. They are bordered by other high mountain regions, such as the Kunlun Mountains, the Hindu Kush, the Karakoram Range, and the Tien Shan. Regarding morphology, the Pamir Mountains can be subdivided into the Western and Eastern Pamir Mountains. The Western Pamir Mountains exhibit a rugged alpine environment with deeply incised valleys, while the Eastern Pamir Mountains show smaller flat-bottomed valleys set on a high plateau and a lower morphological gradient between peaks and the valley floor. The study region selected to examine the fate of Mediterranean moisture lies in the Western Pamir Mountains. In order to study this region, water samples for stable isotope analyses were collected at two study sites in Tajikistan, Khorugh and Navabad, which are 34 km apart (Table 1). Over the course of 15 months, 19 monthly-integrated samples were collected in 2012 and 2013.
Subsequently, sampling of single rain events took place in 2013 and 2014, resulting in a further 89 event samples collected over a period of 13 months (Meier et al., 2015a, 2015b). Both sample types are examined separately to test our hypothesis of Mediterranean moisture influence in the Western Pamir Mountains. In order to cope with the coarser temporal resolution of the integrated samples, their d-values are compared to monthly d averages of isotope precipitation monitoring stations along the routes of commonly assumed air travel pathways. As mentioned in the introduction, one of these pathways connects the Mediterranean to the Pamir Mountains. To access isotope data, the Global Network of Isotopes in Precipitation (GNIP) was used in a first step (GNIP, IAEA/WMO, 2018). Additional isotope data were collected and evaluated from numerous publications for areas with sparse spatial coverage by GNIP stations. A full list of GNIP stations and additional data from publications used in this study can be found in Table 1. These stations and further isotope data were clustered and subdivided into ten regional classes based on their geographical position: W Mediterranean, E Mediterranean, Levant, Middle East, Caspian Sea, Polar, Persian Gulf, N India, Kabul & Kashmir, and Western Pamir (Fig. 1 and Table 1). From the monthly d-values of all stations in one class, two types of averages were calculated for each month: the arithmetic mean and the precipitation-amount-weighted mean.

Trajectory calculation and classification

The d-values of event samples are more easily allocated to specific air travel pathways via computed backward air mass trajectory models, such as HYSPLIT (Stein et al., 2015). The necessary user input for these trajectory models consists of a location and a date of the associated precipitation event. From gridded meteorological data, such as the Global Data Assimilation System (GDAS; NOAA, 2018), HYSPLIT computes regularly spaced data points along trajectories which include additional parameters such as the elevation of the air parcel, the specific humidity of the air parcel and the elevation of the planetary boundary layer (PBL). These parameters can be used to categorise the resulting trajectories into the wedge-shaped or radial regional classes that were pre-defined to represent areas of possible moisture origin. Note that the classes for integrated and event samples are not necessarily identical. The classes of integrated monthly samples aggregate multiple, spatially close stations and represent the regions where precipitation collection took place. Event samples were divided into classes that correspond to possible main moisture sources. They are wedge shaped due to the central starting point of the calculated trajectories. A detailed summary of data processing for event trajectories is given in the following paragraph. Four of the 89 events that were sampled at Khorugh and Navabad in the Western Pamir Mountains showed d-values below –10‰, which hints at sample alteration (Michelsen et al., 2018). These events were excluded from further calculations. For the remaining 85 single precipitation events, backward air mass trajectories were calculated for each location with the Hybrid Single-Particle Lagrangian Integrated Trajectory model (HYSPLIT) of the National Oceanic and Atmospheric Administration (NOAA) Air Resources Laboratory (Stein et al., 2015; Rolph et al., 2017).
GDAS1 grids with 1°×1° resolution from the National Oceanic and Atmospheric Administration and National Centers for Environmental Prediction (NOAA/NCEP) were used as meteorological input data. Starting times for the trajectories were chosen in accordance with precipitation records of the United States Air Force (USAF) station in Khorugh (Tajikistan; station ID 389540). The time period covered by each trajectory was set to 7 days, and hourly data points along the backward trajectories were produced. In order to follow the specific air mass that produced precipitation at the sampling site, the trajectory starting altitude should correspond to the altitude of the rain clouds during the sampled events. Since this cloud altitude was not measured, eight different trajectory starting altitudes were used as inputs, at 150, 300, 500, 1000, 1500, 2000, 2500, and 3000 meters above ground level, thus generating eight trajectories per event. In order to select one of the eight starting altitudes for further evaluation, the evolution of specific humidity along each trajectory was assessed. For air mass altitudes below the height of the PBL derived from the HYSPLIT model, an increase in specific humidity was assumed to correspond to evaporation from the underlying ground area, and a decrease in specific humidity was assumed to correspond to precipitation (Bottyán et al., 2017). For air mass altitudes above the height of the PBL, humidity changes were assumed not to be due to ground-surface interaction and were consequently ignored. From the eight trajectories per event, the one with the largest sum of specific humidity increases below the PBL height was selected. This selection ensured that the trajectory representing a precipitation event has a history of maximum moisture uptake and is thus regarded as the most representative for the sampled precipitation (Bottyán et al., 2017); a short code sketch of this selection step is given further below. To evaluate the origin of moisture in these 85 selected trajectories, radial regional classes around the precipitation collection site were constructed (Fig. 2). Each trajectory was allocated to one of these radial classes. The class boundaries were chosen to include larger bodies of surface water that may act as moisture sources (Mediterranean Sea, Caspian Sea, northern Indian Ocean). If a trajectory crossed areas of several classes, it was associated with the class where most of the moisture entered the air parcel according to the model. Since the boundaries of the constructed wedge-shaped classes all converge at the sampling locations in the Pamir Mountains, a correct classification of air parcels that receive their moisture close to the sampling site is difficult. For this reason, an additional 'local' class was defined for air parcels that received most of their moisture in close proximity (<3° distance, ≈300 km) to the sampling sites. Further classes were introduced because some trajectories originated, and took up their moisture, outside the reasonable boundaries of the area covered by the radial classes. This eventually resulted in nine classes: Polar, Caspian Sea, Mediterranean, Persian Gulf, N Indian Ocean, Local, Africa, W Atlantic, and E Atlantic (Fig. 2).

Field sampling and laboratory analyses

Monthly-integrated precipitation samples were collected from Hellmann-type rain gauges. To avoid evaporation effects, samples were collected at least twice per day, stored in high-density polyethylene (HDPE) bottles, and combined into monthly samples.
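Picking up the trajectory-selection criterion described above (eight candidate starting altitudes per event, with moisture uptake counted only while the parcel is below the PBL), the following is a minimal sketch of that selection logic. The point structure is a simplified stand-in for HYSPLIT output, and all numbers are invented for illustration.

```python
# Sketch of the trajectory selection criterion: among candidate back-trajectories
# (one per starting altitude), keep the one with the largest summed increase in
# specific humidity at points below the planetary boundary layer (PBL), taken
# here as a proxy for moisture uptake from the surface. The input format is
# assumed and does not reproduce actual HYSPLIT output files.

def moisture_uptake(points):
    """Sum positive specific-humidity changes while the parcel is below the PBL.

    points: list of dicts with keys 'q' (specific humidity, g/kg),
            'z_agl' (parcel altitude above ground, m) and 'pbl' (PBL height, m),
            ordered in time along the trajectory.
    """
    uptake = 0.0
    for prev, curr in zip(points, points[1:]):
        dq = curr["q"] - prev["q"]
        if dq > 0 and curr["z_agl"] < curr["pbl"]:
            uptake += dq
    return uptake

def select_trajectory(candidates):
    """candidates: dict mapping starting altitude (m AGL) -> list of points."""
    return max(candidates.items(), key=lambda item: moisture_uptake(item[1]))

# Example with two toy candidates (150 m and 1500 m starting altitude):
candidates = {
    150:  [{"q": 2.0, "z_agl": 100, "pbl": 800},
           {"q": 3.5, "z_agl": 300, "pbl": 800}],
    1500: [{"q": 2.0, "z_agl": 1600, "pbl": 800},
           {"q": 2.4, "z_agl": 1700, "pbl": 800}],
}
altitude, best = select_trajectory(candidates)
print(altitude, moisture_uptake(best))   # 150 1.5
```

In the toy example the 150 m trajectory is selected because its humidity gain occurs below the PBL, whereas the gain along the 1500 m trajectory is ignored.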
Event samples were transferred from the Hellmann rain gauges into durable PE bags (Whirl-Pak, Nasco, Fort Atkinson, WI, USA). These water-filled bags were put into HDPE bottles for shipping. Precipitation samples were analysed in the stable isotope laboratory of the Department of Catchment Hydrology at the Helmholtz Centre for Environmental Research – UFZ in Halle/Saale (Germany). Water samples were analysed for δ18O and δ2H by an isotope ratio infrared spectroscopy (IRIS) analyser based on wavelength-scanned cavity ring-down spectroscopy (L2120-i, Picarro Inc., Santa Clara, CA, USA). Each sample was measured with nine injections, of which the first three were rejected to exclude memory effects. All isotope measurements were scale normalised against the high and low international reference materials Vienna Standard Mean Ocean Water (VSMOW) and Standard Light Antarctic Precipitation (SLAP). The two-point calibration was controlled by a third laboratory reference water that was calibrated directly against VSMOW and SLAP. All values are reported in the standard δ-notation in per mil (‰) versus VSMOW according to

δ = (Rsample/Rstandard – 1) × 1000

where R is the ratio of the heavy to the light isotope in the sample and in the standard, respectively.

Oxygen and hydrogen stable isotopes

The results of the monthly integrated samples for oxygen and hydrogen stable isotopes are shown in a dual isotope plot of δ2H against δ18O (Fig. 3). Values for δ18O range between –24.4‰ and –2.0‰, with minimum values reached in the cold season (boreal winter) and maximum values observed during the warm season (boreal summer). The Local Meteoric Water Line (LMWL) calculated from the samples is given in Fig. 3.

Monthly integrated samples

The first region to consider along the proposed W–E transect is the Western Mediterranean. Stations in this region have an average deuterium excess of 10‰–12‰ in the cold season and around 3‰ in the warm season (Fig. 4). The seasonal evolution is a smooth sinusoidal course between those two extremes. The differences between precipitation-weighted and non-weighted averages increase from 1‰ in the cold season to 3‰ in the warm season. This seasonal pattern continues in an eastward direction in the eastern Mediterranean and the Levant, where cold season values reached 17‰ and 22‰, respectively (Fig. 4). Warm season values scatter around 6‰–7‰, which results in a more pronounced seasonal oscillation in the Levant region when compared with the eastern Mediterranean. For July and August, no monthly isotope values were recorded in the Levant region because of the absence of rain. Precipitation amount-weighted to non-weighted differences increase from cold to warm seasons in the eastern Mediterranean. However, these differences decrease in the Levant region. For all three of these Mediterranean-influenced regions, the spatial and temporal resolution of isotope data is very good, and the averages of d-values can be assumed to be representative of their respective regions. The Middle East region (for the extent of this region as defined in this study, see Fig. 1) is dominated by data from the GNIP station at Teheran (Iran), while additional measurements are mainly single monthly values from various sampling sites in Iraq (Fig. 1). The seasonal variations of d-values are similar to those of the regions located to the west (W and E Mediterranean, Levant) described above (Fig. 4); however, they show more variable values for late summer due to small sample volumes. An approximate difference between cold and warm seasons of about 15‰ in d-values is comparable to the seasonal differences in the Levant region.
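Because comparisons of precipitation-amount-weighted and unweighted monthly d averages recur throughout these results, a minimal sketch of how the two kinds of monthly class averages described in the methods section can be computed is given below; the record layout and values are assumed for illustration only and are not the study's data format.

```python
# Sketch of the two monthly class averages compared throughout this section:
# the arithmetic mean of station d-values and the precipitation-amount-weighted
# mean. Records are (month, d-excess in per mil, precipitation in mm); the
# numbers are invented for illustration.
from collections import defaultdict

records = [
    (1, 16.0, 40.0),   # January: d = 16 per mil in a wet month (40 mm)
    (1, 20.0, 10.0),   # January: d = 20 per mil in a drier month (10 mm)
    (2, 12.0, 55.0),
]

d_sum = defaultdict(float)      # month -> sum of d-values
dp_sum = defaultdict(float)     # month -> sum of d * precipitation
p_sum = defaultdict(float)      # month -> sum of precipitation
count = defaultdict(int)        # month -> number of station values

for month, d, precip in records:
    d_sum[month] += d
    dp_sum[month] += d * precip
    p_sum[month] += precip
    count[month] += 1

for month in sorted(count):
    arithmetic = d_sum[month] / count[month]
    weighted = dp_sum[month] / p_sum[month]
    print(f"month {month}: arithmetic d = {arithmetic:.1f}, weighted d = {weighted:.1f}")
```

In the toy January data the weighted mean (16.8‰) falls below the arithmetic mean (18.0‰) because the high-d value comes from a small rain amount, which is the kind of difference discussed for the stations above.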
The Middle East absolute values range from about 0‰ to about 17‰ (Fig. 4), while the d-values observed in the Levant range from ∼7‰ to ∼22‰. South of the Middle East landmass, the Persian Gulf acts as a moisture source for passing air masses. In this region, the only available GNIP station, Bahrain, exhibits d-values between 19‰ in the cold season and 7‰ at the beginning of the warm season (Fig. 4). A strong difference between precipitation-weighted and non-weighted d averages was found here. Moreover, in summer, the absence of precipitation leaves a gap in the d-record. In general, the seasonal d-value patterns of the region around the Persian Gulf are comparable to the d-values of the Middle East region. Following the transect further eastwards, a distribution between cold and warm seasons similar to the Mediterranean can also be observed at the GNIP station of Kabul (Afghanistan). This is also reflected in the precipitation samples of Jeelani et al. (2017) that were collected from the Kashmir Valley (India), with high d-values of ∼21‰ in the cold season and low values of 3‰–7‰ in the warm season (Fig. 4). The d-values of stations in Northern India are less variable than those of the stations described so far, with a minimum–maximum difference of around 8‰ (Fig. 4). Precipitation-weighted values generally decrease from 14‰ in January to 6‰ in June. From July to October, d-values are stable around 8‰. A notable difference between precipitation-weighted and unweighted averages can be observed from March to June. Stations that were assigned here to the Polar class show d-values between 4‰ and 10‰ (Fig. 4). Contrary to the seasonal distribution pattern with a single summer minimum that is observed eastward from the Mediterranean, this Polar class has two minima, in February and July, with low d-values around 4‰. A first maximum is found in April with around 7‰. From July to November, d-values steadily increase to a second peak of ∼10‰. Two stations to the northwest of the Pamir Mountains were assigned to the Caspian Sea class; they exhibit a seasonal evolution similar to that of the Polar class stations. However, they show increased amplitudes, with minimum and maximum values of 2‰ and 10‰ in June and November (Fig. 4). Integrated samples (unweighted monthly means here) from the Western Pamir Mountains display d-values between 9‰ and 19‰ (Fig. 4). Note that September values are missing because of a lack of precipitation. A first maximum in March of ∼19‰ is followed by a steady decrease to 9‰ in July. Another peak of around 17‰ is observed in August. December holds another minimum of around 9‰, after which values increase until March.

Possible moisture uptake along air mass trajectories

The calculated air mass trajectories leading up to each sampled rain event in the Western Pamir Mountains are shown in Fig. 2. Note that none of the calculated trajectories for the sampling sites arrived from an eastern direction. Some of the trajectories had the majority of their moisture uptake, and their origin, beyond the reasonable boundaries of the radial classes introduced in the methods section. These six long-distance trajectories (Fig. 2, small box; Fig. 5a) include pathways with main moisture uptake over the northeastern Atlantic Ocean in January, from the western Atlantic Ocean in March, and from northern Africa in January and December. Statistics of events or trajectories per class and month are shown in Fig. 5a.
January holds the annual maximum of 25 sampled precipitation events, while June to October was dry and no precipitation was collected. Events assigned to the N Indian Ocean class are most frequent in January, with a decrease in spring and an increase from November to January. If the Persian Gulf and Mediterranean classes are combined, they show a similar distribution, but with a maximum of precipitation events in March. Polar and Caspian Sea class precipitation events are rare and only apparent in late spring. Fig. 5b was generated using the specific humidity information along the trajectories. As stated in the methods section, an increase in specific humidity below the PBL is assumed to correspond to additional moisture uptake from the ground surface along the pathway of the transported water vapour. Consequently, each point along a trajectory where an increase in specific humidity is detected is considered an additional source for the precipitation of the sampled rain event. For the month of January, the increases in specific humidity at all points of one class were added up. This sum of specific humidity increases for one class was then expressed as a percentage relative to the sum of specific humidity increases of all classes during the month of January. This operation was repeated for each month and makes it possible to untangle the moisture contributions of the different classes to precipitation at the sampling site (see the short code sketch further below). The moisture contribution of the N Indian Ocean class varies between 15% and 60%. The sum of Persian Gulf and Mediterranean moisture fluctuates between 20% and 40%. Moisture from the Caspian Sea and Polar classes each contributes at most 15%. Each trajectory eventually leads up to a rain event, for which the d-value was calculated according to Eq. (2). The d-values of the Western Pamir Mountain rain events are summarised as monthly boxplots in Fig. 6a. In order to compare the different classes and to retain a seasonal resolution, an average of d-values for each month and class was calculated. Thus, Fig. 6b shows one point per class for each month, if at least one trajectory was assigned to this class.

Monthly averaged values

The evolution of d-values along the considered transect from the Mediterranean to Central Asia is initiated by Atlantic moisture, which arrives at the western Mediterranean. In winter, the deuterium excess of 10‰–12‰ is near the value of the GMWL and consistent with average moisture produced over the Atlantic Ocean (Dansgaard, 1964). Additional local moisture from evaporated water of the Mediterranean Sea strongly depends on the season. The atmospheric conditions during the cold season, when cold and dry air from the continent prevails, favour the formation of water vapour with higher d-values (Gat and Carmi, 1970; Rindsberger et al., 1983; Liotta et al., 2006). In the warmer season, more humid air prevents high d-values. Additionally, warm-season sub-cloud evaporation of falling water droplets further lowers d-values (Stewart, 1975). This trend manifests itself in positive deviations of precipitation-weighted from non-weighted monthly averages, because lighter rain events, which increasingly suffer from sub-cloud evaporation, contribute less to precipitation-weighted averages. The general increase of d-values in precipitation from the western to the eastern Mediterranean is consistent with the meteorological evolution, where air becomes drier the further eastwards it moves.
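Returning to the class-wise moisture bookkeeping described above (summing specific-humidity increases per source class and month, then expressing each class as a share of the monthly total), a minimal sketch of that calculation follows; the record format and the numbers are invented for illustration and are not the study's data.

```python
# Sketch of the monthly moisture-contribution shares per source class:
# sum the specific-humidity increases attributed to each class, then express
# every class as a percentage of that month's total. Input format is assumed.
from collections import defaultdict

# (month, source_class, specific_humidity_increase_g_per_kg)
uptake_points = [
    (1, "N Indian Ocean", 3.0),
    (1, "Mediterranean", 1.0),
    (1, "Local", 1.0),
    (3, "Mediterranean", 2.0),
    (3, "Persian Gulf", 2.0),
]

totals = defaultdict(float)       # month -> total uptake
by_class = defaultdict(float)     # (month, class) -> uptake

for month, cls, dq in uptake_points:
    totals[month] += dq
    by_class[(month, cls)] += dq

for (month, cls), dq in sorted(by_class.items()):
    share = 100.0 * dq / totals[month]
    print(f"month {month}, {cls}: {share:.0f}% of moisture uptake")
```

With the toy numbers, January would be dominated by N Indian Ocean moisture (60%), mirroring the kind of monthly shares reported for Fig. 5b.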
The warmer season d-values in the eastern Mediterranean are comparable to the western Mediterranean, whereas winter values increase eastward. This winter increase is due to stronger evaporation in the eastern, more enclosed part of the Mediterranean basin. There, continental influences of cold, dry air are increased (Gat and Carmi, 1970). This can also be observed in the residual seawater that undergoes a d-value shift from ∼–3‰ in the western Mediterranean to ∼–5‰ in the eastern Mediterranean. As local seawater serves as a source for precipitation, this shift is also transposed to the rainout trajectories. Compared to the Mediterranean, the Levant region is subject to increased continental influence with warmer summers and colder winters (Kalnay et al., 1996; their long term monthly mean temperature data), which increases the overall d-value together with its seasonal amplitude. In the Middle East this trend does not continue, which can result from various influences. First, mixing of Mediterranean moisture with other less-enriched moisture sources (e.g. Black Sea; IAEA, 2005) can account for a decrease in average d-values. Second, another important factor is the lower relative humidity compared with the Eastern Mediterranean that can induce a more pronounced sub-cloud evaporation. Notably smaller rain amounts in the Middle East also increase this observable effect of the amount driven sub-cloud evaporation. In the Persian Gulf, the GNIP station in Bahrain is dominated by dry air, which is also manifested by missing summer precipitation and large differences between amount-weighted and averaged d-values. These d-values are similar to those of the Eastern Mediterranean, however, the seasonal evolution does not show the smooth sinusoidal evolution of the Mediterranean stations and is noisier. One potential interpretation is that nearby local evaporation of Persian Gulf seawater plays a major part in the precipitation of this station. Further possible influences of moisture from the Indian Ocean were proposed by Rizk and Alsharhan (1999). In Northern India, the seasonal distribution pattern of d-values differs clearly from the simple sinusoidal trough-peak distribution of the Mediterranean. Since India is strongly influenced by air from the Indian Ocean, the seasons in this region differ from European ones and are comprised by winter, summer, monsoon, and post-monsoon season. The generally increased d-values in winter have been interpreted as of Mediterranean or generally western origin from where moisture is transported eastward by so-called Western Disturbances. These wind patterns are extra-tropical cyclones (Dimri et al., 2015; Dimri and Chevuturi, 2016) in high-altitude air masses (Jeelani et al., 2017). The warm season from March to June is hot and dry, a fact that causes decreasing d-values and a more pronounced sub-cloud evaporation effect. During the monsoon season from July to September d-values are stable around 8.5‰ and no difference between amount-weighted and non-weighted could be observed. This is due to heavy rainfall events that saturate the air quickly with moisture and prevent pronounced sub-cloud evaporation (Peng et al., 2005). The GNIP station in Kabul and a site in the Kashmir valley (Jeelani et al., 2017), both located south of the Pamir Mountains, are located at the crossway of proposed Mediterranean and Monsoon air mass influences. 
Cold season values can be ascribed with high probability to moisture of Mediterranean origin, due to the extremely high d-values of around 20‰. Around July, monsoon and Mediterranean precipitation both show similar values and cannot be differentiated on the basis of monthly d-values. As a result, the overall annual d-value pattern is more similar to Mediterranean stations than to Indian ones (Fig. 4). GNIP stations to the north and northwest of the sampling site (Caspian Sea and Polar classes in Fig. 4) are located in cooler and more continental settings. Stations in both regions exhibit a similar seasonal d-value distribution. Their seasonal amplitude is smaller compared to more maritime stations and remains below 10‰ year-round. The seasonal evolution in the Western Pamir Mountains is not so obvious with respect to the possible influence of the discussed sources or transit paths. The shape of the seasonal distribution in the Western Pamir is different from the simple sinusoidal trend to the west and in Kabul and Kashmir (Fig. 4). The annual weighted average d of 13‰ is higher than the 10‰ of average global precipitation, which suggests an influence of enriched moisture. From October to February, during the cold season, values range between 10‰ and 12‰, with the exception of 15‰ in November. This is lower than the cold season values for the Eastern Mediterranean and regions eastward, and suggests a diverse, not exclusively Mediterranean, moisture origin. The d-values of March and August are positive outliers, with 19‰ and 17‰. Similar positive excursions could be observed in Bahrain in March, albeit only reaching 13‰, and in Kabul in August with 20‰. However, a causal connection between these month-long irregularities is highly speculative. During the summer months, d-values decrease to 8‰–9‰ in July, comparable to the Kashmir Valley and Northern India during the monsoon season. Northern moisture, with year-round d-values below 10‰, does not seem to exert a significant influence on the Western Pamir Mountains, where d-values are above 10‰ during most of the year. Air mass trajectories give a good indication of where the moisture for a specific precipitation event originated. During the sampling period, a dry spell from June to October made it difficult to analyse general year-round source variabilities. For the rest of the year, however, moisture sources were variable (Fig. 5). Most of the time, moisture from each sector contributed to precipitation in the Pamir Mountains. Long-distance trajectories from Africa or the W Atlantic occurred with a higher frequency in December and January (Fig. 5a). This can be due to the prevailing west wind zone in temperate latitudes during winter, in which air can be transported over long distances from the west. The accompanying d-values are diverse and range between 0‰ and 18‰. They also do not show a seasonal trend (Fig. 6b). Because of the low number of occurrences, the d-values of the long-distance trajectories are not discussed further here. Moisture from northern and northwestern areas arriving in the Western Pamir Mountains could mainly be detected from March to May (Fig. 5). In March, moisture from the Caspian Sea class has slightly negative d-values (Fig. 6b). The evolution to May, with notably increasing d-values, is consistent with average air temperature and relative humidity. Temperatures near 0 °C and high relative humidity keep d-values near 0‰ in winter. During spring, temperatures increase and the air becomes drier.
This in turn increases the d-value of evaporated Caspian Sea water as well as the possible contribution of recycled water from soils and other surface water bodies. The contribution of N Indian Ocean moisture decreases from January to May from around 60% to 20%, which can result from the hot and dry Indian summer season climate around March, April, and May (Fig. 5b). Another hint to this Indian summer dryness are the notably lower d-values of ∼5‰, compared to 9‰–13‰ in the cold season (Fig. 6b). After the monsoon season, air from the N Indian Ocean adds increasingly more moisture and dominates the winter months in the Western Pamir Mountains. Additionally, the d-averages of monthly integrated samples (dashed lines, Fig. 4 Western Pamir plot, and Fig. 6b) compare relatively well with the averages of event samples from the N Indian Ocean (green diamonds, Fig. 6b) during November, December, and January, which also points to a cold season dominance of N Indian Ocean moisture. Mediterranean and Persian Gulf moisture do not show a simple distribution concerning the contribution to the total amount of specific humidity in the Western Pamir Mountains. If both are summed up and subsequently interpreted as ‘western’ moisture, an increase from January to April and a subsequent decrease to May can be found (Fig. 5b). In December, January, and February, moisture contributions from ‘western’ classes were as low as 20%–30%, while a maximum of around 40% occurred in March and April. This observation is in contrast to several other studies in the wider region (Aizen et al., 1995; Kreutz et al., 2003; Tian et al., 2007), where Mediterranean moisture is often proposed to be a major source of precipitation and influences d-values during winter months. Especially in the notch formed by Himalaya and Hindu Kush Mountains, Western Disturbances contribute to the formation of a low pressure area in winter, resulting in precipitation events enriched in the heavy isotopes 18O and 2H (Lang and Barros, 2004). Regions on the southern flank of the Hindu Kush and Karakoram Mountains receive more Mediterranean moisture, which is indicated by high d-values of around 20‰ in the cold season (Fig. 4, Kabul and Kashmir region). The Western Pamir Mountains seem to be less influenced by these high d-values of Mediterranean origin. This tendency is even clearer when the type of precipitation in the Western Pamir Mountains is considered. In contrast to rain, snow formation in clouds happens under non-equilibrium conditions (Lamb et al., 2017) and tends to elevate the d-values of the resulting snow above the water vapour it was formed from (Jouzel and Merlivat, 1984; Uemura et al., 2005). Thus, snow samples derived from Mediterranean moisture should show even higher d-values. Since at least part of the precipitation during the winter months falls in the form of snow, a connection to high d-values of Mediterranean origin should be expected to be expressed more clearly. Precipitation events that were assigned to the ‘local’ moisture source class represent moisture from within 3° radial distance. The d-values of these events are in the same range as N Indian Ocean and western moisture (Fig. 6b). Moisture from the proximity of the measurement stations ultimately also originates from evaporated vapour of different seas. Therefore, a comparison between its d-value and those of other source area classes can help to understand the moisture source of a wider region. In November, local moisture is a major contributor (Fig. 
5b) with low d-values (Fig. 6b). From December to February, 'local' d-values are in the range of N Indian Ocean values. This changes from February to April, when they more closely resemble western d-values. This behaviour correlates well with the seasonal specific humidity contribution, where most moisture stems from the N Indian Ocean in December and January, and western moisture takes the lead in March. Over the course of the year, d-values of the 'local' class are closely related to the overall average of the event samples (dotted line in Fig. 6b). This indicates that moisture in the 3° proximity of the sampling stations is well represented by the average of all discussed moisture sources at the sampling site. This shows that the sampled precipitation events are representative of a wider region. The strong influence of Mediterranean or western air masses and moisture that is apparent at several sites on the southern slopes of the Himalaya and Hindu Kush Mountains could not be found as distinctly in our event-based and monthly integrated data. Specifically, deuterium excess values above 15‰, which are typical for cold season precipitation in the eastern Mediterranean and further east, as well as in the Persian Gulf and Kabul, were not as abundant in the Western Pamir Mountains during this time of the year. With respect to our hypothesis regarding the influence of Mediterranean moisture, our evaluation of trajectories from the Western Mediterranean to the Western Pamir shows a maximum moisture contribution during March and April, which was not initially expected. Stations with monthly integrated precipitation collection along this transect exhibit a seasonal d-excess trend of a clear sinusoidal shape. This characteristic was not observed in the Western Pamir Mountains, where month-long spikes are superimposed on a weaker seasonal trend. Our findings confirm the complexity of precipitation moisture origin in Central Asia and suggest refraining from simple conclusions, such as linking high d-values to Mediterranean moisture without thorough discussion. The observation of a maximum 'western' moisture contribution in March and April, with relatively low d-values of below 15‰, contradicts the simplified assumptions of other studies concerning moisture sources in Central Asia. The seasonal distribution of d-values in Kabul and the Kashmir Valley resembles what can be expected from a continuation of the eastward trend from the Mediterranean to the Middle East and the transport of Mediterranean moisture. The sparse coverage of precipitation isotope data in the Middle East and Central Asia needs to be improved for a more thorough assessment of water availability and sources in these regions. Future studies would benefit from an increased number of stations that collect precipitation for stable isotope analyses along the transect from the Mediterranean to Central Asia, to address further important research questions such as the calculation of the contribution of recycled water to precipitation. The authors therefore would like to encourage the submission of isotope data to freely accessible online databases, such as the GNIP/GNIR database (iaea.org/water), the Waterisotopes Database of the University of Utah (waterisotopes.org) or the World Data Center PANGAEA (pangaea.de).
<urn:uuid:e0cc3bac-68de-4002-983e-92af95154933>
CC-MAIN-2022-33
https://b.tellusjournals.se/articles/10.1080/16000889.2019.1601987/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571719.48/warc/CC-MAIN-20220812140019-20220812170019-00298.warc.gz
en
0.931627
7,990
2.9375
3
On a beautiful early September day, Ken Gierhart hiked a trail familiar since boyhood to Music Pass in the Sangre de Cristo mountains above Westcliffe. As he dropped off the saddle toward the Sand Creek lakes, he noticed people heading the opposite direction with fishing poles. “How’s the fishing?” he asked one woman. “They’re all dead,” she replied, saying nothing more as she passed. Puzzled, Gierhart came upon another woman heading away from the lakes and tried his question again. “There is no fishing,” she said. “They’re all dead.” This time, the angler paused to explain that Colorado Parks and Wildlife, according to signs posted in the area, had used a chemical called rotenone to kill all the fish in the lakes and Sand Creek, which meanders south down the mountain before veering west to eventually disappear, after 13 miles, into the depths of the Great Sand Dunes. The project is part of a long-planned strategy to restore the native Rio Grande cutthroat trout to waters where its numbers have dwindled toward the edge of extinction. Increasingly scarce in a dwindling native range and hybridized with other species like non-native cutthroats, which had been stocked alongside it many years ago, the Rio Grande cutthroat eventually will be reintroduced to the mountain lakes and streams where it once thrived. But the process can be disconcerting — especially to an unsuspecting hiker like Gierhart. It wasn’t until he got closer to Lower Sand Creek Lake that he saw the informational signs for himself. Then he headed toward the upper lake, following the trail that crosses the creek several times along the way. He said piles of dead earthworms filled seemingly every crevice in the rocks. And then it got worse. “It was horrifying at that level to see what had been done to the lake,” Gierhart said. “When I got to the lake I saw fish belly-up, carcasses on the bank where animals or birds had pulled them out.” Gierhart, a 54-year-old wholesale tree grower, hadn’t heard anything about the fish management plan, and he stewed all the way back to his home in Westcliffe. There, he fired up his Facebook account and vented in a post that estimated “thousands” of dead fish and that attracted nearly 100 comments, most expressing concern over an undertaking they, too, seemed unaware of. “I”ve always been preservation conscious,” Gierhart said, still steamed a couple weeks later, “but to see aquatic life dead like that, I started thinking about the watershed, the lasting effects, side effects.” The rant and its response caught the attention of CPW officials, who expressed frustration over response to a broad regional project that has been years in the making and which framed its intent in a compact signed in 2003 — and renewed in 2013 by six federal entities, state agencies in Colorado and New Mexico and three American Indian tribal agencies. The agreement also received non-signatory support from two Trout Unlimited groups. The Sand Creek drainage was officially listed in a 2013 strategy document. In 2019, meetings on both the Westcliffe and Alamosa sides of the mountain yielded no opposition — other than concern over the temporary loss of fishing — and little public comment. The project moved ahead, though a year later than originally scheduled due to a late fish spawn. “It’s something we need to do,” said John Alves, the Durango-based senior aquatic biologist for CPW’s Southwest Region. 
“With only 11% of its historic range left, the Rio Grande cutthroat trout is always susceptible to petitions to list it as endangered, and also to extrication if there are events like fire. It’s a constant process for us.” Joe Lewandowski, spokesman for CPW’s Southwest Region, which includes the Sand Creek drainage, notes that the state agency has done similar projects before and will do more of them throughout Colorado. “We don’t get a great deal of pleasure having to poison a stream, but it is necessary to restore native species,” he said in an email to The Colorado Sun. “This has been done in waters to restore the Rio Grande, greenback and the Colorado River cutthroat; and these projects will continue. “We know people are not happy to see dead fish, and it is confusing. It’s very difficult — often impossible — to explain to the general public why we have to do these projects.” The intersection of history, science and politics of wildlife management can be complex. And while in this case the ultimate goal — to restore the Rio Grande cutthroat to its native range — is mostly a shared interest, the path to achieving it can be challenging. After the 2003 conservation agreement, federal and state authorities started doing reconnaissance in 2004 to determine if the drainage could be restored. Geography that essentially isolated water flow, and therefore fish migration, proved fortuitous. “It’s an ideal situation in a lot of regards, because it’s a closed system,” said Fred Bunch, chief of resources management for the Great Sand Dunes National Park and Preserve, which takes in the Sand Creek drainage. “The creeks have their headwaters high in the Sangres, they flow into those lakes and the lakes flow out to the dunes. Thirty-four square miles of sand is a pretty substantial fish barrier.” Bunch points to several reasons why reintroduction of the Rio Grande cutthroat looms important. First, there’s federal policy that favors native species in national parks and preserves. Another has to do with the essential characteristics of a wilderness area. A third is for preservation of the species. “This is an ideal opportunity to restore 13 miles of habitat for the Rio Grande cutthroat trout,” he said. The stakeholders who signed the conservation agreement meet annually to discuss the status of its efforts. The key thing, Bunch said, is to prevent the listing of the Rio Grande cutthroat as an endangered species and ensure it has robust habitat. And that’s where politics can come in. The Center for Biological Diversity, a nonprofit organization, claims roots in the fear that government authority alone will not always protect flora and fauna when powerful business interests can exert political leverage. (The timber industry was its founders’ nemesis.) Now, it contests threats to biodiversity on a range of levels, from climate change to encroachment of off-road vehicles. The group has petitioned multiple times to place the Rio Grande cutthroat trout on the endangered species list, including one case that’s still pending an appeal. From the standpoint of state wildlife managers like Alves, who shares the group’s desire to see the native species rebound, the fish’s presence on the list represents another potential layer of bureaucracy that state workers on the ground would have to contend with. “Once a species is listed by the U.S. Fish and Wildlife Service, local agencies don’t make decisions,” he said. “They’re made by the federal government. 
For years, since the late '90s, there have been petitions to list the Rio Grande cutthroat trout. (The Center for Biological Diversity) sees listing as a way to get timber and mining off public lands.” That is true, said Noah Greenwald, the center's Portland, Oregon-based endangered species director. “Those things present a real threat to their habitat and to the species, so we want to make sure those things are done in a careful way or there's avoidance of trout habitat or, to the extent that there's damage, there's mitigation, which is what the Endangered Species Act requires.” Aside from territorial concerns stemming from listing the species, both the center and the state agree on some key issues. The center supports and applauds the effort to repopulate the Sand Creek drainage with the native fish. But Greenwald also claims that his organization's petitions to list the Rio Grande trout "spurred the state to take action to conserve them more than they were before." "We haven't succeeded in putting them on the protected list, but we've pressured the state to do more for them, which is a benefit for the species," he said. "They've done a tremendous amount of surveys and used staffing resources in an effort to avoid listing them. We don't think that's the right tradeoff. It makes more sense to list them and work for recovery." The center doesn't even have problems with the kill-to-restock method, or the rotenone compound used to achieve it. Greenwald calls the CPW a "credible messenger" with regard to the safety of rotenone. "We don't love having to use poisons," he said. "But there's been a lot of work done on this issue, and there are not other effective means. As chemicals go, rotenone is pretty specific to fish. … We definitely think it needs to be done carefully and we don't relish the thought of poison being used. But it's the only way." Although the battle over listing the fish persists, all sides celebrate the idea that the Sand Creek drainage could become a refugium for the species, where the fish could naturally multiply and be used as a source for future stocking or restoration if some other habitat experiences problems, say, from wildfire. "So we're doing it for many things," Bunch said. "One is the philosophy of land managers, but there's also the species itself. Also there's a recreational piece. It's a great situation where a hiker can backpack in and catch native fish. That's a pretty great situation to have, and that's what we're shooting for." From the start, the effort to restore the species has been a multipartner project, including federal, state and county agencies and even private groups like Trout Unlimited. Some of the early upfront money came from the National Park Service, but functionally the reintroduction process is a CPW project done in a national preserve. The cost of a helicopter, boats and other equipment is covered mostly by the state. This year, phase one of the process got underway. But before fish and wildlife authorities took any action, they needed to know exactly how many different species they were dealing with in the waters that stretch from the Sand Creek lakes to the sand dunes. And that's where science played a big role. John Wood founded Pisces Molecular more than 20 years ago, just a few years before efforts began in earnest to restore the Rio Grande cutthroat trout.
Though it has just four people on staff, the Boulder-based biotech lab has clients all over the world, not to mention right in its backyard. Wood’s lab has worked with Colorado Parks and Wildlife on multiple different fish projects, including when in 2007, in conjunction with University of Colorado post-doc Jessica Metcalf, it discovered that CPW’s stock of supposed greenback cutthroat trout — which happens to be Colorado’s state fish — were actually Colorado River cutthroat trout. How do scientists figure out what species are in a waterway? One method is simply catching a sample of fish, clipping off the tiniest bits of their fins, and sending the material to a lab for DNA sequencing. Wood notes that can be laborious and difficult to get an accurate representation of the species makeup of a waterway. The other option is environmental DNA testing. Just as humans regularly shed skin, hair, saliva and other sources of DNA, so do fish. Field researchers can collect a sample of water, filter out all the bits from the water, and send the gunked-up, DNA-laden filter to the lab for testing. These results will indicate the presence of species upstream of where the sample was taken. Regardless of the type of sample, once it gets to the lab, Wood’s team uses polymerase chain reactions, also known as PCR, to check for species-specific genetic markers. For reference, this is the same kind of procedure used in the SARS-CoV-2 coronavirus test; Wood says it’s become “a very sexy technique” since the patent on PCR expired in the 2000s. And it’s remarkably precise; if one-tenth of a drop of a fish’s DNA solution were mixed into an Olympic-sized swimming pool, Wood said, “we would pick it up.” “The only technical field that is changing as fast as computers is molecular genetics, so the sort of techniques that we use now are incredibly more sophisticated than when I was in graduate school, and I find that really fun,” Wood said. Theoretically, this could happen all in the field, but Wood says that it requires “a lot of coordination, because you don’t keep wild fish outside of their water body for very long.” More often, it’s an iterative process between the lab and the wildlife managers — testing the waterway, analyzing the results for the percent purity for individual fish or the population at large, then removing or restocking fish as needed, and doing it all over again. Though Pisces was not directly involved with last month’s rotenone treatment, it has generally worked on identifying species in the Upper Sand Creek Lake drainage. In 2015, Wood’s team found evidence that the drainage had native Rio Grande cutthroat trout that were hybridizing with other subspecies, including Yellowstone cutthroat, greenback cutthroat and Colorado River cutthroat. Wood called CPW’s attitude on restoring native species “enlightened,” especially when compared to previous practices. Much of the 20th century was spent stocking the state’s waterways with outside fish such as rainbow trout, which are especially susceptible to whirling disease; when that struck the state in the 1990s, the rainbow trout population quickly spread it to other fish species, including native cutthroats. And this wasn’t just in one or two rivers; in the process of moving fish around from waterway to waterway, stocking and other efforts inadvertently introduced the disease to 15 of the 17 hydrographic drainages in the state. Along Sand Creek, the CPW found ponds on private property that harbored the parasites that transmit whirling disease. 
But the ponds were removed with stimulus funds during the Great Recession. Since they qualified as gravel pits, they could be remediated as abandoned mines. The whirling disease went away and the reintroduction plans moved forward. “Humans, when we mess with ecology, we generally make a mess,” Wood said. “So it’s probably philosophically better to do less interventions and strive to maintain what’s there than presume that we’re smarter than Mother Nature.” That said, it’s not like leaving the river to rebound on its own would work. Part of it has to do with the different life cycles of fish species: brook trout, for example, spawn in the fall, giving them a full six months’ head start to grow before cutthroats spawn in the spring. And while rainbow trout spawn in the spring, like cutthroats, Wood notes that the jury’s still out as to the impact of the two species interbreeding freely. In other words: humans made this mess, and only humans can clean it up. “We now know more about genetics, we can discern finer level details, we have a longer history of how our attempts to alter ecologies tend not to work very well, so let’s see if we can remediate some of the damage that we’ve done,” Wood said. In June, weeks before implementation of the first phase of the Rio Grande cutthroat project began, the CPW declared a fish “emergency public salvage” in the Sand Creek drainage. That tactic, which allows anglers to catch an unlimited number of fish from the waterways, has been used more times this summer, for a variety of reasons, than in the past 10 years. On this occasion, the CPW wanted to let anglers help make best use of the fish before the chemical rotenone was administered to kill any that remained. Alves, of the CPW’s Southwest Region, noted that removing the bag limit seemed to be a particularly effective strategy in the lakes. “Get enough anglers out there,” he said, “they do a pretty good job.” The rest is left to rotenone, a plant-based compound effective only on gill-breathing organisms — primarily fish and insects. The CPW workers secured the necessary permissions and trained to use it. During the first week of September, they began the process in the two high mountain lakes and the creeks below — up to a point where waterfalls along Sand Creek provide a natural barrier to fish migration. Phase 2 of the operation will involve clearing Sand Creek from below the waterfalls to the Great Sand Dunes. A helicopter from the Colorado Division of Fire Protection and Control had been busy fighting wildfires, but eventually was freed up to transport boats, motors, pumps and the 5-gallon buckets of rotenone itself to the lakes. Workers mixed the chemical with water. It was administered from a boat throughout the lake, in volumes dictated by the water’s depth. The mixture shaded the water slightly white, a change that diminishes within several hours. To apply rotenone to streams, workers spread out drips that added the rotenone/water solution to the flow every 15 seconds. A four-hour drip produced the desired solution throughout the streams. The chemical works quickly. At one of the drip stations, Alves noticed that as soon as the organic green dye marker reached him from a drip point upstream, fish started dying. Workers also sprayed backwaters where fish might be lurking. Rotenone, though extremely toxic to fish and some insects, is “harmless to man and all warmblooded vertebrates,” according to the journal Nature. 
Alves notes that it breaks down quickly in streams, but in lakes, at a water temperature of 50 degrees, it takes about 28 days to decompose. Then, the waiting begins. The dead fish decompose and, if all goes perfectly, the waterways will be clear for stocking. First, the water is tested — environmental DNA sampling comes in handy here — to make sure no fish survived that could taint the reintroduction of the native Rio Grande. “You’ve got to wait and see,” Alves said. “We’ll do a lot of sampling, electrofishing in the streams, gill netting in the lakes, probably use environmental DNA, and test to see if there are genetic markers. We use that as a confirming tool.” If any fish remain, CPW will come back and repeat the process. “If there’s zero live fish,” Alves said, “we’ll start to restock in the fall.” Typically, CPW uses airplanes to stock the high mountain lakes. Workers on foot or on horseback, and sometimes by helicopter, stock the streams. By stocking fish in a variety of age groups, managers can hasten the turnaround. The Sand Creek drainage, with its hiking trails and beautiful vistas, is a highly used area, Alves said. Though remote, it’s only about an hour-and-a-half hike in from the trailhead. What he wants people to understand is that what the CPW is doing “is definitely the right thing.” Yes, the fish are all gone — but that’s only temporary. “As soon as we can, we’ll put in Rio Grande cutthroat trout, really a pretty fish, growing to the same size they’re used to catching, a 15- or 16-inch fish,” he said. “So they’re temporarily losing their opportunity, but it’ll come back. I predict within five years, they’ll see really good cutthroat trout in the lakes.”
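For readers who want to put rough numbers to two of the claims above, here is a small back-of-the-envelope sketch in Python. The drop volume, pool volume, stream flow and target concentration are illustrative assumptions rather than figures from Pisces Molecular or CPW; only the 15-second drip interval and the four-hour duration come from the account above.

# Back-of-the-envelope checks; every constant below is an assumption for illustration.
DROP_ML = 0.05            # assumed volume of a single drop, in milliliters
POOL_LITERS = 2_500_000   # assumed volume of an Olympic-sized pool

sample_ml = DROP_ML / 10                      # "one-tenth of a drop"
dilution = (POOL_LITERS * 1_000) / sample_ml  # pool volume in mL divided by sample volume
print(f"Dilution factor: roughly 1 part in {dilution:.0e}")

TARGET_MG_PER_L = 1.0     # assumed rotenone concentration target, milligrams per liter
STREAM_FLOW_LPS = 30.0    # assumed stream flow, liters per second
INTERVAL_S = 15           # drip interval described above
DURATION_H = 4            # drip duration described above

doses = DURATION_H * 3600 // INTERVAL_S
grams_per_dose = STREAM_FLOW_LPS * INTERVAL_S * TARGET_MG_PER_L / 1000
print(f"{doses} doses over {DURATION_H} hours, about {grams_per_dose:.2f} g of active ingredient each")

The point of the sketch is only scale: the eDNA testing Wood describes is sensitive to dilutions on the order of one part in hundreds of billions, and a metered drip lets crews hold a steady concentration in moving water rather than dumping the chemical in all at once.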
<urn:uuid:a6316222-86cb-4258-8f5f-1bfff168c4cd>
CC-MAIN-2022-33
https://coloradosun.com/2020/09/24/kill-fish-to-save-fish-behind-colorados-effort-to-save-the-rio-grande-cutthroat-trout/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571190.0/warc/CC-MAIN-20220810131127-20220810161127-00096.warc.gz
en
0.952111
4,426
2.5625
3
"Les poches des chemises" Translation:The shirts' pockets 96 CommentsThis discussion is locked. Bottom line: No one says "the shirts' pockets". In 51 years as a native English speaker, I have never heard that. Not once. The correct term is "the shirt pockets". If I'm talking to a group of men, I would say, "Look in your shirt pockets and find the transmitter I secretly put there." That would mean all the shirts, and there would be no misunderstanding that, I don't know, maybe all or several of the men were wearing the same shirt with individual pockets for each man. Duolingo folks, you've done a great job here. I'm impressed at how well you have put together this site. But you blew it in this particular case. Please get this corrected. "The shirts' pockets" is just plain silly, even if technically correct. "Shirt pockets" really must be accepted as a correct answer. Look at it this way. If I wanted to translate "the shirt pockets" (meaning the pockets of many shirts) into French, what would I say? Clearly, "les poches des chemises". Therefore, if I wanted to translate that identical French expression back into English, how would I do it? "The shirts' pockets"? Of course not. No one says that. I would translate it very naturally into "the shirt pockets". The fact that it has a slightly different grammatical construction than the French is utterly irrelevant. That's simply how you say it in English. The shirt pockets (as in a kind of thing) = les poches de chemise The shirt's pockets (as in this shirt right here and as opposed to the shirt's sleeves or the shirt's collar) = les poches de la chemise The shirts' pockets (as in these shirts right here etc.) = les poches des chemises Your example "Look in your shirt pockets and find...." is completely wrong. In that example your asking each man to look into his own shirt, therefore shirt, singular, makes sense, each man is looking through one shirt. Now if for example you asked someone to look through a bunch of shirts that are laying on a bed, each one with various pockets you would say look through the shirts' pockets. shirt pockets is a loose compound - which is typical of non-animate part whole constructions in English. In such compounds it is impossible to pluralise the first element; think teethbrushes . It is therefore ambiguous as to how many shirts we are talking about and therefore should be acceptable as a translation of the French expression with a plural version of shirt. @PaddyingoMartinRDC. You're thinking in English. French is far more specific.Especially with articles. Here it is "DES=De+Les" which specifically is more than one shirt, ie shirtS. Yes, the pockets are also plural and so in this sentence there are undeniably many pockets on many shirts. The apostrophe for a singular possessive would be "The Shirt's pockets"= many pockets of the one shirt.(Thinking in French.) I'd appreciate that in a factory "run" of many shirts for which many pockets must be made, then a manager may speak of "The shirt pockets" but this takes the subject in much the same way as "Sheep=singular+plural", "Rice=singular+plural","Human=singular+plural".and just won't work in French.if there is more than one shirt. In English, when the subject noun is plural the apostrophe follows the "S", as in "The Shirts' pockets" indicating many shirts with either a single pocket each or multiple pockets per shirt, we won't know without more context. What we do deduct is that there are many shirts. 
Otherwise the sentence would read: "Les poches DE LA chemise"., not "Les poches DES chemiseS" See? It's French done the French way. Jackjon, you appear to be missing the point. The English translation ISN'T FRENCH. It's ENGLISH. Applying normative French grammar to English sentences DOESN'T MAKE SENSE. Testing understanding of French grammar, even relatively picky elements, is just fine. Expecting people to write weird or nonsensical English sentences to demonstrate French mastery is NOT just fine. "The shirt pockets" is perfect English and means pretty much exactly what the French means. "The shirts' pockets" is dicey English -- perhaps grammatically allowable, but absolutely not something the vast majority of educated English speakers would ever say. To penalize a native English speaker for translating a French sentence into a perfectly formed and correct English sentence that carries precisely the same meaning as the French is well beyond picky. It's incorrect. Can you imagine an English teacher demanding her French students to render an English sentence into marginal, weird French just to prove they understood some obscure point of English grammar? That would be nonsense, just as it is in this case. I am probably coming across more strongly than I actually feel. I think Duolingo is an awesome resource and of immense value, even with this obvious (but rather small) flaw. I have nothing but gratitude toward those who have worked so hard to create such a valuable web site. I'm not angry, or peeved, or annoyed at people. But having said all that, I maintain that what's wrong is wrong, and saying that French Grammar Demands Thus-And-Such doesn't magically make incorrect (or artificial and stilted) English right. TLDR: Bad English is bad English, regardless of what the original French might have said. @Sbeecroft by all means push your point. Duo will continue to mark it wrong and that is my point. Please don't shoot the messenger. Take it up with Duo. I do not make these ambiguous tricky programmes. I only wish to make the differences between French and English somewhat more clear if at all possible. Apparently not so here. In this sentence "Des chemises" is plural Shirts. End Of. I suggest (and await to be corrected, Stiesurf, Remy Northernguy?) that it is you, not I who is missing the point.With respect. My bad, Jackjon. I thought (or assumed) you were speaking for Duolingo, not merely explaining why the site might be acting as it does. I understand the point you are making. Insofar as the web site goes, I expect you are correct. My only point is: Whether or not you are right about why it is scored as it is, the thinking is fundamentally flawed. The answer SHOULD be marked correct. You may be (and probably are) right about why it's marked wrong, but in any case it should not be marked wrong. That's my only point. My apologies for making the incorrect assumption that you were speaking for or defending the Duolingo site logic rather than merely explaining it. Hi Misho. Presumably you were using the Audio Only app. You should hear the difference between the articles "LA poche (s/l Lah), singular, and LES poches (s/l Lay); plural. DE chemise (s/l Duh),singular, and DES chemises (s/l Day), plural. NB, Poche is feminine, La Poche not Le Poche. Actually, it's a bit more involved than that. Sometimes nouns can be used LIKE adjectives, although they are still nouns. 
They are referred to as premodifiers, also called an attributive noun or noun adjunct, e.g., time management, college education, kitchen table, etc. It is a noun that acts like an adjective, e.g. Tutankamen, the boy king; a government official, etc. http://grammar.about.com/od/ab/g/Attributive-Noun.htm In this case, however, be aware that "des chemises" is plural, so it is definitely "shirts' pockets" and not "shirt pockets" and not "shirt's pockets". Hi, Not wanting to beat this into the ground, but I am curious to explore the finer points. From my limited knowledge of French, I know adjectives are pluralized, whereas in English there is no such thing as plural adjective. You linked to a nice article defining ‘attributive noun’ in English (which seems to apply here) which is a construction where a noun is being used like an adjective. Thus, it is not clear to me that because I see a plural “s” in the French version that it necessarily follows that the best English version would likewise be pluralized since the word in question is being used like an adjective. Among the examples of attributive nouns cited in the article are “bus stop” and “ marriage certificate”. I don’t think you’d ever see in English “buses’ stops” or “marriages’ certificates” even in cases where you were explicitly referring to multiple buses or marriages. I tried to resolve the question via Google and could find some examples from literature going either way: [examples of ‘shirt pockets’ clearly referring to multiple shirts] “…alike, and were all dressed like Bill Bell — black Stetson hats, blue shirts, and yellow strings from sacks of Bull Durham hanging out of their shirt pockets” from A River Runs Through It (books.google.com/books?isbn=0226500772) “…restaurant which their host had taken them, he then reached over the table and non-chalantly put in each of their shirt pockets a packet of ten crisp $100 bills.” Behind the Eight Ball by Roy Bell (books.google.com/books?id=Bx5p_5yDIMsC) [examples of ‘shirts’ pockets’] “Both Outsiders pulled from their shirts' pockets two little silver metal symbols of the Outsider Religion. These they proceeded to wave in front of the helpless, ...” Record of Mutilation: The Novel By Elias Sassoon (books.google.com/books?isbn=0557175682) It seemed to me that there were more relevant examples of “shirt pockets” than “shirts’ pockets” (I can’t just go by the number of Google hits because I’d need to through and discount the cases where only one shirt is involved etc) So, my bottom line: you make some good points and I believe now that “shirts’ pockets” is a perfectly constructed translation of the original sentence. However, I still feel like “shirt pockets” is also a valid translation. Again, thanks to the other posters. As someone else pointed out, one of the great things about studying another language is learning more about your own. Being formally introduced to the ‘attributive noun’ concept was great. Sorry for taking up too much of your time. Good digging there, dflemingfit! I'll be brief. les poches des chemises is not an example of an attributive noun. But your example of saying "shirt pockets" is. And while no one will disagree with your effort to simply state the obvious, Duo simply wants the more specific answer here as we learn about the plural shirts. You will occasionally see attributive nouns used in Duo's French exercises so keep stay sharp! Have a good one! English speakers wouldn't use the phrase, "The shirts' pockets." It's unnecessarily clumsy. 
I wouldn't tell a room full of people, "Raise your left hands," for another example. Saying, "Raise your left hand" is understood that I mean a plurality of hands with a single one belonging to each person. Hi Ze. There does seem to be a bug. "Des chemises" is plural. "Les Poches" is plural, and so the whole sentence, subject and object is plural. As far as my limited grammar goes, (but I don't think that I am too far off pitch) the only correct solution is "The shirts' pockets". as is written at the top of this page. (1) "The shirts pockets" is clearly wrong. (2) "The shirts' pockets" is technically correct and a literal translation, but clunky in that few native English speakers would ever say it. (3) "The shirt pockets" is the normal way of saying the same thing in English. Duolingo requires someone to feed it the "correct" answers, and for some reason it appears that #1 has been given as correct while #3 has not. By the length of this thread, you can tell that this has been a topic of dispute for some time now. The bottom line is that the Duolingo answer is what it is. The best you can do is likely to recognize Duolingo's limitations and not worry too much about it. Fwiw. But, but, Sbeecroft, are you missing that French articles are specific and that Des=De Les=Of The. The sentence is all in plural and therefore the shirts are plural. So we must deduce that the pockets (plural) belong to the shirts (plural). Therefore to learn well we need to understand both French articles, Romanic specificness and correct use of the apostrophe. Don't you think? I suggest that as this is not a complete course leading to fluency it is aimed at the basics and is therefore simplified and delineated.. I looked up "Clunky" in the OED and now wonder what you mean by it. The closest definition with a reference to language is "Old Fashioned" and I'm left wondering what is old fashioned about the correct use of an apostrophe in a basic language learning course? Lastly, an apostrophe is silent so how does one "Say" an apostrophe?. With respect, it is only a written punctuation and needs be learnt. Not quite sure what you're asking, Jackjon. I think we covered this ground a year or so ago. My position hasn't changed; "the shirts' pockets" is a clunky (cumbersome, awkward, non-intuitive) construction that is rarely used by native English speakers. If I'm talking about how I have inkstains in the pockets of all my shirts, I say, "I have inkstains in all my shirt pockets", not "shirts' pockets". The latter is not incorrect, but it's simply not how the vast majority of English speakers would ever say the phrase, except under very specific circumstances. Please note, I grant that "the shirts' pockets" is more correct from a strictly grammatical standpoint. But Duolingo is more than strictly grammar; you're supposed to be getting a feel for how something in one language is rendered in the other. Otherwise, we would have to be saying "Good Birthday!" or "Content Année Nouvelle!", neither of which would sound right. As for the apostrophe, if the word "shirts" is used without an apostrophe, it's wrong. Period. No ifs, ands, or buts. If you're going to use the plural possessive, you have to have an apostrophe. Actually we usually say "shirt pockets," and context tells you how many shirts. It is not a lot different from "bookcase," which is a piece of furniture that can hold one or many books in it. I think either should be counted correct, although with Duo it is safest to take the most literal choice. @TheC4Defuser. 
So why add another then? Firstly, the majority of people on this course have English as a second language and are very brave to engage with it. Secondly, your post is a glaring example of the need for enough posts to clarify the issues raised herein because you've misunderstood the task sentence yourself. The whole sentence is in plural, there are many shirts hence the position of the apostrophe in English. Yours is incorrectly placed as a result of your misunderstanding. The sentence you've translated is "Les poches DE LA CHEMISE" Rest assured that most folk now understand the intricacies of this apparently "easy" task and, now, hopefully so do you. With respect and yes I, too, make mistakes. Cordial. Hi Alexis. This is a very tricky one and quite frankly I think is introduced either too early in the course or maybe best, not in this structure at all. The apostrophe is a whole subject in itself and in my view, used here, is an unnecessary distraction and has generated a lot of clutter. However; if we are in a factory manufacturing shirts which are to have pockets sewn in, the patches of material that will be the pocket(s) of a shirt when sewn in are, indeed The Shirt Pockets. If there is a finished shirt which has more than one pocket then they are The Shirt's Pockets. If there is more than one shirt each with multiple pockets, then they are The Shirts' Pockets. In this task both the pockets and the shirts are plural in French and so it does indeed translate to The Shirts' Pockets. You see Alexis, how tricky the apostrophe is; you've missed it in your "it's". Hi Jofv. Show me someone who is not confused by the apostrophe. It is a very large subject and debate continues about its correct use. Here, in this case the apostrophe is used to show "Possession" in that the pockets belong to the shirts. Because "The Shirts" here is plural, the apostrophe appears after the "S" of "Shirts". (If there was just the one shirt but still more than one pocket, then the apostrophe would appear between the "T" and the "S" of "Shirts".) American and UK English differ. Maybe the Noah Webster followers in America allow both with and without the apostrophe. It is confusing nonetheless here in the UK in that "The dog is wagging it's tail"-apostrophe used, but "The dog is eating its dinner"- No apostrophe. All clear as mud - Mississippi or Thames. For correctly spoken standard English in any locale, the following are true 100% of the time: - The word it's is a contraction meaning "it is". - The word its is a possessive meaning "belonging to it". That is the case in the US, and I would be willing to bet large sums of money it is also the case in the UK. As for shirt's vs. shirts' vs. shirts, I think Jackjon has already addressed this. In brief, shirt/shirts are the singular/plural forms. The possessive of these are shirt's/shirts'. The plural shirts, the singular possessive shirt's, and the plural possessive shirts' are all pronounced exactly the same, but you can see the difference in apostrophe usage when they are written out. Hi Sbeecroft. As a pensioner I do not have money to wager. Here are some thoughts involved in the continuing debate on the apostrophe showing that "100%" really leaves one short of full usage. This is a summary. The apostrophe is used to indicate possession And Other Forms Of Relationship between words. The apostrophe is used to form contractions as in E'en and O'er, O'Clock and 'Cos, Ma'am, Fo'c's'le etc. 
It is usually dropped as in Pub (which never had it in the first place) Phone, Plane, Flu and more often than not even Assn and Cos have the apostrophe dropped. (Note that a contraction of a single word is sometimes called an Elision. The loss of a letter at the end of a word is called an Acopope; if at the beginning an Aphesis and if in the middle, a Syncope.The apostrophe can be used to indicate the omission of a number or some numbers as in The '14-'18 War and it is also considered correct to omit the apostrophe here, too. The apostrophe in a name can mean The Son or Daughter Of, as in O'Leary but is not used to indicate the omission of the "A" in MacDonauld (McDonauld) eg. The apostrophe is used to indicate certain plural forms of a word without regard to its (no apostrophe used here in "its" even though it is a possessive) meaning, as in "There Are Three But's In The Sentence." (If the word "Buts" was in Italics the word would be Roman/Romanic and is sometimes omitted even here. Note also that the regularly formed plural is used when meaning is attached to it, as in "The Ayes Have It" or "There Are Eight Threes In Twenty Four." The apostrophe is usually used to form the plurals of letters,numerals and symbols as in "Mind Your P's And Q's," There Are Three 5's In 555," "He Had £'s Painted On His Face." The apostrophe is often used to avoid confusion between letters of similar appearance as in "A's" and "As". The apostrophe is also used to form plurals of abreviations and of expressions that use numerals: "Three OK's", "Several MP's" "The 1920's" however it is often omitted here too and with good reason. Consider the headline "1980's Hit". Does this refer to an attack in the decade 1980-1990 or to a chart pop record of 1980? Furthermore if an apostrophe is crudely used to form a plural, one will end up with two apostrophes when forming the possessive plural: "The MP's' votes. Best here to disregard the "Possessive Rule" and omit the apostrophe altogether. Often, and again disregarding the "Possessive Rule", the apostrophe is frequently misused in the needless insertion in the possessive adjective or possessive pronoun as in "Her's, Their's Our's AND IT'S". The correct use here is "Hers, Theirs, Ours AND ITS (no apostrophe even though this is a possessive).The apostrophe is used to indicate a possession sometimes, as in "Mary's Book." Finally (for now, while you search for your wallet?) the apostrophe is used in forming other inflections or abbreviations as when an abbreviation serves as a verb: "She OK'd The Proposal, He KO'd His Opponent, They OD'd On Drugs." Nowthen, are you still up for 100%? With respect, JJ. In that case, Melody, you'll know that the pronoun, first person singular, "I" is always written in higher case. The shirts' pockets has been addressed here many times. I accept that there is no context which makes the task tricky, but if you know English you'll know that the application of the apostrophe is tricky and requires thought and study. With respect, JJ.
<urn:uuid:dc7d6a5e-7a5c-4f9d-b3af-316b6ebfc7ec>
CC-MAIN-2022-33
https://forum.duolingo.com/comment/123751/Les-poches-des-chemises
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571911.5/warc/CC-MAIN-20220813081639-20220813111639-00494.warc.gz
en
0.963742
4,854
2.546875
3
Possible poisonous substances include prescription and over-the-counter drugs, illicit drugs, gases, chemicals, vitamins, food, mushrooms, plants, and animal venom. Some poisons cause no damage, whereas others can cause severe damage or death. The diagnosis is based on symptoms, on information gleaned from the poisoned person and bystanders, and sometimes on blood and urine tests. Drugs should always be stored in original child-proof containers and kept out of the reach of children. Treatments include supporting the person's body functions, preventing additional absorption of the poison, increasing elimination of the poison, and sometimes giving a specific antidote. More than 2 million people suffer some type of poisoning each year in the United States. Drugs—prescription, over-the-counter, and illicit—are a common source of serious poisonings and poisoning-related deaths (see Acetaminophen Poisoning and Aspirin Poisoning). Other common poisons include gases (for example, carbon monoxide), household products (see Caustic Substances Poisoning), agricultural products, plants, heavy metals (for example, iron and lead), vitamins, animal venom, and foods (particularly certain species of mushroom and bony fish and shellfish). However, almost any substance ingested in sufficiently large quantities can be toxic (poisonous). Poisoning is the most common cause of nonfatal accidents in the home.
Young children, because of curiosity and a tendency to explore, are particularly vulnerable to accidental poisoning in the home, as are older people, often due to confusion about their drugs. Because children often share found pills and substances, siblings and playmates may also have been poisoned. Also vulnerable to accidental poisoning are hospitalized people (by drug errors) and industrial workers (by exposure to toxic chemicals). Poisoning may also be a deliberate attempt to commit murder or suicide. Most adults who attempt suicide by poisoning take more than one drug and also consume alcohol. Poisoning may be used to disable a person (for example, to rape or rob them). Rarely, parents with a psychiatric disorder poison their children to cause illness and thus gain medical attention (a disorder called factitious disorder imposed on another, previously called Munchausen syndrome by proxy).
Symptoms of Poisoning
The symptoms caused by poisoning depend on the poison, the amount taken, and the age and underlying health of the person who takes it. Some poisons are not very potent and cause problems only with prolonged exposure or repeated ingestion of large amounts. Other poisons are so potent that just a drop on the skin can cause severe symptoms. Some poisons cause symptoms within seconds, whereas others cause symptoms only after hours, days, or even years. Some poisons cause few obvious symptoms until they have damaged vital organs—such as the kidneys or liver—sometimes permanently. Ingested and absorbed toxins generally cause bodywide symptoms, often because they deprive the body's cells of oxygen or activate or block enzymes and receptors. Symptoms may include changes in consciousness, body temperature, heart rate, and breathing and many others, depending on the organs affected. Caustic or irritating substances injure the mucous membranes of the mouth, throat, gastrointestinal tract, and lungs, causing pain, coughing, vomiting, and shortness of breath. Skin contact with toxins can cause various symptoms, for example, rashes, pain, and blistering. Prolonged exposures may cause dermatitis. Eye contact with toxins may injure the eye, causing eye pain, redness, and loss of vision.
First Aid for Poisoning
The first priority in helping a poisoned person is for bystanders not to become poisoned themselves. People exposed to a toxic gas should be removed from the source quickly, preferably out into fresh air, but rescue attempts should be done by professionals. Special training and precautions must be considered to avoid being overcome by the toxic gases or chemicals during rescue attempts. (See also Overview of Incidents Involving Mass-Casualty Weapons.) In chemical spills, all contaminated clothing, including socks and shoes, and jewelry should be removed immediately. The skin should be thoroughly washed with soap and water. If the eyes have been exposed, they should be thoroughly flushed with water or saline. Rescuers must be careful to avoid contaminating themselves.
If the person appears very sick, emergency medical assistance (911 in most areas of the United States) should be called. Bystanders should do cardiopulmonary resuscitation (CPR) if needed. If the person does not appear very sick, bystanders can contact the nearest poison control center for advice. In the United States, the local poison center can be reached at 800-222-1222. More information is available at the American Association of Poison Control Centers web site (www.aapcc.org). If the caller knows or can find out the identity of the poison and the amount ingested, treatment can often be initiated on site if this is recommended by the poison center. Containers of the poisons and all drugs that might have been taken by the poisoned person (including over-the-counter products) should be saved and given to the doctor or rescue personnel. The poison center may recommend giving the poisoned person activated charcoal before arrival at a hospital and, rarely, may recommend giving syrup of ipecac to induce vomiting, particularly if the person must travel far to reach the hospital. However, unless specifically instructed to, charcoal and syrup of ipecac should not be given in the home or by first responders (such as ambulance personnel). Syrup of ipecac has unpredictable effects, often causes prolonged vomiting, and may not remove substantial amounts of poison from the stomach.
Diagnosis of Poisoning
- Identification of the poison
- Sometimes, urine and blood tests
- Rarely, abdominal x-rays
Identifying the poison is helpful to treatment. Labels on bottles and other information from the person, family members, or coworkers best enable the doctor or the poison center to identify poisons. If labels are not available, drugs can often be identified by the markings and colors on the pill or capsule. Laboratory testing is much less likely to identify the poison, and many drugs and poisons cannot be readily identified or measured by the hospital. Sometimes, however, urine and blood tests may help in identification. Blood tests can sometimes reveal the severity of poisoning, but only with a very small number of poisons. Doctors examine people to look for signs that suggest a certain type of substance. For example, doctors look for needle marks or track marks suggesting people have injected drugs (see Injection Drug Use). Doctors also examine people for symptoms characteristic of certain kinds of poisoning. Doctors look to see whether people have traces of a drug or substance on their skin or whether drug patches for drugs absorbed through the skin may be hidden in skin folds, on the roof of the mouth, or under the tongue. For certain poisonings, abdominal x-rays may show the presence and location of the ingested substances.
Poisons that may be visible on x-rays include iron, lead, arsenic, other metals, and large packets of cocaine or other illicit drugs swallowed by so-called body packers or drug mules (see Body Packing and Body Stuffing). Batteries and magnets are also visible on x-rays, as are fangs, teeth, cartilaginous spines and other animal parts that may break off and remain embedded in the body after an animal attack or envenomation. Kits to identify drugs in the urine can now be bought over the counter. The accuracy of these kits can vary significantly. Thus, results should not be regarded as proof that a certain drug has or has not been taken. Testing is best done in consultation with a professional. If done without a professional, results should be discussed with a professional who has experience with drug testing. The professional can help people interpret test results and draw the appropriate conclusions.
Prevention of Poisoning
In the United States, widespread use of child-resistant containers with safety caps has greatly reduced the number of poisoning deaths in children younger than age 5. To prevent accidental poisoning, drugs and other potentially dangerous substances should be kept in their original containers and the containers kept where children cannot get them. Toxic substances, such as insecticides and cleaning agents, should not be put in drink bottles or cups, even briefly. Other preventive measures include:
- Clearly labeling household products
- Storing drugs (particularly opioids) and toxic or dangerous substances in cabinets that are locked and out of the reach of children
- Using carbon monoxide detectors
Expired drugs should be disposed of by mixing them with cat litter or some other substance that is not tempting and putting them in a trash container that is inaccessible to children. People can also call a local pharmacy for advice on how to properly dispose of drugs. All labels should be read before taking or giving any drugs or using household products. Limiting the amount of over-the-counter pain reliever in a single container reduces the severity of poisonings, particularly with acetaminophen, aspirin, or ibuprofen. The identifying marks printed on pills and capsules by the drug manufacturer can help prevent confusion and errors by pharmacists, health care practitioners, and others.
Treatment of Poisoning
Some people who have been poisoned must be hospitalized. With prompt medical care, most recover fully. The principles for the treatment of all poisonings are the same:
- Support vital functions such as breathing, blood pressure, body temperature, and heart rate
- Prevent additional absorption
- Increase elimination of the poison
- Give specific antidotes (substances that eliminate, inactivate, or counteract the effects of the poison), if available
The usual goal of hospital treatment is to keep people alive until the poison disappears or is inactivated by the body. Eventually, most poisons are inactivated by the liver or are passed into the urine.
Provide supportive care
Poisoning often requires treatment, termed supportive care, to stabilize the heart, blood pressure, and breathing until the poison disappears or is inactivated.
For example, a person who becomes very drowsy or comatose may need a breathing tube inserted into the windpipe. The tube is then attached to a mechanical ventilator, which supports the person’s breathing. The tube prevents vomit from entering the lungs, and the ventilator ensures adequate breathing. Treatment also may be needed to control seizures, fever, or vomiting. If a poison causes a high fever, the person may need to be cooled, for example, with a cooling blanket, or sometimes by applying cool water or ice to the skin. If the kidneys stop working, hemodialysis is necessary. If liver damage is extensive, treatment for liver failure may be necessary. If the liver or kidneys sustain permanent, severe damage, liver transplantation or kidney transplantation may be needed.
Remove poison from the eyes and skin
Poisons in the eyes or on the skin usually should be washed off with large amounts of salt (saline) solution, or tap water. Sometimes soap and water is used on the skin.
Prevent absorption of poison
Very few swallowed poisons are absorbed so quickly that measures cannot be tried to keep them out of the bloodstream. However, such measures are effective only for certain poisons and situations. Stomach emptying (inducing vomiting or stomach pumping), once commonly done, is now usually avoided because it removes only a small amount of the poison and can cause serious complications. Stomach emptying rarely improves a person's outcome. However, stomach pumping may be done very rarely if an unusually dangerous poison is involved or if the person appears very sick. In this procedure, a tube is inserted through the mouth into the stomach. Water is poured into the stomach through the tube and is then drained out (gastric lavage). This procedure is repeated several times. If people are drowsy because of the poison, doctors usually first put a plastic breathing tube through the mouth into the windpipe (endotracheal intubation). Endotracheal intubation helps keep the gastric lavage liquid from entering the lungs. Doctors often used to give syrup of ipecac, a drug that causes vomiting, to children who swallowed poisonous substances. However, this treatment did not often remove significant amounts of the swallowed substance. Now doctors use ipecac only for substances that are highly toxic and when it would take a long time to get the person to the emergency department.
In the hospital, doctors do not give syrup of ipecac to empty the stomach because its effects are inconsistent. Activated charcoal is sometimes given in hospital emergency departments to people who have swallowed poisons. Activated charcoal binds to the poison that is still in the digestive tract, preventing its absorption into the blood. Charcoal is usually taken by mouth if the person is alert and cooperative. Introducing activated charcoal through a tube placed in the nose or mouth in people who are either uncooperative or lethargic is not recommended. Sometimes doctors give charcoal every 4 to 6 hours to help cleanse the body of the poison. Not all poisons are inactivated by charcoal. For example, charcoal does not bind alcohol, iron, or many household chemicals. Whole-bowel irrigation is a treatment method designed to flush a poison from the digestive tract. It is used only occasionally, for example, for serious poisoning caused by poisons that get stuck in the digestive tract or need to be moved physically (such as packets of hidden, smuggled drugs) or poisons that are absorbed slowly (such as some sustained-release drugs) or not absorbed by activated charcoal (such as iron and lead).
Increase elimination of poison
If a poison remains life threatening despite the use of charcoal and antidotes, more complicated treatments that remove the poison may be needed. The most common treatments are hemodialysis and charcoal hemoperfusion. In hemodialysis, an artificial kidney (dialyzer) is used to filter the poisons directly from the bloodstream. In charcoal hemoperfusion, a person's blood is passed over activated charcoal to help eliminate the poisons (see table Hemofiltration and Hemoperfusion: Other Ways of Filtering the Blood). For either of these methods, small tubes (catheters) are inserted into blood vessels, one to drain blood from an artery and another to return blood to a vein. The blood is passed through special filters that remove the toxic substance before being returned to the body. Alkaline diuresis is sometimes used. With this procedure, a solution containing sodium bicarbonate (the chemical in baking soda) is given by vein to make the urine more alkaline or basic (as opposed to acidic). This can increase the amount of certain drugs (such as aspirin and barbiturates) excreted in the urine. Although most poisons and drugs do not have specific antidotes (unlike the popular perception from TV and movies), some do. Some common drugs that might require specific antidotes include acetaminophen (antidote is N-acetylcysteine) and opioids such as heroin and fentanyl (antidote is naloxone).
Some poisonous bites and stings also have antidotes (see Snakebites). Not everyone who has been exposed to a poison requires its antidote. Many people recover on their own. But with severe poisoning, antidotes can be lifesaving.
Mental health evaluation
People who attempt suicide by poisoning need mental health evaluation and appropriate treatment.
The following are some English-language resources that may be useful. Please note that THE MANUAL is not responsible for the content of these resources.
- American Association of Poison Control Centers: Represents the US-based poison centers that provide free, confidential services (24/7) through the Poison Help Line (1-800-222-1222)
- Disposal of Unused Medicines: What You Should Know: Information on how to safely dispose of unused medicines
- PoisonHelp.org: For free, confidential online help about specific poisons.
<urn:uuid:d6a6e586-012d-4844-9023-a0f5e63462ad>
CC-MAIN-2022-33
https://www.merckmanuals.com/en-ca/home/injuries-and-poisoning/poisoning/overview-of-poisoning
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571472.69/warc/CC-MAIN-20220811133823-20220811163823-00697.warc.gz
en
0.923407
4,546
3.5625
4
The Lens of Colonialism European imperialism lays the groundwork for the avant‑garde So what does Japan have to offer the world from its corner of Asia? There are many aspects to this question, but in my opinion the most significant offering we can make is the Japanese aesthetic, its eye for beauty backed by a long history of development. The ability to see through to the underlying beauty of things should receive much more attention.Soetsu Yanagi When surveying the current landscape of graphic design— with its focus on digital platforms, dynamic typography, and modular layouts—the contributions of premodern Japan might not immediately come to mind as a primary reference point. However, if you closely examine the approaches within our field and trace the lineage of their development, you will discover that the Japanese perspective is a significant and defining factor. From minimalism to geometric stylization, many of the methodologies and techniques we employ within our contemporary practice have been significantly shaped by the legacy of Japanese creative innovation. There is evidence of human inhabitance on the archipelago of Japan that dates as far back as the prehistoric era— 30,000 BCE. The earliest documentation of Japanese language and culture can be found in Chinese sources dating from the 3rd century. Although the exact origins of Japanese culture remain a mystery to this day—it is generally agreed upon among scholars and historians that many aspects of early Japanese culture were significantly impacted by Chinese contributions. The Japanese written language utilizes Chinese characters called Kanji, and a large portion of its vocabulary is borrowed from Chinese. The earliest examples of Japanese painting deploy styles and motifs which align with those found in preceding Chinese works. Buddhism, which is currently practiced by over half of the Japanese population, was first introduced through the arrival of Chinese and Korean monks around 550. By the eighth century, Japan had developed a distinct and refined culture all its own. The duration from 794 to 1185 is known as the Heian period and is regarded as a golden age of classical Japanese art and culture. Named after the then capital city of Heian—or modern day Kyoto—this era is defined by a social structure centered on the ruling emperor and his imperial court. Part king and part religious leader, the emperor was the concentrated source of authority and influence. During this period, he and his accompanying nobility dominated the politics and economy of Japan and were also the primary patrons for the arts. Painters of the Heian period generally focused on classical subject matter that aligned with the tastes of this elite echelon. Traditional themes like idealized images of nature, mythological scenes, and images of noble life were the chosen subject matter for works created during this time. During the Heian period, noble families began to employ graphic symbols to signify status and identity. Family crests or Kamon were hung on walls and entryways, embroidered on clothing, and attached to belongings to indicate ownership. Woodblock printing first made its way to Japan in the late 700’s. By the 800’s, the practice of woodblock printing was spreading throughout Japan and was often employed in the creation of fabric screens and banners in which Kamon were combined with other graphic elements to create dynamic tableaus in rich and vibrant colors. 
Kamon, or Mon for short, often included stylized interpretations of natural elements like leaves and flowers. Though designed to represent family lineage, the crests were most revered for their decorative quality and were regarded as elements of aesthetic beauty.
The Kamakura Period
Directly following the Heian period was the Kamakura period—named after the new capital city of Kamakura—which spanned from 1185 to 1333. It was during this era that the position of the emperor and his aristocracy diminished and a different type of ruling class took their place within an emerging feudal system. At the center of this new system were shoguns — powerful dictators who represented a new elite class centered on military legacy and conquest. The first Shogun was Minamoto Yoritomo. In 1192, after many years of rival families and rogue emperors vying for political dominance, Yoritomo consolidated power over the country. He came from a powerful imperial family. The Shoguns that followed Yoritomo would also come from powerful lineages. These rulers were appointed by the emperor and supported by feudal lords called daimyos. While the position of emperor became almost entirely ceremonial during this era, with almost no real political influence, daimyos and their extended families controlled huge portions of land and wielded important economic and political power. During the Kamakura period, there began a long stretch of inner conflict, civil wars, and political unrest which would last for over four centuries and bring about major shifts in the socio-economic structure of the country. This era is distinguished by a lack of a cohesive centralized government, which contributed to a continuous power struggle between various emperors, shoguns, and daimyos. During this tumultuous time, military warriors called samurais were employed for protection. Samurais were considered an elite class of officer and an essential presence on the battlefield as well as at the helm of the daimyos’ vast estates. It was during the Kamakura period that the Samurai class adopted the use of Kamon as expressions of identity. A Samurai’s symbol or crest would often be seen at a distance emblazoned on a flag, helmet, or coat of armor as he was entering the battlefield. Because of this, the design of the Kamon became much more simplified and abstract. Many Samurai Kamon are made up of geometric shapes — these distilled forms retain their visual impact even when seen at a glance or from far away. Over the next 400 years, the use of Kamon would become more popular and widespread, eventually becoming used by the common classes as a means of cultural and familial representation.
A New Perspective
By the mid 1400’s, after a succession of short-lived rulers and vanquished Mongol invasions, daimyos began to usurp the shogun as the center of leadership in Japan. While past Emperors and Shogun were from noble lineage, daimyos could ascend from a variety of backgrounds and social classes. Some daimyos were, like the Shogun before them, the progeny of prestigious military dynasties; however, many came from more humble origins. There were daimyos who began their careers in military service or as government officials. By the 1600’s, many samurais began to rise to the rank of daimyo. These feudal dignitaries were quite different from the aristocratic leaders of the past. They were not content being secluded in palaces away from common citizens.
They were not interested in the classical art and culture that had been de rigueur within the noble caste for centuries. This change in leadership coincided with a larger evolution in Japanese society. Through the 16th and 17th centuries, a culture once exclusively shaped by the experiences of the elite began to collectively expand its perspective to center on a broader range of class identity. This change spanned all aspects of Japanese life—social, political, and economic. Within the arts specifically, a new creative sensibility arose that responded to the tastes of a new class of patrons who were looking to break with tradition and embrace a different way of looking at the world. Artists began to depart from the refined subject matter and restrained expression that had, for centuries, characterized past approaches in favor of vibrant colors, bold linearity, and dynamic fluidity. In place of the familiar subjects—nobility, nature, depictions of war—painters and illustrators during this time began to focus on the vibrancy, humanity and beauty of common everyday existence.
The Edo Period
In 1603, Japan was again brought under shogun rule. Tokugawa Ieyasu began as the son of a daimyo and rose to power in the wake of the Battle of Sekigahara, in which he defeated his internal rivals. Tokugawa unified Japan and instilled a sense of stability that would last almost 300 years. Following him came a long line of familial successors; the Tokugawa shoguns ruled up until 1868, when the emperor and imperial court regained power under the Meiji Restoration. The period of the Tokugawa Shoguns is referred to as the Edo period (1603–1868), named after the then capital city of Edo, which is modern day Tokyo. Tokugawa Ieyasu and his administration understood the significance of Japanese culture and wanted to create an environment of safety and security so this culture could thrive. In an effort to address both inner conflict and the looming threat of European Colonialism, which had begun to rise during the 1400’s, a policy of Sakoku, or national isolation, was established. Under this policy, not only did Japan close its borders to all other countries, Japanese citizens were also forbidden to travel abroad. Leaving the country was punishable by death. This period lasted for over two centuries until, in 1854, Japan was ordered by the US to open its ports and participate in trade and commerce with the West. Within the seclusion of the Edo period, there arose a wealthy leisure class who thrived within the developing economic prosperity that characterized this era. To cater to this flourishing middle class, entertainment districts were established in all major cities. Areas containing theaters, bars, massage parlors, baths, teahouses, and brothels became centers of metropolitan life and creative culture throughout Japan. “Edo-period cities contained newly rich townspeople, mostly merchants and artisans known as chōnin, who gained economic strength by taking advantage of the dramatic expansion of the cities and commerce. Eventually, they found themselves in a paradoxical position of being economically powerful but socially confined. As a result, they turned their attention, and their assets, to conspicuous consumption and the pursuit of pleasure in the entertainment districts.” Department of Asian Art. “Art of the Pleasure Quarters and the Ukiyo-e Style.” In Heilbrunn Timeline of Art History. New York: The Metropolitan Museum of Art, 2000–.
http://www.metmuseum.org/toah/hd/plea/hd_plea.htm (October 2004)

The word Ukiyo originates from a Buddhist term which is meant to express the impermanence and fleeting nature of life. The approximate translation is "to float". Within the context of the hedonism and mystery that distinguished life in the Edo period of Japan, the term Ukiyo took on a very different meaning. The pleasure centers of cities like Edo (modern-day Tokyo), Kyoto, and Osaka catered to the desire for fantasy, wonder, and escape that exemplifies the zeitgeist of Japan throughout the 17th, 18th, and 19th centuries. These spaces became known as "floating worlds" where everyday people could see and experience the pleasures of metropolitan life—art, performance, fashion, romance, and connection. The characters who inhabited these floating worlds were larger than life. Courtesans, actors, and dancers captivated the clientele who frequented the establishments of these districts. These dynamic figures also captured the imaginations of artists and writers of the time and became the central subject of many creative works of the Edo period. Certain literary works which centered on the experiences inside the entertainment districts of Japan were called ukiyo-zōshi, or "floating world booklets". Paintings, illustrations, and prints from this era are known as ukiyo-e—"pictures of the floating world."

A History of Printing

Woodblock printing first came to Japan from China in the 700's and by the 12th century was primarily being used to print Buddhist texts. The first non-religious book—a dictionary of Chinese and Japanese terms—wasn't printed until 1590. Many more books focusing on secular subject matter soon followed. Though movable type was first developed in China in 1040 by the inventor Bi Sheng, it was introduced to Japan through Korea around the time that Tokugawa Ieyasu was rising to power—around 1593. Ieyasu later ordered the creation of movable type printing presses to produce publications which featured historical and political subject matter. Other private presses were established which printed works of Japanese literature. Because of the vast number of characters in the Japanese writing system, a press which incorporated movable type was quite an investment, and the books created using this system were expensive and not accessible to those outside of the noble classes.

The Edo period marks a time when, along with economic prosperity and cultural innovation, literacy and scholarship were on the rise. There was a large demand for books from Japanese citizens of all classes. Because of this, the format of woodblock printing—an easier and cheaper method of book production—became the dominant means of printing during this era.

"To create a woodblock print in the traditional Japanese style, an artist would first draw an image onto washi, a thin yet durable type of paper. The washi would then be glued to a block of wood, and—using the drawing's outlines as a guide—the artist would carve the image into its surface. The artist would then apply ink to the relief. A piece of paper would be placed on top of it, and a flat tool called a baren would help transfer the ink to the paper. To incorporate multiple colors into the same work, artists would simply repeat the entire process, creating separate woodblocks and painting each with a different pigment." Richman-Abdou, Kelly. "The Unique History and Exquisite Aesthetic of Japan's Ethereal Woodblock Prints" My Modern Met (August 2019) p. 24
During the mid 1600's, demand for illustrated publications grew, creating a market for book illustrators, many of whom were also painters. Hishikawa Moronobu was one artist who was well known for both his paintings and his book illustrations. He worked with a book publisher to create what is considered one of the first Ukiyo-e prints—a single-sheet, one-color woodblock print featuring illustrations of female figures. The prints immediately became popular with collectors and many other artists began to emulate this new format. The prints were initially printed in black; many artists later integrated watercolor and colored ink to achieve full-color works.

Ukiyo-e celebrated everyday city life and the drama and romance of the pleasure districts. Artworks depicted townspeople shopping in the market, crowds gathered at a festival, or patrons enjoying a theatre performance. The early works were almost exclusively focused on erotic subject matter—courtesans and their relationships with their clientele. Courtesans were paid to court the wealthy merchants, officers, and artisans that frequented the brothels and bathhouses. Their position was critical within the social structure of life in the Edo period.

"Thus, the Japanese courtesan both was and was not a prostitute. She was, indeed, bought for money; but at the same time she enjoyed a considerable degree of freedom and influence in her own limited world. It was this unique quality, plus the traditional Japanese unconcern with moral problems in this connection, that was to make the Japanese courtesan…the subject of and the stimulus for a vast body of surprisingly excellent literature and art that was to sustain itself for nearly three hundred years." Lane, Richard. "Images From the Floating World, The Japanese Print" Secaucus, New Jersey: Chartwell Books (October 2004) p. 24

Over the years, as Ukiyo-e prints rose in popularity, their subject matter began to expand. By the late 1600's, prints began to depict themes outside of the erotic and featured samurai, actors, and ordinary townspeople in addition to courtesans and their suitors. During this time, Kabuki theaters began to appear in the entertainment districts. A stylized form of dance and performance art first invented by a Japanese dancer and Shinto priestess named Izumo no Okuni, Kabuki incorporated lavish costumes, dramatic makeup, and artful choreography. Kabuki performers made ideal subjects for Ukiyo-e prints and paintings.

The rise of the Ukiyo-e artist Suzuki Harunobu in the mid 1700's coincided with the development of multicolor woodblock printing. The unparalleled vibrancy of Harunobu's prints is juxtaposed with a solemn and quiet sensibility. His work often depicts children and scenes of domestic life. The creation of the full-color Ukiyo-e print is regarded as a significant milestone in the history of Japanese culture. Because Ukiyo-e prints were multiples printed in editions, they were less expensive than singular artworks, making them accessible to a wider range of patrons. The prints, however, were not viewed as cheap imitations of the paintings or illustrations from which they were derived. These prints signify a fully realized embodiment of the artistic sensibility of the Edo period and eventually became more sought after than paintings. They were perceived as a new and dynamic artform. The format of woodblock printing perfectly aligned with the prevailing style of the Edo period.
The inherent flatness of the woodblock medium intensified the flat linear language and stylized perspective that characterized the Ukiyo-e approach. Woodblock inks contained a high ratio of pigment, resulting in dazzling and luminous colors which were more vibrant than those found in traditional paintings. The graphic flatness and bold colors resulted in eye-catching works whose appeal transcended class and social status. The Ukiyo-e prints bridged high and low culture within Japan and beyond.

By the 1800's, artists like Katsushika Hokusai and Utagawa Hiroshige began to depart from figural themes and revisit the subject of nature which had been so prevalent in the earlier classical works of preceding periods. Landscapes, seascapes, flora and fauna, when rendered in the Ukiyo-e style as full-color woodblock prints, were infused with a fresh perspective. In the 1830's, when Hokusai was in his seventies, he created some of the most prominent works within the Ukiyo-e genre. His Mount Fuji series and the iconic Under the Wave off Kanagawa are immediately recognizable not just as masterpieces of the Edo period but as seminal works within the global history of art and design. Hokusai's landscapes inspired many younger emerging artists—one of whom was Utagawa Hiroshige. In his 30's, Hiroshige decided to devote himself to the study of landscape painting. His masterful depictions of Japanese landscapes would earn him the position of one of the most revered artists of the Edo period. His work would serve as an inspiration for future generations and would later be discovered by European artists of the avant garde.

Japan has a rich tradition of craft that dates back to ancient times. Throughout its long history of development, Japanese craft has been intimately interconnected with the adjacent fields of art, design, and architecture. There is a long legacy of shared approaches within these creative disciplines. During the Muromachi period, which began in the 14th century, there began to emerge a distinct cultural perspective centered on minimalism, materiality, and a reverence for nature. These concerns have roots in both Zen Buddhism and the tea ceremony which, during the 1400's, went from being a religious ritual to a widely adopted social practice. It was during this time that the concept of wabi-sabi originated. Wabi-sabi is an aesthetic philosophy which values simplicity, emptiness, impermanence, and imperfection. Within different creative practices, the concept of wabi-sabi is represented in a variety of ways—from material choice to layout and proportion. Collectively, the presence of wabi-sabi translates into a specific value system within which elements like emptiness, negative space, pure form, abstraction, and deformation are viewed as essential features of meaning and experience. If we compare these approaches with conventions or styles that appeared in European countries at around the same time, we see an extreme contrast in perception. From ancient Rome to Victorian England, the European perspective has long positioned labor, realism, and idealized execution as central elements of value and legitimacy.

The End of Sakoku

In 1853, Japan's period of Sakoku was brought to an abrupt end by the US Navy. Under the command of Commodore Matthew Perry, Japan was ordered to once again open its ports and engage in trade and commerce.
When Japan finally emerged from its time of isolation, the country would find it had missed the European Renaissance, the Age of Enlightenment, and the Industrial Revolution. Japanese art and design quickly made its way across the globe as the newly industrialized world received its first glimpse of the creative masterpieces of the Edo period. An exhibition of works from Ukiyo-e artists was featured at the International Exhibition of 1867 in Paris. Shortly after, Ukiyo-e prints became highly sought after in major cities like London, Paris, and Madrid.

The Ukiyo-e style would become one of the primary catalysts of the avant garde and modernist movements. These works served as important examples to artists throughout Europe who were looking to break with past styles and find alternative models of creative expression. The organic linear quality of Ukiyo-e prints and the focus on nature in later Ukiyo-e works informed the development of the art nouveau style. The graphic flatness and bold stylization found in both Ukiyo-e prints and Japanese Kamon inspired artists like Van Gogh and Gustav Klimt and laid the groundwork for art deco in the early 1900's. The minimalism and materiality inherent in Japanese craft, architecture, and interior design influenced prominent modernist architects from Frank Lloyd Wright to Richard Neutra.

As we survey the impact of Japanese work on the European creative perspective, it is important to understand the characteristics which defined European experiences during this time and to acknowledge how outside perspectives and cultures were received within these experiences. The popularity of Japanese art and design that ignited throughout Europe in the wake of the end of Sakoku is referred to by historians as Japonisme. As Japanese contributions were first beginning to infiltrate the European consciousness and Japonisme was growing, European countries were at the height of their colonial campaigns and at the center of a newly developed industrialized world. Inherent in colonialism and, in turn, capitalism and industry are systems of dominance, desecration, and erasure. As European countries expanded their territories across countries in Asia, Africa, and the Americas, they would, through plunder and forced assimilation, erase existing cultures and histories and, in their place, institute Euro-centric customs and beliefs. This dominance is present in all transactions with outside cultures. Though Japanese approaches were highly regarded and valued by European audiences, this positive reception was centered on concepts of "otherness" and defined by a lack of cultural understanding.

At its core, Japonisme was opportunistic and self-serving. European creatives were exposed to Japanese works at a time when they were looking for alternative creative approaches. Within the new industrialized landscape, labor and realism, which had long been hallmarks of legitimacy, were fast losing their cultural relevance. With the advent of mechanical processes, the "look" of labor could now be cheaply replicated. The development of photography would soon render the skill of naturalistic replication obsolete. Surely European creatives felt the clock ticking and responded in kind. The "new" and "exotic" functioned as means of "inspiration" for artists and designers looking to break from traditional European perspectives at a time when those perspectives had run their course. An unfortunate circumstance of the European appropriation of Japanese works is the loss of cultural context.
The traditional Japanese perspective represents a complete shift in values, both aesthetically and philosophically, when compared to those of 19th century Europe. Surely an outlook in which concepts like emptiness and simplicity are regarded as meaningful and worthy could have served as an opposing and corrective force against the surge of European colonialism and capitalism that would, by the twentieth century, come to dominate the world.

The Rise of Colonialism

During the 1800's, Europeans were seeing examples of art and design from different cultures around the globe, not only in the books and publications that were being printed but also in the form of physical artifacts that were appearing throughout Europe as a result of colonial conquests. The era of European colonialism spans from the 1400's up until the 20th century. The colonial era, deplorably named the age of discovery by some historians, began with Portugal, which was, in the 1420's, exploring the coast of Africa in search of a sea route to India. Portugal ended up colonizing several African islands, setting up trade posts, and establishing a lucrative business selling resources of the region—namely gold, ivory, and the islands' inhabitants. The next country to colonize was Spain, which sent Christopher Columbus on a similar expedition to establish a trade route with India. His journey of 1492 landed him on the continent of North America. Spain eventually claimed most of Latin America, while simultaneously expanding into the Philippines. During this time, Portugal claimed Brazil in addition to other African territories. Both countries enslaved the native populations living in the lands they colonized and put them to work on plantations or in mines. Other countries that began expanding outside their borders were England and France, which fought over areas of Africa in the late nineteenth century, and the Netherlands, which took control of parts of Indonesia in 1800.

By 1900, 90% of African lands had been colonized by European powers—France, England, Spain, Germany, Belgium, Portugal, and Italy. The only countries to escape colonization were Ethiopia and Liberia. These European powers were interested in the resources of Africa—minerals, agriculture, ivory, and Africans. Today Africa is made up of 54 countries. These borders were not drawn by Africans; rather, they were the result of 7 European countries dividing up the continent in 1884 and staking claim on territories without regard for cultures, tribes, or regions.

The Impact of Africa

In addition to stripping the conquered countries of natural resources and enslaving their people, another result of colonial conquest was the looting of indigenous artwork. Beginning in the late 1800's, thousands of objects from colonized lands began to appear in museums, galleries, and specialty shops throughout European cities like Paris, Berlin, Munich, and London. African sculptures and textiles were among the works most prevalent at this time. Though at first regarded with a novel curiosity by Europeans, who used words like "savage", "tribal", and "primitive" to describe the indigenous works, eventually the bold stylization and graphic geometry of the masks and prints taken from cultures living on the continent of Africa began to grab the attention of emerging artists like Pablo Picasso and Henri Matisse.
During this time, many of the artists who were living and working in prominent European cities were contributing to a series of creative movements that would significantly alter the course of art and design in Europe and beyond. The creative approach of African artists and designers would be one of the key elements that would spark this visual revolution.

In Europe, starting from the ancient Greeks and Romans and continuing through the middle ages, the Renaissance, and the Victorian era, the primary value system within which practitioners of the visual arts were working was centered on replication. Creative endeavor, in large part, was in service of literal representation. Though there are, within this system, countless examples of expressiveness, inventiveness, mastery, and conceptual expansiveness, they all take place within the narrow confines of the European perspective, which rarely departs from the objective of interpreting the natural physical world and the corporeal bodies that inhabit that realm. The few examples where imagination is employed to execute a departure from naturalistic subject matter are most likely in service of interpreting the more otherworldly aspects of Christian mythology. The highly stylized and abstracted work from nations like Gabon and the Republic of the Congo in Central Africa, or the Ivory Coast on the western part of the continent, presented examples of creative perspectives looking within. African artists were using imagination and aesthetic vision to re-imagine and reinvent the world around them. They were departing from the conventions of naturalism and mixing aesthetic concerns with elements of function, resulting in a sophisticated and groundbreaking combination of art and design which would serve as a model for the future modernist movements in Europe. When we survey the avant garde and modernist movements of the late 19th and early 20th centuries, we see again and again the impact of the African perspective on the output of European creatives.

Beginning in the twentieth century, the success of European artists and designers would hinge upon their ability to create the appearance of visual innovation. Within the growing spaces of media, marketing, and advertising, there would be a constant expectation of novelty and reinvention. The perception of the "new"—new ideas, new messages, new forms—would function as a means of grasping the imaginations and attentions of consumers, which is the primary intent of marketing. You can imagine that, towards the end of the Victorian era, artists and designers of the time were feeling the pressure of these expectations. So many aspects of life had completely transformed; yet the prevailing visual styles of the Victorian era were, at best, mashups of recycled conventions from preceding eras. Towards the end of the 1800's, we see a variety of approaches which depart from the Victorian perspective. The Arts and Crafts movement looked to the past. They rejected the sensibilities of the Victorian era and their accompanying methods of production in favor of revisiting Renaissance and pre-Renaissance styles and methods. Other artists and designers of the time looked to outside cultural perspectives for inspiration and ideas. In the late 1800's, there emerged a number of experimental movements centered on finding new means of creative expression. These creative communities collectively formed the European avant garde.
Constructivism, Futurism, Cubism, Art Nouveau, and countless others were united in a quest for finding new ways of seeing, thinking, and making. For these artists and designers, Japanese and African works served as doorways into new ways of looking at the world. The artists of the avant garde employed aesthetic approaches like minimalism, abstraction, geometry, flatness, and stylization—qualities which are inherent in traditional Japanese and African art and design.

The Benin Empire

Once located in what is now southern Nigeria, the Kingdom of Benin is regarded as one of the oldest and most developed states in the coastal area of West Africa. Also known as the Edo Empire, it was first formed in the 11th century and eventually became known as a prominent center of innovation and culture until it was colonized by the British in 1897. Benin City, the capital of the Benin Kingdom, still exists today and is located in present-day Edo State, one of the 36 states of modern-day Nigeria.

At the peak of the Benin Empire, Benin City was considered one of the architectural marvels of Africa. The walls of Benin City were a series of massive, man-made earth formations made up of interlocking banks and ditches. The 1974 edition of the Guinness Book of World Records described the walls of Benin as "the world's largest earthworks carried out prior to the mechanical era".

"In all, they are four times longer than the Great Wall of China and consumed a hundred times more material than the Great Pyramid of Cheops. They took an estimated 150 million hours of digging to construct and are perhaps the largest single archaeological phenomenon on the planet." Fred Pearce, New Scientist Magazine, 1999

Benin City was one of the first cities to have street lights. Palm oil lamps in metal encasements were affixed to tall poles and placed overhead throughout the city to illuminate the streets of Benin City at night. The layout of the city was based on the design of fractals. Interlocking symmetry and repeated forms served as the foundation for the city's structure and layout. In his book African Fractals, the mathematician Ron Eglash explains, "When Europeans first came to Africa, they considered the architecture very disorganized and thus primitive. It never occurred to them that the Africans might have been using a form of mathematics that they hadn't even discovered yet."

Equally awe-inspiring is the art of the Benin culture. By the 1300's the Benin Empire had over 2 million inhabitants and was known throughout Africa and beyond for its masterful bronze sculptures. The Oba, or king, would commission these artworks to honor past ancestors or celebrate achievements of the Benin people. Cast plaques and figurative works featured elaborate ornamentation and bold stylized forms. As the Benin Empire began to engage in trade with European nations in the 1400's, the bronzes became sought after by European collectors.

The city and its groundbreaking architecture would be completely destroyed by British soldiers who forced the Benin Empire under British occupation in a devastating battle in February of 1897. Villages were burned to the ground. Shrines, palaces, and holy spaces were looted by British forces. Sacred objects and artworks were stolen, including a vast collection of the prized Benin bronzes. The stolen works eventually made their way to museums and galleries throughout Europe and to the homes of private collectors.
In 2016, students from Jesus College, part of Cambridge University in England, demanded a stolen Benin bronze be removed from display at the college. The artwork had been donated to the school by the father of a student in 1905. In November of 2019, the college announced it would be returning the sculpture to Nigeria. The decision was made after a committee of students and teachers had traced the origins of the sculpture's acquisition and learned that it had been taken in the original British raid of 1897. There are hundreds of stolen Benin bronze sculptures in the UK alone. Many scholars and historians are hoping that institutions which have a Benin bronze in their collection will follow Cambridge's example and return the artwork to its rightful owners—the citizens of Nigeria. Just this past March, in 2021, the University of Aberdeen in Scotland said that it planned to return a Benin bronze that had been in its collection since 1957, and recently the German government has also agreed to return plundered Benin artworks to Nigeria.

The Kuba Kingdom

Between the 17th and early 20th centuries, the Kingdom of Kuba was one of Africa's most prominent empires. Located in what is today the southeastern part of the Democratic Republic of the Congo, at its height the Kuba kingdom was a dominant presence in regional trade and commerce. Through the export of ivory and rubber, Kuba leaders amassed great wealth and political influence. The empire also had a sophisticated socio-economic system that included a complex legal process with trial by jury, methods of taxation, and a constitution. The Kuba people were also known for their distinct approach to visual art and design—stylized masks, graphic sculptures, intricate beaded jewelry and tapestries, and, most notably, their boldly patterned textiles, or Kuba cloth, which was highly regarded and sought after throughout Africa and beyond.

Kuba cloth is made from the leaves of the Raffia palm. The leaves are cut and then dyed using natural materials. Once the fibers are softened, they are then woven into a sturdy cloth using a loom. This first part of the cloth making process has traditionally been done by men. Once the cloth is prepared, it is then embroidered with vibrant geometric patterns. The embroidery has traditionally been done by women. The process is extremely time-consuming. A small piece of fabric can take several days to produce. The embroidered patterns of Kuba cloths often incorporate repetition of form along with unexpected visual elements that can break or deviate from the main pattern or design. Across the visual arrangement of a single textile, patterns will shift, morph, or change completely. Each graphic composition contains elements of meaning and personal mythology. The designs as a whole tell a story—usually narratives that express some aspect of cultural history or achievement.

The Fang People

The culture known as the Fang people first migrated to Central Africa in the 1800's and can now be found in areas of Equatorial Guinea, Northern Gabon, and Southern Cameroon. Traditionally farmers and hunters, the Fang are well known for their knowledge of animals, plants, and herbs in the region they inhabit. They are also recognized for their works of art—masks, idols, and objects made from wood, iron, and stone. Many Fang artworks were originally created to be used within rituals or celebrations. These artworks were not meant to be viewed as static objects.
They were made as sacred elements to be integrated within physical experiences like dances, burials, and spiritual ceremonies. The Fang employ a distinct visual approach within their creative works—a combination of bold, stylized forms, distilled simplicity, and dynamic expressiveness. This distinct style has earned the Fang culture international acclaim. Fang works are included in the collections of countless museums across the globe. During the 1900's, the work of the Fang people significantly influenced the artists of the avant-garde movements in Europe. Creatives like Modigliani and Brancusi borrowed heavily from the visual innovations of the Fang perspective.
- For ages 10 and up
- For 3 to 5 players
- Approximately 90 minutes to complete
- Counting & Math
- Logical & Critical Decision Making
- Pattern/Color Matching
- Strategy & Tactics
- Auctioning & Bidding
- Child – Moderate
- Adult – Easy

Theme & Narrative:
- Build a college to enlighten the world!
- Gamer Geek approved!
- Parent Geek approved!
- Child Geek mixed!

Socrates said, "The only good is knowledge and the only evil is ignorance." In the spirit of Socrates and his passion to challenge people's thinking (which ultimately led to his death), schools, colleges, and universities have been formed. Students of all ages and from all walks of life have met under roofs of thatch, mud, wood, metal, and marble for centuries to learn. You are now tasked to create a college that will house the greatest scholars so young minds might be challenged, nurtured, and released into the world in hopes of making it a better place.

Dreaming Spires, designed by Jeremy Hogan and published through Game Salute, is comprised of 120 wooden cubes, 88 Building tiles, 27 Event cards, 43 Scholar cards, 48 coins (in assorted values), 1 Money bag (for all those coins), 1 scoreboard, and 1 Chancellor's Mace. The tiles, scoreboard, and cards are all durable and of excellent quality. The cubes and Chancellor's Mace are both made of solid wood, but you'll want to be gentle with the Chancellor's Mace. It's roughly the length of a large toothpick and its thinner portions could easily be snapped in two if a player is a bit overzealous with their chancellery powers. You'll be fine during the game, but don't go giving the mace to the Child Geeks to play with.

Beginning Your Education

To set up the game, first place the scoreboard in the middle of the playing area. Second, sort the Scholar and Event cards into two different piles. Further organize each pile by separating the cards by era. The back and the border of each card's face will be 1 of 4 different colors. These colors indicate the different eras. You should now have 4 Scholar card piles and 4 Event card piles organized by color. Set these piles aside for a moment, face-up. Each era and its matching color are summarized here. These colors are used for the cards and the scoreboard.

- Medieval Era: Gray
- Enlightenment Era: Yellow
- Imperial Era: Red
- Modern Era: Blue

Take the Medieval Era Scholar and Event card piles, give each pile a shuffle, and place them next to the scoreboard, face-down. Draw 2 Event cards and place them face-down next to the Event draw pile. Draw 5 Scholar cards and place them face-up next to the Scholar draw pile. Third, find all the "Garden" and "Quad" Building tiles, setting them aside, face-up. Take the remaining Building tiles and shuffle them face-down or place them in a box. Draw 10 and place these Building tiles, face-up, to one side of the game playing area. Fourth, place all the coins in the bag and stir them in the bag to mix them up. Set the Money bag to one side of the game playing area for now. Fifth, give each player 24 cubes of a single color. Any cubes not given should be returned to the game box. That's it for game setup. Give the first player the Chancellor's Mace and begin.

Building the Foundations of Knowledge

Dreaming Spires is played in turns and rounds. Rounds are referred to as "Eras" and there are 2 turns per Era. The Eras are Medieval, Enlightenment, Imperial, and Modern. While the Eras might be different, they all share the same game play, which I will now summarize.
Step 1: Collect Money (on first turn of Era only)

Each player, in turn order sequence, randomly draws 4 coins from the Money bag. Players should keep the coin values, along with any others they have collected, a secret.

Step 2: Clean Scholars and Events (on first turn of Era only)

For the first Era, this step should be skipped because the Era was set during game setup, but every subsequent Era after the first needs fresh scholars and events to ponder and fret over. All face-up Event and Scholar cards are picked up and removed from the game, including those owned by the players. Thematically speaking, the Scholars have died. The current Era's Scholar and Event piles are shuffled and placed face-down next to the scoreboard. Draw 2 Event cards from the draw pile and place them face-down next to the Event card draw pile. Then draw 5 Scholars and place them face-up next to the Scholar card draw pile. Note that all Reputation markers on the scoreboard and Building tiles remain in play when a new Era begins.

Step 3: Reveal Event

The player who is currently Chancellor selects 1 of the 2 face-down Event cards and flips it over, reading it out loud if required. The Event is not resolved at this time, but all the players now know what card is up for grabs at the end of the turn. If there is only 1 face-down Event card available, the Chancellor reveals that card.

Step 4: Take Actions

Starting with the Chancellor and then continuing in turn order sequence, each player will take 4 actions. The available actions are listed here and players can take the same action multiple times if they like.

Action: Build a Building

For this action, players can either select from the 10 Building tiles that are visible, a "Quad" Building tile, or a "Garden" Building tile. Some Building tiles cost money, which is shown on the Building tile. Players cannot grab a Building tile they cannot place. Building tiles represent the different sections of the player's college. The first Building tile the player acquires is placed in front of them. All subsequent Building tiles must be built off this tile or previously built tiles. Every player's first Building tile will either be a "Garden" or a "Quad" Building tile. These two tiles can be built next to any other tile. All other Building tiles can only be placed next to a Building tile that shares the same color on at least 1 side. There are Building tiles with 2 colors, but they only need to touch the side of another Building tile that has at least 1 matching color. The game's rule book provides an example college build that does an excellent job of showing how tiles are used in the game. The Building tiles provide benefits that are used to attract Scholars. However, most Building tiles only provide 1/4 of the benefit (located on the Building tile's corners). This means players will need to acquire Building tiles with specific benefits and colors to correctly build their college. There are also Building tiles that provide the benefit by default and do not need to be attached to another Building tile to complete the benefit icon. The benefits are as follows.

Note: New Building tiles are drawn AFTER the current player completes their 4 actions.

Action: Admit a Scholar

The face-up Scholar cards for the Era are available to admit to a player's college, but they must first meet the Scholar's requirements. Listed on the top of every Scholar card are the required benefits provided by the Building tiles.
The number of total Building tiles a player has in their college is not important, but they must have enough buildings that provide the benefits listed. Each Building tile can only be used once to provide a benefit requirement. If the player meets the requirements, they can take the Scholar and place it in front of them.

Note: New Scholar cards are drawn from the Scholar draw pile AFTER the player completes their 4 actions.

Action: Use a Scholar

A player can use any of their Scholar cards that have not yet been used in the current turn. Scholars can do one of the following, but not both.

- Increase the player's college's reputation
- Perform their scholarly ability

A college's reputation is tracked on the scoreboard. If the scholar is used to improve the college's reputation, the Scholar card will list one or more subjects and the bonus the Scholar provides. The player will need to select which subject they want to improve and then move their player cube accordingly on the scoreboard. If the player doesn't have a cube yet on the scoreboard for the subject they want to improve, it's now placed. Scholarly abilities can be very powerful, but can also cost the player. Some scholarly abilities will require the player to reduce their reputation in a subject or pay money. These abilities are neither good nor bad in the short run, however. The player must determine how the Scholar's academic influence can further their college's reputation. Sometimes, to further one's goals, you have to pay upfront and hope the future return is worth the cost.

Action: Draw 1 Random Coin

As an action, the player can draw 1 random coin from the Money bag and add it to any other coins they have collected. Remember to keep all coins hidden and secret from opponents.

Step 5: Resolve Event

The Chancellor now resolves the revealed Event, but not in the way you might think. Players bid for the Event card and its reward. More to the point, players are also bidding on who will go first during the next Era, as the winner will become the new Chancellor. Bidding is determined by the Event type.

- Action: The Chancellor opens the bid and then the bidding progresses in turn order sequence. Players must always bid higher than the last bid or pass. Players who pass can bid the next time the bidding turn order sequence returns to them. When no one bids higher than the previous bid, the player who bid the highest pays the amount using their coins and collects the Event card, as well as the Chancellor's Mace. The Event card's reward is then resolved.
- Silent Auction: All players place any amount of coins in their hand and keep it secret from their opponents. When ready, all players reveal their bid by opening their hand and stating how much they are offering. The player with the highest bid wins the Event card, the Event card's reward, and the Chancellor's Mace.
- Dutch Auction: The Chancellor slowly counts down from 12 to 0. Any other player can shout "bid" during this countdown. The player who shouted "bid" now pays the amount in coins equal to the last number spoken by the Chancellor, collects the Event card, resolves the Event reward, and receives the Chancellor's Mace.
- Contribution: All players place any amount of coins in their palm secretly and then reveal them when everyone is ready. The players with the first and second highest amounts of coins share the Event card's reward, but the player who bid the highest claims the Chancellor's Mace. If players tied for first, no reward is given to the second highest bid.
All players, regardless of whether they won or not, lose the coins they used in the bid.

Note: The Chancellor's Mace always goes to the player who won the bid. In the event of a tie, the current Chancellor gets to determine who will get the mace by selecting one of the tied players. The rules do not state if bribes are allowed or not. I'm just saying…

Events can also be triggered by scholarly abilities. If so, an Event card is drawn from the Event card draw pile and the player who triggered the Event runs the bid as normal. However, the winner of the bid does not claim the Chancellor's Mace. That only happens when an Event is won at the end of a turn.

Step 6: Score Era (on second turn only)

At the end of the second turn of the Era, players now determine their score. This step is skipped during the first turn of every Era. You'll want to picture the game's scoreboard as you read what I'm about to describe, as scoring in this game can be a bit tricky to understand at first. To begin with, the scoreboard has three different sections that are used and there are multiple steps to determine the Era's score. An Era is scored as follows.

A) Determine Era Score Box Ownership

First, you will only be scoring certain College Reputation rows and Academic Reputation columns. Each Era scores differently and what is scored is indicated by the Era Score Box (my term for it – the game refers to them as "active scoring boxes"). For example, the Medieval Era will be scoring the "Fellows" and "Student" College Reputation rows and the "Politics", "Theology", "Philosophy", and "Science" Academic Reputation columns. To make it easy to determine which boxes in the Era Score Box are being scored, match the boxes to the current Era color. Only the current Era and previous Era boxes are scored.

Second, for each Era Score Box, determine which player has the most points by adding the numbers in the row and the column that intersect with the Era Score Box. For example, the left top-most Medieval Era Score Box scores the "Fellows" College Reputation row and the "Politics" Academic Reputation column. The player who has the highest total places their cube color in that Era Score Box.

Note: Reputation cubes are NEVER removed from the scoreboard.

If there are ever ties, they are broken by first awarding the win to the player who has more points in the Academic Reputation for that specific Era Score Box. If that cannot be used, victory goes to the player with the biggest college (number of tiles). If there is still a tie, the player with the most money wins, followed by the Chancellor determining who should win.

B) Eliminate Era Score Box Ownership

Count the number of Era Score Cubes each player has. The player with the least number of cubes removes their cubes from the Era Score Boxes. Ties are broken based on size of college, then money, and then by the Chancellor's decision. Those empty Era Score Boxes must now be filled. Determine which player cube should be placed by repeating "A" above, but ignore the cube colors that match the cubes that were just removed from the Era Score Boxes. In this way, a player who never had a chance to win an Era Score Box might suddenly find themselves with 2 or more. Repeat "B" until there is only 1 player cube color remaining. This player has won the Era and collects 1 coin from the Money bag at random. If this was the final Era, the player who wins also wins the game. For the programmatically inclined, a rough sketch of this elimination loop follows below.
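To be clear, the sketch below is my own illustration in Python and not anything from the rule book or an official implementation. The data model (a "reputation" dictionary read off the scoreboard, a list of active "boxes") and the "tie_break" helper are assumptions of mine, and the sketch re-assigns every active box on each pass rather than refilling only the vacated ones, which works out to the same winner.

```python
def box_score(player, box, reputation):
    """Row value plus column value for one Era Score Box."""
    row, col = box
    return reputation[player].get(row, 0) + reputation[player].get(col, 0)


def score_era(players, boxes, reputation, tie_break):
    """Return the Era winner by repeated elimination.

    players    : list of player identifiers
    boxes      : list of (college_row, academic_column) pairs active this Era
    reputation : {player: {track_name: points}} read off the scoreboard
    tie_break  : callable(tied_players, box_or_None) -> one player
    """
    remaining = list(players)
    while len(remaining) > 1:
        # A) each active Era Score Box goes to the highest-scoring remaining player
        owners = {}
        for box in boxes:
            scores = {p: box_score(p, box, reputation) for p in remaining}
            best = max(scores.values())
            tied = [p for p in remaining if scores[p] == best]
            owners[box] = tied[0] if len(tied) == 1 else tie_break(tied, box)

        # B) the remaining player holding the fewest boxes drops out, then repeat A
        counts = {p: sum(1 for o in owners.values() if o == p) for p in remaining}
        fewest = min(counts.values())
        low = [p for p in remaining if counts[p] == fewest]
        loser = low[0] if len(low) == 1 else tie_break(low, None)
        remaining.remove(loser)

    return remaining[0]


# A tiny, made-up example with two players and a single active box:
rep = {"red": {"Fellows": 3, "Politics": 2}, "blue": {"Fellows": 1, "Politics": 3}}
print(score_era(["red", "blue"], [("Fellows", "Politics")], rep,
                tie_break=lambda tied, box: tied[0]))  # prints: red
```

In the real game the tie-breaking order is Academic Reputation in the contested box, then college size, then money, then the Chancellor's decision; the sketch simply hands all of that to the tie_break callback rather than modeling it.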
Now here is the part that confused many of our players. As the game progresses, the cubes in the Era Score Box will continue to be added and removed. Each Era builds off the previous one. This means each scoring portion at the end of an Era will take slightly longer than the last one, as the number of possible empty Era Score Boxes that need to be refilled increases. But the way you determine the winner is always the same. It just takes a bit longer each time you do it because there are more cubes to consider.

To learn more about Dreaming Spires, visit the game's web page.

The Child Geeks had a hard time getting into this game. All the Child Geeks demonstrated a clear understanding of how to build their college, how to use the money in auctions, how to use their Scholars, and how to make use of the Event benefits. What they kept failing to grasp was the scoring. According to one Child Geek, "I don't understand why some players are getting points and some players aren't. It doesn't make any sense to me." Another Child Geek said, "I like everything about this game except when we score points." Not all the Child Geeks had difficulty grasping how points were earned, but the lack of a well-understood way to make points was causing a great deal of frustration. Not enough to persuade all the Child Geeks to dislike the game, however, which left Dreaming Spires with a mixed approval from the Child Geeks.

The Parent Geeks also were a bit confused about the scoring, but after a brief explanation and the conclusion of an Era, the Parent Geeks understood it perfectly. One Parent Geek said, "Oh! That's how the scoring goes. I was looking at it wrong. OK, now got it." Well, no, not really. It took a game or two until the Parent Geeks felt truly confident in their ability to understand how to score points. According to one Parent Geek, "Wow, this game has a lot of layers. I like that and I don't feel overwhelmed, but this game is riddled with subtle decisions and strategy." Not so much that the casual gamers felt overwhelmed, but the non-gamers remained confused and somewhat lost at times. When the eras came to an end, all the Parent Geeks voted to approve Dreaming Spires.

The Gamer Geeks were happy to give the game a try as it looks very appealing on the gaming table. They quickly understood the game's simple and straightforward rules and again, like the other geeks before them, came to a confused stop when it came time to score. Unlike the other groups, the Gamer Geeks were only confused for about a minute or two as I explained the scoring – again – and then they had it. According to one Gamer Geek, "I am really enjoying this game. It's easy to learn, easy to play, and has a lot of depth to it." Another Gamer Geek said, "At first I thought the game was a bit out of control with everything you can do, but I see now how each action fits and works together. Great game." And finally, another Gamer Geek wanted to say, "This game feels a bit fiddly, I think it could use some tightening up, but I'm enjoying myself. Not a perfect game, but it's a fun one." All the Gamer Geeks voted to approve Dreaming Spires.

I was fortunate enough to speak to Mr. Hogan, the game designer, about Dreaming Spires. I always find the origin stories of games to be interesting. According to Mr. Hogan, one of his inspirations was observing the grounds and history of the University of Oxford. Great and important thinkers walked the same paths and lectured under the same roofs where Mr. Hogan now contemplated his own lessons as a student. This, as you might have guessed, inspired Mr. Hogan in a profound way.
Once I started to realize how many extraordinary stories Oxford had to offer, it struck me that as a game designer, I had the perfect medium to share them. My aim was to create an accessible but strategic experience that included Oxford's famous people, events and colleges and while the systems in the game evolved during development, these three elements are the pillars that the game is built on. – Jeremy Hogan, Game Designer

That might very well be true, but I think the center pillar of this game is the scoreboard. Buying tiles, admitting Scholars, and purchasing Events is good fun, but why you bother to mess with these bits is unclear. That is, until you put the focus on the scoreboard. It's here that players will see how each Era influences the score that slowly creeps forward. Each Era is an opportunity to change the Era Score Box and shift the victory. That is downright awesome and terribly inconvenient if you don't understand how scoring works. Players must constantly diversify and strengthen their academic and college reputation or they risk losing ground. This makes every tile you place important, every Scholar you admit a possible advantage, and each Event a game changer. Simply excellent.

As for me, I very much enjoyed Dreaming Spires. It hit all the right buttons and played very well. I was frustrated at times when I didn't build my college well enough or I was outbid by an opponent for an Event, but not once did I leave a game grumpy. This is a fun game, a challenging game, and a frustrating game all wrapped into a nice package. It made me think, made me despair, and made me shout for joy. Can't ask for much more than that.

I will leave you with one last quote which the game designer found inspirational and serves as the origin of the game's name. I think the quote does an excellent job of summing up the game, too.

"It is well that there are palaces of peace, And discipline and dreaming and desire, Lest we forget our heritage and cease The Spirit's work – to hunger and aspire" – from Oxford, by CS Lewis

This game was given to Father Geek as a review copy. Father Geek was not paid, bribed, wined, dined, or threatened in vain hopes of influencing this review. Such is the statuesque and legendary integrity of Father Geek.
De Stijl (Dutch pronunciation: [də ˈstɛil]), Dutch for "The Style", also known as Neoplasticism, was a Dutch art movement founded in 1917 in Leiden. De Stijl consisted of artists and architects. In a more narrow sense, the term De Stijl is used to refer to a body of work from 1917 to 1931 founded in the Netherlands. Proponents of De Stijl advocated pure abstraction and universality by a reduction to the essentials of form and colour; they simplified visual compositions to vertical and horizontal, using only black, white and primary colors.

De Stijl is also the name of a journal that was published by the Dutch painter, designer, writer, and critic Theo van Doesburg that served to propagate the group's theories. Along with van Doesburg, the group's principal members were the painters Piet Mondrian, Vilmos Huszár, Bart van der Leck, and the architects Gerrit Rietveld, Robert van 't Hoff, and J. J. P. Oud. The artistic philosophy that formed a basis for the group's work is known as Neoplasticism—the new plastic art (or Nieuwe Beelding in Dutch). According to Theo van Doesburg in the introduction of the magazine De Stijl 1917 no.1, the "De Stijl"-movement was a reaction to the "Modern Baroque" of the Amsterdam School movement (Dutch expressionist architecture) with the magazine Wendingen (1918–1931).

Principles and influences

Mondrian sets forth the delimitations of Neoplasticism in his essay "Neo-Plasticism in Pictorial Art". He writes, "this new plastic idea will ignore the particulars of appearance, that is to say, natural form and colour. On the contrary, it should find its expression in the abstraction of form and colour, that is to say, in the straight line and the clearly defined primary colour". With these constraints, his art allows only primary colours and non-colours, only squares and rectangles, only straight and horizontal or vertical lines. The De Stijl movement posited the fundamental principle of the geometry of the straight line, the square, and the rectangle, combined with a strong asymmetricality; the predominant use of pure primary colors with black and white; and the relationship between positive and negative elements in an arrangement of non-objective forms and lines.

The name De Stijl is supposedly derived from Gottfried Semper's Der Stil in den technischen und tektonischen Künsten oder Praktische Ästhetik (1861–3), which Curl suggests was mistakenly believed to advocate materialism and functionalism. The "plastic vision" of De Stijl artists, also called Neo-Plasticism, saw itself as reaching beyond the changing appearance of natural things to bring an audience into intimate contact with an immutable core of reality, a reality that was not so much a visible fact as an underlying spiritual vision.

In general, De Stijl proposed ultimate simplicity and abstraction, both in architecture and painting, by using only straight horizontal and vertical lines and rectangular forms. Furthermore, their formal vocabulary was limited to the primary colours, red, yellow, and blue, and the three primary values, black, white, and grey. The works avoided symmetry and attained aesthetic balance by the use of opposition. This element of the movement embodies the second meaning of stijl: "a post, jamb or support"; this is best exemplified by the construction of crossing joints, most commonly seen in carpentry.
In many of the group's three-dimensional works, vertical and horizontal lines are positioned in layers or planes that do not intersect, thereby allowing each element to exist independently and unobstructed by other elements. This feature can be found in the Rietveld Schröder House and the Red and Blue Chair.

De Stijl was influenced by Cubist painting as well as by the mysticism and the ideas about "ideal" geometric forms (such as the "perfect straight line") in the neoplatonic philosophy of mathematician M. H. J. Schoenmaekers. The De Stijl movement was also influenced by Neopositivism. The works of De Stijl would influence the Bauhaus style and the international style of architecture as well as clothing and interior design. However, it did not follow the general guidelines of an "-ism" (e.g., Cubism, Futurism, Surrealism), nor did it adhere to the principles of art schools like the Bauhaus; it was a collective project, a joint enterprise.

In music, De Stijl was an influence only on the work of composer Jakob van Domselaer, a close friend of Mondrian. Between 1913 and 1916, he composed his Proeven van Stijlkunst ("Experiments in Artistic Style"), inspired mainly by Mondrian's paintings. This minimalistic—and, at the time, revolutionary—music defined "horizontal" and "vertical" musical elements and aimed at balancing those two principles. Van Domselaer was relatively unknown in his lifetime, and did not play a significant role within De Stijl.

From the flurry of new art movements that followed Impressionism's revolutionary new perception of painting, Cubism arose in the early 20th century as an important and influential new direction. In the Netherlands, too, there was interest in this "new art". However, because the Netherlands remained neutral in World War I, Dutch artists were not able to leave the country after 1914 and were thus effectively isolated from the international art world—and in particular, from Paris, which was its centre then. During that period, Theo van Doesburg started looking for other artists to set up a journal and start an art movement. Van Doesburg was also a writer, poet, and critic, who had been more successful writing about art than working as an independent artist. Quite adept at making new contacts due to his flamboyant personality and outgoing nature, he had many useful connections in the art world.

Founding of De Stijl

Around 1915, Van Doesburg started meeting the artists who would eventually become the founders of the journal. He first met Piet Mondrian at an exhibition in Stedelijk Museum Amsterdam. Mondrian, who had moved to Paris in 1912 (and there, changed his name from "Mondriaan"), had been visiting the Netherlands when war broke out. He could not return to Paris, and was staying in the artists' community of Laren, where he met Bart van der Leck and regularly saw M. H. J. Schoenmaekers. In 1915, Schoenmaekers published Het nieuwe wereldbeeld ("The New Image of the World"), followed in 1916 by Beginselen der beeldende wiskunde ("Principles of Plastic Mathematics"). These two publications would greatly influence Mondrian and other members of De Stijl.

Van Doesburg also knew J. J. P. Oud and the Hungarian artist Vilmos Huszár. In 1917 the cooperation of these artists, together with the poet Antony Kok, resulted in the founding of De Stijl. The young architect Gerrit Rietveld joined the group in 1918. At its height De Stijl had 100 members and the journal had a circulation of 300.
During those first few years, the group was still relatively homogeneous, although Van der Leck left in 1918 due to artistic differences of opinion. Manifestos were being published, signed by all members. The social and economic circumstances of the time formed an important source of inspiration for their theories, and their ideas about architecture were heavily influenced by Hendrik Petrus Berlage and Frank Lloyd Wright.

The name Nieuwe Beelding was a term first coined in 1917 by Mondrian, who wrote a series of twelve articles called De Nieuwe Beelding in de schilderkunst ("Neo-Plasticism in Painting") that were published in the journal De Stijl. In 1920 he published a book titled Le Néo-Plasticisme.

Around 1921, the group's character started to change. From the time of van Doesburg's association with Bauhaus, other influences started playing a role. These influences were mainly Malevich and Russian Constructivism, to which not all members agreed. In 1924 Mondrian broke with the group after van Doesburg proposed the theory of elementarism, suggesting that a diagonal line is more vital than horizontal and vertical ones. In addition, the De Stijl group acquired many new "members". Dadaist influences, such as I. K. Bonset's poetry and Aldo Camini's "antiphilosophy" generated controversy as well. Only after Van Doesburg's death was it revealed that Bonset and Camini were two of his pseudonyms.

After van Doesburg's death

Theo van Doesburg died in Davos, Switzerland, in 1931. His wife, Nelly, administered his estate. Because of van Doesburg's pivotal role within De Stijl, the group did not survive. Individual members remained in contact, but De Stijl could not exist without a strong central character. Thus, it may be wrong to think of De Stijl as a close-knit group of artists. The members knew each other, but most communication took place by letter. For example, Mondrian and Rietveld never met in person. Many, though not all, artists did stay true to the movement's basic ideas, even after 1931. Rietveld, for instance, continued designing furniture according to De Stijl principles, while Mondrian continued working in the style he had initiated around 1920. Van der Leck, on the other hand, went back to figurative compositions after his departure from the group.

Influence on architecture

The De Stijl influence on architecture remained considerable long after its inception; Mies van der Rohe was among the most important proponents of its ideas. Between 1923 and 1924, Rietveld designed the Rietveld Schröder House, the only building to have been created completely according to De Stijl principles. Examples of Stijl-influenced works by J.J.P. Oud can be found in Rotterdam (Café De Unie) and Hook of Holland. Other examples include the Eames House by Charles and Ray Eames, and the interior decoration for the Aubette dance hall in Strasbourg, designed by Sophie Taeuber-Arp, Jean Arp and van Doesburg.

Works by De Stijl members are scattered all over the world, but De Stijl-themed exhibitions are organised regularly. Museums with large De Stijl collections include the Gemeentemuseum in The Hague (which owns the world's most extensive, although not exclusively De Stijl-related, Mondrian collection) and Amsterdam's Stedelijk Museum, where many works by Rietveld and Van Doesburg are on display. The Centraal Museum of Utrecht has the largest Rietveld collection worldwide; it also owns the Rietveld Schröder House, Rietveld's adjacent "show house", and the Rietveld Schröder Archives.
- Ilya Bolotowsky (1907–1981), painter and sculptor
- Burgoyne Diller (1906–1965), painter
- Theo van Doesburg (1883–1931), painter, designer, and writer; co-founder of De Stijl movement; published De Stijl, 1917–1931
- Cornelis van Eesteren (1897–1981), architect
- Jean Gorin (1899–1981), painter, sculptor
- Robert van 't Hoff (1887–1979), architect
- Vilmos Huszár (1884–1960), painter
- Frederick John Kiesler (1890–1965), architect, theater designer, artist, sculptor
- Antony Kok (1882–1969), poet
- Bart van der Leck (1876–1958), painter
- Piet Mondrian (1872–1944), painter, co-founder of De Stijl
- Marlow Moss (1889–1958), painter
- J. J. P. Oud (1890–1963), architect
- Gerrit Rietveld (1888–1964), architect and designer
- Kurt Schwitters (1887–1948), painter, sculptor
- Georges Vantongerloo (1886–1965), sculptor
- Friedrich Vordemberge-Gildewart (1899–1962), painter
- Jan Wils (1891–1972), architect
<urn:uuid:e822ece5-8e67-4e2b-818c-1ae555ab0230>
CC-MAIN-2022-33
https://en.m.wikipedia.org/wiki/De_Stijl
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571210.98/warc/CC-MAIN-20220810191850-20220810221850-00499.warc.gz
en
0.871171
4,116
2.625
3
The Story of Kwashin Koji
By Lafcadio Hearn
From: Lafcadio Hearn, Japanese Miscellany, Little, Brown, and Company, Boston, MA, 1901, pp. 37–51. (Footnotes marked with an asterisk are from the original book; the others are by Masatoshi Iguchi.)

During the period of Tenshō,* there lived, in one of the northern districts of Kyōto, an old man whom the people called Kwashin Koji.* He wore a long white beard, and was always dressed like a Shinto priest; but he made his living by exhibiting Buddhist pictures and by preaching Buddhist doctrine. Every fine day he used to go to the grounds of the temple Gion, and there suspend to some tree a large kakémono on which were depicted the punishments of the various hells. This kakémono was so wonderfully painted that all things represented in it seemed to be real; and the old man would discourse to the people crowding to see it, and explain to them the Law of Cause and Effect, — pointing out with a Buddhist staff [nyoi], which he always carried, each detail of the different torments, and exhorting everybody to follow the teachings of the Buddha. Multitudes assembled to look at the picture and to hear the old man preach about it; and sometimes the mat which he spread before him, to receive contributions, was covered out of sight by the heaping of coins thrown upon it.

Oda Nobunaga was at that time ruler of Kyōto and of the surrounding provinces. One of his retainers, named Arakawa, during a visit to the temple of Gion, happened to see the picture being displayed there; and he afterwards talked about it at the palace. Nobunaga was interested by Arakawa's description, and sent orders to Kwashin Koji to come at once to the palace, and to bring the picture with him.

When Nobunaga saw the kakémono he was not able to conceal his surprise at the vividness of the work: the demons and the tortured spirits actually appeared to move before his eyes; and he heard voices crying out of the picture; and the blood there represented seemed to be really flowing, — so that he could not help putting out his finger to feel if the painting was wet. But the finger was not stained, — for the paper proved to be perfectly dry. More and more astonished, Nobunaga asked who had made the wonderful picture. Kwashin Koji answered that it had been painted by the famous Oguri Sōtan,* — after he had performed the rite of self-purification every day for a hundred days, and practised great austerities, and made earnest prayer for inspiration to the divine Kwannon of Kiyomidzu Temple.

Observing Nobunaga's evident desire to possess the kakémono, Arakawa then asked Kwashin Koji whether he would “offer it up,” as a gift to the great lord. But the old man boldly answered: — “This painting is the only object of value that I possess; and I am able to make a little money by showing it to the people. Were I now to present this picture to the lord, I should deprive myself of the only means which I have to make my living. However, if the lord be greatly desirous to possess it, let him pay me for it the sum of one hundred ryō of gold. With that amount of money I should be able to engage in some profitable business. Otherwise, I must refuse to give up the picture.”

Nobunaga did not seem to be pleased at this reply; and he remained silent. Arakawa presently whispered something in the ear of the lord, who nodded assent; and Kwashin Koji was then dismissed, with a small present of money. But when the old man left the palace, Arakawa secretly followed him, — hoping for a chance to get the picture by foul means.
The chance came; for Kwashin Koji happened to take a road leading directly to the heights beyond the town. When he reached a certain lonesome spot at the foot of the hills, where the road made a sudden turn, he was seized by Arakawa, who said to him: — “Why were you so greedy as to ask a hundred ryō of gold for that picture? Instead of a hundred ryo of gold, I am now going to give you one piece of iron three feet long.” Then Arakawa drew his sword, and killed the old man, and took the picture. The next day Arakawa presented the kakémono — still wrapped up as Kwashin Koji had wrapped it before leaving the palace — to Oda Nobunaga, who ordered it to be hung up forthwith. But, when it was unrolled, both Nobunaga and his retainer were astounded to find that there was no picture at all — nothing but a blank surface. Arakawa could not explain how the original painting had disappeared; and as he had been guilty — whether willingly or unwillingly — of deceiving his master, it was decided that he should be punished. Accordingly he was sentenced to remain in confinement for a considerable time. Scarcely had Arakawa completed his term of imprisonment, when news was brought to him that Kwashin Koji was exhibiting the famous picture in the grounds of Kitano Temple . Arakawa could hardly believe his ears; but the information inspired him with a vague hope that he might be able, in some way or other, to secure the kakémono, and thereby redeem his recent fault. So he quickly assembled some of his followers, and hurried to the temple; but when he reached it he was told that Kwashin Koji had gone away. Several days later, word was brought to Arakawa that Kwashin Koji was exhibiting the picture at Kiyomidzu Temple, and preaching about it to an immense crowd. Arakawa made all haste to Kiyomidzu; but he arrived there only in time to see the crowd disperse, — for Kwashin Koji had again disappeared. At last one day Arakawa unexpectedly caught sight of Kwashin Koji in a wine-shop, and there captured him. The old man only laughed goodhumoredly on finding himself seized, and said: — “I will go with you; but please wait until I drink a little wine.” To this request Arakawa made no objection; and Kwashin Koji thereupon drank, to the amazement of the bystanders, twelve bowls of wine. After drinking the twelfth he declared himself satisfied; and Arakawa ordered him to be bound with a rope, and taken to Nobunaga's residence. In the court of the palace Kwashin Koji was examined at once by the Chief Officer, and sternly reprimanded. Finally the Chief Officer said to him: — “It is evident that you have been deluding people by magical practices; and for this offence alone you deserve to be heavily punished. However, if you will now respectfully offer up that picture to the Lord Nobunaga, we shall this time overlook your fault. Otherwise we shall certainly inflict upon you a very severe punishment.” At this menace Kwashin Koji laughed in a bewildered way, and exclaimed: — “It is not I who have been guilty of deluding people. Then, turning to Arakawa, he cried out: — “You are the deceiver! You wanted to flatter the lord by giving him that picture; and you tried to kill me in order to steal it. Surely, if there be any such thing as crime, that was a crime! As luck would have it, you did not succeed in killing me, but if you had succeeded, as you wished, what would you have been able to plead in excuse for such an act? You stole the picture, at all events. The picture that I now have is only a copy. 
And after you stole the picture, you changed your mind about giving it to Lord Nobunaga; and you devised a plan to keep it for yourself. So you gave a blank kakémono to Lord Nobunaga; and, in order to conceal your secret act and purpose, you pretended that I had deceived you by substituting a blank kakémono for the real one. Where the real picture now is, I do not know. You probably do.”

At these words Arakawa became so angry that he rushed towards the prisoner, and would have struck him but for the interference of the guards. And this sudden outburst of anger caused the Chief Officer to suspect that Arakawa was not altogether innocent. He ordered Kwashin Koji to be taken to prison for the time being; and he then proceeded to question Arakawa closely. Now Arakawa was naturally slow of speech; and on this occasion, being greatly excited, he could scarcely speak at all; and he stammered, and contradicted himself, and betrayed every sign of guilt. Then the Chief Officer ordered that Arakawa should be beaten with a stick until he told the truth. But it was not possible for him even to seem to tell the truth. So he was beaten with a bamboo until his senses departed from him, and he lay as if dead.

Kwashin Koji was told in the prison about what had happened to Arakawa; and he laughed. But after a little while he said to the jailer: — “Listen! That fellow Arakawa really behaved like a rascal; and I purposely brought this punishment upon him, in order to correct his evil inclinations. But now please say to the Chief Officer that Arakawa must have been ignorant of the truth, and that I shall explain the whole matter satisfactorily.”

Then Kwashin Koji was again taken before the Chief Officer, to whom he made the following declaration: — “In any picture of real excellence there must be a ghost; and such a picture, having a will of its own, may refuse to be separated from the person who gave it life, or even from its rightful owner. There are many stories to prove that really great pictures have souls. It is well known that some sparrows, painted upon a sliding-screen [fusuma] by Hōgen Yenshin, once flew away, leaving blank the spaces which they had occupied upon the surface. Also it is well known that a horse, painted upon a certain kakémono, used to go out at night to eat grass. Now, in this present case, I believe the truth to be that, inasmuch as the Lord Nobunaga never became the rightful owner of my kakémono, the picture voluntarily vanished from the paper when it was unrolled in his presence. But if you will give me the price that I first asked, — one hundred ryō of gold, — I think that the painting will then reappear, of its own accord, upon the now blank paper. At all events, let us try! There is nothing to risk, — since, if the picture does not reappear, I shall at once return the money.”

On hearing of these strange assertions, Nobunaga ordered the hundred ryō to be paid, and came in person to observe the result. The kakémono was then unrolled before him; and, to the amazement of all present, the painting reappeared, with all its details. But the colours seemed to have faded a little; and the figures of the souls and the demons did not look really alive, as before. Perceiving this difference, the lord asked Kwashin Koji to explain the reason of it; and Kwashin Koji replied: — “The value of the painting, as you first saw it, was the value of a painting beyond all price. But the value of the painting, as you now see it, represents exactly what you paid for it, — one hundred ryō of gold. . . .
How could it be otherwise?” On hearing this answer, all present felt that it would be worse than useless to oppose the old man any further. He was immediately set at liberty; and Arakawa was also liberated, as he had more than expiated his fault by the punishment which he had undergone. Now Arakawa had a younger brother named Buichi, — also a retainer in the service of Nobunaga. Buichi was furiously angry because Arakawa had been beaten and imprisoned; and he resolved to kill Kwashin Koji. Kwashin Koji no sooner found himself again at liberty than he went straight to a wine-shop, and called for wine. Buichi rushed after him into the shop, struck him down, and cut off his head. Then, taking the hundred ryō that had been paid to the old man, Buichi wrapped up the head and the gold together in a cloth, and hurried home to show them to Arakawa. But when he unfastened the cloth he found, instead of the head, only an empty winegourd , and only a lump of filth instead of the gold. . . . And the bewilderment of the brothers was presently increased by the information that the headless body had disappeared from the wineshop, — none could say how or when. Nothing more was heard of Kwashin Koji until about a month later, when a drunken man was found one evening asleep in the gateway of Lord Nobunaga's palace, and snoring so loud that every snore sounded like the rumbling of distant thunder. A retainer discovered that the drunkard was Kwashin Koji. For this insolent offence, the old fellow was at once seized and thrown into the prison. But he did not awake; and in the prison he continued to sleep without interruption for ten days and ten nights, — all the while snoring so that the sound could be heard to a great distance. About this time, the Lord Nobunaga came to his death through the treachery of one of his captains, Akéchi Mitsuhide , who thereupon usurped rule. But Mitsuhidé's power endured only for a period of twelve days. Now when Mitsuhidé became master of Kyoto, he was told of the case of Kwashin Koji; and he ordered that the prisoner should be brought before him. Accordingly Kwashin Koji was summoned into the presence of the new lord; but Mitsuhidé spoke to him kindly, treated him as a guest, and commanded that a good dinner should be served to him. When the old man had eaten, Mitsuhidé said to him: — “I have heard that you are very fond of wine; — how much wine can you drink at a single sitting?” Kwashin Koji answered: — “I do not really know how much; I stop drinking only when I feel intoxication coming on.” Then the lord set a great wine-cup * before Kwashin Koji, and told a servant to fill the cup as often as the old man wished. And Kwashin Koji emptied the great cup ten times in succession, and asked for more; but the servant made answer that the wine-vessel was exhausted. All present were astounded by this drinking-feat; and the lord asked Kwashin Koji, “Are you not yet satisfied, Sir?” “Well, yes,” replied Kwashin Koji, “I am somewhat satisfied; — and now, in return for your august kindness, I shall display a little of my art. Be therefore so good as to observe that screen.” He pointed to a large eight-folding screen upon which were painted the Eight Beautiful Views of the Lake of Ōmi (Ōmi. Hakkei); and everybody looked at the screen. In one of the views the artist had represented, far away on the lake, a man rowing a boat, — the boat occupying, upon the surface of the screen, a space of less than an inch in length. 
Kwashin Koji then waved his hand in the direction of the boat; and all saw the boat suddenly turn, and begin to move toward the foreground of the picture. It grew rapidly larger and larger as it approached; and presently the features of the boatman became clearly distinguishable. Still the boat drew nearer, — always becoming larger, — until it appeared to be only a short distance away. And, all of a sudden, the water of the lake seemed to overflow, — out of the picture into the room; — and the room was flooded; — and the spectators girded up their robes in haste, as the water rose above their knees. In the same moment the boat appeared to glide out of the screen, — a real fishing-boat; — and the creaking of the single oar could be heard. Still the flood in the room continued to rise, until the spectators were standing up to their girdles in water. Then the boat came close up to Kwashin Koji; and Kwashin Koji climbed into it; and the boatman turned about, and began to row away very swiftly. And, as the boat receded, the water in the room began to lower rapidly, — seeming to ebb back into the screen. No sooner had the boat passed the apparent foreground of the picture than the room was dry again! But still the painted vessel appeared to glide over the painted water, — retreating further into the distance, and ever growing smaller, — till at last it dwindled to a dot in the offing. And then it disappeared altogether; and Kwashin Koji disappeared with it. He was never again seen in Japan.

* The period of Tenshō lasted from 1573 to 1591 (A.D.). The death of the great captain, Oda Nobunaga, who figures in this story, occurred in 1582.
* Related in the curious old book Yasō-Kidan. The source story was “Kwashin Koji (果心居士)”, written in Chinese, in Kousai Ishikawa (石川鴻斎), “Yaso Kidan” (夜窓鬼談, Night Window Demon Talk), Toyo-do, Tokyo, 1889 (Meiji 22, 明治22). Kwashin Koji was sixty-odd years old according to the source story.
Beard and whiskers in the source story.
Buddhist pictures: “a picture showing various phases of the Hell” would be more accurate.
temple Gion: “Gion Shrine”, in the present-day Gion-cho, Higashiyama-ku, Kyoto.
Kakémono = a hanging scroll.
the punishments of the various hells: “various punishments in the Hell” would be more accurate.
Nyoi = a ceremonial sceptre used by monks when giving sermons and holding memorial services.
Nobunaga Oda (1534–1582), a powerful lord from Owari (the present-day Aichi Prefecture). He won wars and constructed a great castle at Azuchi in 1576 (Tenshō 4). He was appointed the Udaijin (Upper Minister) in Kyoto. Among the retainers of Nobunaga was a certain Shimpachiro Arakawa, but he had died in a battle in 1574 (Tenshō 2); thus, the Arakawa in the story must be a fictional character (Robert Campbell). Cf. Ref 4.
* Oguri Sōtan was a great religious artist who flourished in the early part of the fifteenth century. He became a Buddhist priest in the later years of his life.
Kiyomidzu Temple: Otowasan Kiyomizu Temple, the main temple of the North Hossō sect of Mahayana Buddhism, located at Kiyomizu-cho, Higashiyama-ku, Kyoto.
Kitano Temple: not a temple but a shrine, called Kitano-Tenmangū, at Bakuro-cho, Kamigyō-ku, Kyoto, which enshrines Michizane Sugawara.
a wine-shop: “a pub (public house)” would be more accurate.
Wine: Japanese sake. “Wine” in the text below should be read as “sake”.
“In any picture of real excellence there must be a ghost; and such a picture, having a will of its own, may refuse to be separated from the person who gave it life, or even from its rightful owner.”: the following rendering may be closer to what is said in the original story: “There is a spirit in a masterpiece picture. If it was not owned by its rightful owner, the spirit would not remain.” There are many traditions that a masterpiece painting bears a spirit, as pointed out by Robert Campbell.
Hōgen Yenshin: “Hōgen Motonobu” is correct. Kanou Motonobu (1476–1559) was a painter, the second master of the Kanou School; Hōgen (later Kohōgen) was his honorific title. Whether he painted a group of sparrows is unknown. A painting entitled “Escaped Sparrows (Nuke suzume)”, painted by Nobumasa Kanou (1607–1658) on the sliding-screen in Chion-in Temple (Rinka-cho, Higashiyama-ku, Kyoto), is well known (Robert Campbell).
A horse painted in a corridor of the imperial palace went out of the picture every night and ate the bush clover painted on a door and the rice plants in the farmland (Kokin-chomon-shu (古今著聞集), Vol. 11; Robert Campbell). A horse painted by Tanyu Kanou (1602–1674) on a picture scroll in Ryuzenji Temple, Hamamatsu, went out at night to eat vegetables in nearby fields (Kiyoshi Mitarai, “Enshu nana-fushigi-no-hanashi (Enshu Seven Wonders)”, Enshu Densetsu Kenkyu Kyoukai, 1982; http://www.hamamatsu-books.jp/category/detail/4dfeb6b90ab5d.html). The date of the second tradition is obviously later than the time of Kwashin Koji or Nobunaga. Ref 16.
Winegourd (?): “a sake bottle” in the original story.
a lump of filth: “a lump of clay” in the original story.
Mitsuhide Akechi (1528?–1582), a lord and retainer of Nobunaga, given the fief of Yamashiro, east of Kyoto. As written in the text, he betrayed and attacked Nobunaga, who was sojourning at Honnouji Temple in Kyoto, forcing Nobunaga to commit suicide. The event took place on 21 June 1582. He was counterattacked by Hideyoshi Toyotomi, a retainer of Nobunaga, and killed eleven days later, on 2 July 1582.
* The term “bowl” would better indicate the kind of vessel to which the story-teller refers. Some of the so-called cups, used on festival occasions, were very large, shallow lacquered basins capable of holding considerably more than a quart. To empty one of the largest size at a draught was considered to be no small feat.
<urn:uuid:19955145-cb12-49ad-9110-e94bbb3a6799>
CC-MAIN-2022-33
http://maiguch.sakura.ne.jp/ALL-FILES/ENGLISH-PAGE/ESSAYS-ETC/html-lafcadio-hearn-kwashinkoji/kwashin_koji_by_lafcadio_hearn_20220228.html
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00699.warc.gz
en
0.978382
5,159
2.6875
3
- Research Article - Open Access The microbiome associated with equine periodontitis and oral health Veterinary Research volume 47, Article number: 49 (2016) Equine periodontal disease is a common and painful condition and its severe form, periodontitis, can lead to tooth loss. Its aetiopathogenesis remains poorly understood despite recent increased awareness of this disorder amongst the veterinary profession. Bacteria have been found to be causative agents of the disease in other species, but current understanding of their role in equine periodontitis is extremely limited. The aim of this study was to use high-throughput sequencing to identify the microbiome associated with equine periodontitis and oral health. Subgingival plaque samples from 24 horses with periodontitis and gingival swabs from 24 orally healthy horses were collected. DNA was extracted from samples, the V3–V4 region of the bacterial 16S rRNA gene amplified by PCR and amplicons sequenced using Illumina MiSeq. Data processing was conducted using USEARCH and QIIME. Diversity analyses were performed with PAST v3.02. Linear discriminant analysis effect size (LEfSe) was used to determine differences between the groups. In total, 1308 OTUs were identified and classified into 356 genera or higher taxa. Microbial profiles at health differed significantly from periodontitis, both in their composition (p < 0.0001, F = 12.24; PERMANOVA) and in microbial diversity (p < 0.001; Mann–Whitney test). Samples from healthy horses were less diverse (1.78, SD 0.74; Shannon diversity index) and were dominated by the genera Gemella and Actinobacillus, while the periodontitis group samples showed higher diversity (3.16, SD 0.98) and were dominated by the genera Prevotella and Veillonella. It is concluded that the microbiomes associated with equine oral health and periodontitis are distinct, with the latter displaying greater microbial diversity. Periodontal disease has long been recognised as a common and painful equine oral disorder and its substantial welfare impact was acknowledged at the start of the twentieth century being described as “the scourge of the horse’s mouth” [1, 2]. More recently, studies have shown the presence of periodontitis in up to 75% of horses [3, 4] with prevalence increasing with advancing age. A dental survey noted that classical (i.e. plaque-induced) periodontal disease was rare in horses, but periodontal disease induced by food impaction due to abnormal spacing between the cheek teeth was common . The condition is often associated with the presence of cheek teeth diastemata and can also be present secondary to other oral disorders such as supernumerary, displaced or rotated teeth . Dropping of feed (quidding) and difficulty eating are the main clinical signs , although these can be subtle and easily overlooked. More recent clinical studies have reinforced the importance of equine periodontitis, currently recognised as a common and very painful equine dental disease [6, 8]. Two forms of periodontal disease exist, namely gingivitis and periodontitis. Gingivitis is completely reversible and is recognised by the classic signs of bleeding, inflammation, redness and swelling of the gums. Periodontitis attacks the deeper structures that support the teeth, damaging the surrounding bone and periodontal ligament, resulting in tooth loss. Despite the importance of this condition there have been few recent studies into its aetiopathogenesis. 
Bacteria have been shown to be the causative agents in feline, canine and human periodontal disease and so it is highly likely they play a crucial role in the pathogenesis of the equine condition. Involvement of bacteria in equine periodontal disease was recently acknowledged [9, 10]. However, understanding of the equine oral microbiome is limited and merits further study and little is known about the role bacteria play in equine periodontitis . Studies in other species have estimated that around 50% of oral bacteria cannot be cultured by conventional approaches due to nutritional and fastidious growth requirements and thus the number and variety of bacterial species present in the oral microbiome has been greatly underestimated to date. It is now possible to determine almost the entire community of bacteria, both commensal and pathogenic, that inhabit the equine oral cavity, in both health and periodontitis using culture-independent methods. To date, the majority of approaches have used Sanger sequencing to determine bacterial 16S rRNA gene sequences. This approach allows detection not only of cultivable species but also of fastidious bacteria that may be uncultivable, and also of novel species that may be important in the pathogenesis of disease. This method has already been used to determine the bacterial species present in canine and ovine periodontal disease lesions. The aim of this study was to determine the microbial profiles associated with the healthy equine oral cavity and equine periodontitis using high-throughput sequencing of the bacterial 16S rRNA gene. This approach provides far greater depth, coverage, accuracy and sensitivity than that offered by Sanger sequencing in assessing the composition of complex microbial communities, uncovering microbial diversities that are orders of magnitude higher and with considerably less bias . Materials and methods Ethical approval was granted prior to the start of the study by the University of Glasgow School of Veterinary Medicine Ethics and Research Committee and by the University of Edinburgh Veterinary Ethical Review Committee. All horses involved in the study presented either to the Weipers Centre Equine Hospital, University of Glasgow or the Royal (Dick) School of Veterinary Studies, University of Edinburgh for routine dental examination, investigation of dental disease or had been humanely euthanatised for reasons unrelated to the oral cavity and sent for post-mortem examination. Following a thorough oral examination horses were categorised as either “orally healthy” or “periodontitis” and placed into two groups accordingly. The orally healthy group had no evidence of gingival inflammation, no periodontal pockets and no evidence of any other oral pathology. The “periodontitis” group had obvious gingival inflammation and periodontal pockets of over 5 mm in depth. No antimicrobial drugs had been given in the previous 8 weeks to any horse involved in the study. Once food debris was removed, an equine dental curette was used to collect subgingival plaque samples from a single periodontal pocket (depth greater than 5 mm) of 24 horses with clinical periodontitis and placed into 0.5 mL fastidious anaerobe broth (FAB). A swab of the gingival margin with sufficient pressure to also collect material from the gingival crevice on the buccal aspect of cheek teeth 307–308 (Modified Triadan Numbering System) was taken from 24 orally healthy horses using an Amies Transport Swab (VWR International, Lutterworth, UK). 
One periodontitis affected sample was lost for further sample processing, resulting in 23 samples from periodontitis cases and 24 samples from healthy horses being available for analysis. Post-mortem samples were collected within 1 hour of euthanasia. Sample processing and DNA extraction Supragingival and subgingival plaque samples were each vortex mixed for 30 s and Amies transport swabs were immersed in 0.5 mL FAB and mixed to remove bacteria. A crude DNA extract was prepared from each sample by digestion with proteinase K (100 µg/mL) at 60 °C for 60 min, followed by boiling for 10 min. Further DNA purification was conducted using a bead beating technique where 150 µL of each sample was mixed with 200 µL phenol saturated with Tris–HCl (pH 8.0), 250 µL glass beads (0.1 mm) suspended in TE buffer and 200 µL lysis buffer. Samples were then placed in a BioSpec Mini-Beadbeater for 2 min at 2100 oscillations/min and DNA extracted with the AGOWA mag Mini DNA Isolation Kit (AGOWA, Berlin, Germany). For each sample, the V3–V4 region (which gives optimal taxonomic coverage and taxonomic resolution) of the bacterial 16S rRNA gene was generated by PCR with primers 341F (CCTACGGGNGGCWGCAG) and 806R (GGACTACHVGGGTWTCTAAT). Primers contained Illumina adapters and a unique 8-nucleotide sample index sequence key . Amplicon libraries were pooled in equimolar amounts and purified using the Illustra™ GFXTM PCR DNA and Gel Band Purification Kit (GE Healthcare, Eindhoven, The Netherlands). Amplicon quality and size was analysed on an Agilent 2100 Bioanalyzer (Santa Clara, CA, USA). Paired-end sequencing of amplicons was conducted on the Illumina MiSeq platform using the v3 kit generating 2 × 301 nucleotide reads (Illumina, San Diego, USA). Analysis of sequencing data Sequencing reads were merged , processed and clustered with USEARCH version 8.0.1623 . After merging (minimum and maximum merged length, 380 and 438, respectively), the sequences were quality filtered (max. expected error rate 0.002, no ambiguous bases allowed) and clustered into operational taxonomic units (OTUs) using the following settings: -uparse_maxdball 1500, only de novo chimera checking, usearch_global with -maxaccepts 8 -maxrejects 64 -maxhits 1. QIIME version 1.8.0 was used to select the most abundant sequence of each OTU and assigned a taxonomy using the RDP classifier with a minimum confidence of 0.8 and the 97% representative sequence set based on the SILVA rRNA database, release 119 for QIIME . Attributes such as oxygen utilisation, Gram stain and shape were assigned at genus level as previously described . In order to normalise the sequencing depth, the dataset was randomly sub-sampled to 16 000 reads per sample. Diversity analysis (Shannon Diversity Index, Chao-1 estimate of total species richness), data ordination by principal component analysis (PCA) and assessment of differences between microbial profiles of the two groups by one-way PERMANOVA were performed using PAleontological STatistics (PAST; v3.02) software . PERMANOVA was used with Bray–Curtis similarity distance. For PCA, the OTU dataset was additionally normalized by log2-transformation. The difference in diversity of the genera detected in both health and disease was compared and analysed statistically using the Mann–Whitney U test in SPSS (version 21.0). To determine which OTUs and taxa contribute to differences between the groups, linear discriminant analysis effect size (LEfSe) was used. 
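To make the diversity calculations concrete, the short Python sketch below shows how a per-sample OTU count vector can be rarefied to a fixed depth of 16 000 reads and summarised with the Shannon diversity index and the Chao-1 richness estimate. This is not the pipeline used in the study (which relied on USEARCH, QIIME and PAST); it is a minimal, self-contained illustration under those assumptions, and the simulated samples and variable names are hypothetical.

```python
# Minimal sketch (not the study's USEARCH/QIIME/PAST pipeline): rarefy an OTU
# count vector to a fixed depth, then compute Shannon diversity and Chao-1.
# The toy "samples" below are simulated and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def rarefy(counts, depth=16000):
    """Subsample reads without replacement down to a fixed sequencing depth."""
    counts = np.asarray(counts, dtype=int)
    if counts.sum() < depth:
        raise ValueError("sample has fewer reads than the rarefaction depth")
    reads = np.repeat(np.arange(counts.size), counts)    # one entry per read
    picked = rng.choice(reads, size=depth, replace=False)
    return np.bincount(picked, minlength=counts.size)    # re-tally per OTU

def shannon(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i) over observed OTUs."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def chao1(counts):
    """Chao-1 richness estimate: S_obs + F1^2 / (2 * F2)."""
    counts = np.asarray(counts, dtype=int)
    s_obs = int((counts > 0).sum())
    f1 = int((counts == 1).sum())    # singleton OTUs
    f2 = int((counts == 2).sum())    # doubleton OTUs
    if f2 == 0:                      # bias-corrected form avoids division by zero
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 ** 2 / (2.0 * f2)

# Two simulated samples: a sparser, low-diversity profile and a more even,
# higher-diversity profile (loosely mimicking the health/periodontitis contrast).
low_div = rng.multinomial(30000, rng.dirichlet(np.full(150, 0.2)))
high_div = rng.multinomial(30000, rng.dirichlet(np.full(300, 0.8)))
for name, sample in [("low-diversity", low_div), ("high-diversity", high_div)]:
    r = rarefy(sample)
    print(f"{name}: OTUs={int((r > 0).sum())}, "
          f"Shannon={shannon(r):.2f}, Chao-1={chao1(r):.1f}")
```

Per-sample values produced this way can then be compared between groups with a non-parametric test such as the Mann–Whitney U test (for example, scipy.stats.mannwhitneyu in Python), mirroring the Mann–Whitney comparisons reported in this study.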
The majority (16 of 24; 66.7%) of the periodontitis samples originated from the Royal (Dick) School of Veterinary Studies, University of Edinburgh, three (12.5%) originated from the Weipers Centre Equine Hospital, University of Glasgow and five (20.8%) were post-mortem samples. The mean age of sampled horses with periodontitis was 13.2 years (range 3–27 years); 13 (54%) of these horses were mares and 11 (46%) were geldings. Of the 24 orally healthy horses sampled, 20 (83.3%) were collected at the Weipers Centre Equine Hospital, University of Glasgow, two (8.3%) at the Royal (Dick) School of Veterinary Studies, University of Edinburgh and two (8.3%) were post-mortem samples. The average age of this group was 11.7 years (range 4–27 years); 16 (66.7%) of horses were geldings and eight (33.3%) were mares. Of all mares included in the study, 52% had periodontitis and 40% of all geldings had periodontitis. There was however no statistically significant difference between healthy and periodontitis affected horses by gender (p = 0.383; Chi square test) or by age (p = 0.242; Mann–Whitney test). A diverse range of breeds were sampled, although 19 of 48 (39.6%) were native ponies: Welsh Cob (n = 6), Welsh Pony (n = 4), Dartmoor Pony (n = 1), Shetland Pony (n = 2), Connemara Pony (n = 2), Exmoor Pony (n = 2), Highland Pony (n = 1), Fell Pony (n = 1). Eleven of 48 horses (22.9%) were Cobs or Cob crossbreeds and four horses (8.3%) were Thoroughbred (TB) or TB crossbreeds. Icelandic horses accounted for three (6.3%) of the samples. The remaining 11 (22.9%) horses were of a variety of breeds: Arabian (n = 3), Irish Sports Horse (n = 3), Gelderlander (n = 1), Trakehner (n = 1), Warmblood (n = 2), Irish Draft (n = 1). No significant difference was observed between breed and the presence of periodontitis. Sequencing generated a total of 4 170 177 reads. After quality processing the OTU table contained 1 342 927 reads that were clustered in 1334 OTUs. The number of reads per sample ranged from 16 272 to 49 685 (median 27 855, mean 28 573, SD 7943). After subsampling at equal depth of 16 000 reads/sample, 1308 OTUs remained in the dataset that was used for the further analyses. Microbial profile analyses Principal component analysis revealed clear differences between the equine oral microbiomes in oral health and periodontitis (Figure 1). Healthy samples clustered together and showed lower variability compared to periodontitis samples. The difference between microbial profiles of the two groups was statistically significant (p < 0.0001, F = 12.24, PERMANOVA). Microbial profiles from healthy horses were statistically significantly less diverse (p < 0.001, Mann–Whitney test), both by actual species richness (number of OTUs) (Figure 2A) as well as by estimated species richness or Chao-1 (Figure 2B) and Shannon Diversity Index (Figure 2C). On average, samples from healthy horses harboured 161 OTUs (SD 116, range 64–568), while samples from periodontitis affected horses contained 252 OTUs (SD 81, range 85–380). Compositional differences between the groups Linear discriminant analysis (LDA) effect size (LEfSe) was used to assess the differences between the two groups of samples both at the OTU level and at the genus or higher taxonomic level. From all 1308 OTUs, 266 OTUs were statistically significantly different between the healthy and periodontitis groups (p < 0.05, LDA > 2). Of these, 64 OTUs had an absolute LDA score above 3 (Additional file 1), the majority of which (51 of 64 OTUs) were associated with disease. 
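As a rough illustration of this kind of per-taxon screening, the sketch below applies a Mann–Whitney U test to each taxon's relative abundances and ranks the significant taxa by a simple log10 fold-difference between groups. It is a simplified stand-in for intuition only, not the LEfSe algorithm itself (which combines Kruskal–Wallis and Wilcoxon testing with a linear discriminant analysis effect size); the input matrix, group labels, threshold and pseudocount are hypothetical.

```python
# Simplified stand-in for LEfSe-style screening (illustration only, not the
# LEfSe tool): test each taxon with a Mann-Whitney U test and rank significant
# taxa by the difference in mean log10 relative abundance between groups.
import numpy as np
from scipy.stats import mannwhitneyu

def screen_taxa(rel_abund, labels, alpha=0.05, pseudo=1e-6):
    """rel_abund: samples x taxa matrix of relative abundances (rows sum to 1).
    labels: per-sample group label, 'healthy' or 'periodontitis'."""
    rel_abund = np.asarray(rel_abund, dtype=float)
    labels = np.asarray(labels)
    healthy = rel_abund[labels == "healthy"]
    diseased = rel_abund[labels == "periodontitis"]
    hits = []
    for j in range(rel_abund.shape[1]):
        if not rel_abund[:, j].any():        # skip taxa absent from all samples
            continue
        _, p = mannwhitneyu(healthy[:, j], diseased[:, j], alternative="two-sided")
        # positive effect -> enriched in periodontitis; negative -> in health
        effect = (np.log10(diseased[:, j].mean() + pseudo)
                  - np.log10(healthy[:, j].mean() + pseudo))
        if p < alpha:
            hits.append((j, p, effect))
    # largest absolute effects first, loosely analogous to ranking by LDA score
    return sorted(hits, key=lambda h: abs(h[2]), reverse=True)
```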
At the genus level, from 356 genera or higher taxa, 107 taxa were statistically significantly different between the two groups (p < 0.05). Of these, 69 taxa had LDA scores above 3 and, again, the majority (52 of 69 taxa) were associated with disease (Figure 3). The most discriminative genera between health and disease were Gemella and Actinobacillus in health and Prevotella and Veillonella in periodontitis, respectively (Figure 4). From 179 entries at the family level, 51 were significantly different between health and disease (p < 0.05) (Figure 5). The majority (N = 38) of these were associated with disease, while only 13 microbial families were positively associated with health (Additional file 2). Interestingly, periodontitis samples had significantly higher relative abundance of Methanobacteriaceae (p = 0.0001) and Thermoplasmatales (p < 0.0001) (both families belong to the domain Archaea). With regard to inferred Gram stain and shape, strongly significant differences were observed between healthy and diseased samples (**p < 0.0001, *p < 0.05, Mann–Whitney test; data not shown). Despite the difficulty in permanently resolving equine periodontitis, its high prevalence and substantial effect on welfare, few original research studies on its aetiopathogenesis have been published. In humans, the disease is known to be multifactorial and although bacteria play a major role in the aetiopathogenesis of periodontitis in other species, their role in equine periodontitis has only recently received investigation . Few studies have investigated the oral microbiome of the horse in oral health or disease. Recently, the microbiome of the equine gingival sulcus was investigated by pyrosequencing pooled samples from 200 sulcus sites in two orally healthy horses . Twelve phyla were identified, the most prevalent being Gammaproteobacteria (28.8%), Firmicutes (27.6%) and Bacteroidetes (25.1%). The study suggested that there are many similarities between the equine subgingival microbiota and the subgingival microbiota detected in human, feline and canine studies. Putative periodontal pathogens such as Treponema, Tannerella and Porphyromonas species were detected at low levels in these samples. In addition, many bacteria identified were not closely related to other known bacteria and the authors suggested these may represent “equine-specific” taxa. As few previous studies have been performed investigating the equine oral microbiome, it is highly likely that novel, previously undetected bacteria will be identified when using modern, culture-independent techniques. The current study was the first to use high-throughput 16S rRNA gene sequencing to compare the bacterial populations present in equine oral health and periodontitis and revealed a statistically significant dissimilarity between the bacterial populations found in equine oral health and in equine periodontitis lesions and represents a considerable advance on what has previously been documented for the oral microbial community in both healthy and diseased horses. In the current study, 60% of horses aged 10 years or above were affected by periodontitis and of all diseased horses, 70% were 10 years or older. Mares were found to be slightly more likely to have periodontitis than geldings (52% of mares compared to 40% of geldings), although this difference was not statistically significant. Due to the large variety of breeds sampled and the relatively small sample numbers, no particular breed disposition to disease could be identified. 
Further larger scale studies may be useful to examine links between equine periodontitis and age, sex and breed. In this cross-sectional study it is impossible to equate the results with disease aetiology and pathogenesis. A potential limitation of this study is that samples were collected from both live and dead horses and that this could add further variability to the results. However, all samples were collected within 1 hour of death (usually much quicker) and, since DNA from live and dead bacteria was detected rather than live cells per se, it is very unlikely that any changes in the microbiomes would be attributable to death of the horses. In any case, individual healthy oral samples (whether from live or dead horses) demonstrated noticeable variation in the composition of their microbiomes but were more similar to each other than to those from horses with periodontitis, and vice versa. Longitudinal studies starting with young healthy horses, and follow-up on their periodontal status and microbiota of the oral cavity until development of periodontal disease would be required. The periodontal pocket found in diseased horses constitutes a new niche in an oral ecosystem that will select for a different microbiome and this may explain the significant increase in microbiome diversity noted in the periodontitis cases in comparison with the orally healthy horses. Increased microbiome diversity has also been noted in samples taken from human periodontitis patients in comparison to orally healthy controls [25, 26]. Environmental differences present between the healthy equine gingival sulcus and diseased periodontal pockets may be particularly striking in the horse, as equine dental anatomy allows for formation of particularly deep periodontal pockets which may measure over 15 mm in severe cases . It is possible that during disease progression, the environmental changes occurring as a shallow gingival sulcus becomes a deep periodontal pocket allows a new group of bacteria to flourish whilst providing a less optimal environment for the growth of others. In the current study, significant differences were seen in both the expected shape and Gram staining characteristics of bacteria detected in oral health and periodontal pockets, with Gram negative rods, spirochetes and mycoplasma more evident in periodontitis. Spirochetes have long been associated with human periodontitis and more recently spirochetes were detected within the epithelium of equine periodontal pockets . Treponema denticola is well recognised as a periodontal pathogen in man, acting as one of the three “red complex” bacteria found in severe periodontitis lesions alongside Porphyromonas gingivalis and Tannerella forsythia . In another study, DNA corresponding to Treponema species was detected in 78.2% of horses with clinically overt equine odontoclastic tooth resorption hypercementosis (EOTRH) compared to 38% of unaffected horses and Tannerella DNA was found in 38.4% of diseased horses compared to 19% of unaffected horses . In the current study, abundance of both the Tannerella and Treponema genera was significantly increased in periodontitis. The most discriminative genera between health and disease were the genera Gemella and Actinobacillus in health and Prevotella and Veillonella in periodontitis, respectively. In equine periodontitis, the abundance of bacteria belonging to the Prevotella and Veillonella genera was significantly increased in comparison to oral health. 
Several species of Prevotella have been shown to be involved in human periodontitis, such as Prevotella intermedia and Prevotella melaninogenica . Several species of Veillonella have been isolated from both healthy gingival sulci and diseased periodontal pockets in man. However, Veillonella parvula has been significantly associated with chronic periodontitis . Interestingly, Prevotella intermedia and Prevotella nigrescens have been shown to stimulate cytokine production by activation of Toll-like receptor 2 and Veillonella parvula has been shown to stimulate cytokine production by activation of both Toll-like receptor 2 and Toll-like receptor 4 . This is of potential importance as the production of a destructive inflammatory response in periodontal tissue by stimulation of the innate immune system by periodontopathogenic bacteria is thought to be central in disease pathogenesis in man . In equine oral health, significantly higher relative abundances of the genera Gemella (p < 0.0001) and Actinobacillus were noted in comparison to periodontitis, indicating that these genera comprise part of the normal oral flora of the horse. Bacteria belonging to the Gemella genus have been found to constitute high proportions of the microbiota of the dorsal surface of the human tongue . In addition, Actinobacillus equi has been frequently isolated from the oral cavity of healthy horses [34, 35]. Given that no previous studies have characterised the equine oral microbiome in such detail, it is highly likely that many novel or previously uncharacterised bacteria are present in both oral health and periodontitis and additional studies would be required to further determine the composition of the equine oral microbiome. In conclusion, the two cohorts of horses examined harboured highly distinct microbial profiles, with samples from periodontally affected horses being more diverse than samples from the healthy horses. Further, preferably longitudinal, studies are required to determine which bacteria are actively involved in the pathogenesis of disease. fastidious anaerobe broth linear discriminant analysis linear discriminant analysis effect size operational taxonomic unit principal component analysis Colyer JF (1906) Variations and diseases of the teeth of horses. Trans Odontol Soc GB 38:42–74 Little WL (1913) Periodontal disease in the horse. J Comp Pathol Therap 26:240–249 Baker GJ (1970) Some aspects of equine dental disease. Equine Vet J 2:105–110 Ireland JL, McGowan CM, Clegg PD, Chandler KJ, Pinchbeck GL (2012) A survey of health care and disease in geriatric horses aged 30 years or older. Vet J 192:57–64 Dixon PM, Tremaine WH, Pickles K, Kuhns L, Hawe C, McCann J, McGorum BC, Railton DI, Brammer S (1999) Equine dental disease part 2: a long-term study of 400 cases: disorders of development and eruption and variations in position of the cheek teeth. Equine Vet J 31:519–528 Dixon PM, Ceen S, Barnett T, O’Leary JM, Parkin TD, Barakzai S (2014) A long-term study on the clinical effects of mechanical widening of cheek teeth diastemata for treatment of periodontitis in 202 horses (2008–2011). Equine Vet J 46:76–80 Casey MB, Tremaine WH (2010) Dental diastemata and periodontal disease secondary to axially rotated maxillary cheek teeth in three horses. Equine Vet Educ 22:439–444 Dixon PM, Barakzai S, Collins N, Yates J (2008) Treatment of equine cheek teeth by mechanical widening of diastemata in 60 horses (2000–2006). 
Equine Vet J 40:22–28 Cox A, Dixon P, Smith S (2012) Histopathological lesions associated with equine periodontal disease. Vet J 194:386–391 Sykora S, Pieber K, Simhofer H, Hackl V, Brodesser D, Brandt S (2014) Isolation of Treponema and Tannerella spp. from equine odontoclastic tooth resorption and hypercementosis related periodontal disease. Equine Vet J 46:358–363 Socransky SS, Gibbons RJ, Dale AC, Bortnick L, Rosenthal E, Macdonald JB (1963) The microbiota of the gingival crevice of man. I. Total microscopic and viable counts and counts of specific organisms. Arch Oral Biol 8:275–280 Riggio MP, Lennon A, Taylor DJ, Bennett D (2011) Molecular identification of bacteria associated with canine periodontal disease. Vet Microbiol 150:394–400 Riggio MP, Jonsson N, Bennett D (2013) Culture-independent identification of bacteria associated with ovine ‘broken mouth’ periodontitis. Vet Microbiol 166:664–669 Song S, Jarvie T, Hattori M (2013) Our second genome–human metagenome: how next-generation sequencer changes our life through microbiology. Adv Microb Physiol 62:119–144 Kozich JJ, Westcott SL, Baxter NT, Highlander SK, Schloss PD (2013) Development of a dual-index sequencing strategy and curation pipeline for analyzing amplicon sequence data on the MiSeq Illumina sequencing platform. Appl Environ Microbiol 79:5112–5120 Edgar RC, Flyvbjerg H (2015) Error filtering, pair assembly and error correction for next-generation sequencing reads. Bioinformatics 31:3476–3482 Edgar RC (2013) UPARSE: highly accurate OTU sequences from microbial amplicon reads. Nat Methods 10:996–998 Caporaso JG, Kuczynski J, Stombaugh J, Bittinger K, Bushman FD, Costello EK, Fierer N, Peña AG, Goodrich JK, Gordon JI, Huttley GA, Kelley ST, Knights D, Koenig JE, Ley RE, Lozupone CA, McDonald D, Muegge BD, Pirrung M, Reeder J, Sevinsky JR, Turnbaugh PJ, Walters WA, Widmann J, Yatsunenko T, Zaneveld J, Knight R (2010) QIIME allows analysis of high-throughput community sequencing data. Nat Methods 7:335–336 Cole JR, Wang Q, Cardenas E, Fish J, Chai B, Farris RJ, Kulam-Syed-Mohideen AS, McGarrell DM, Marsh T, Garrity GM, Tiedje JM (2009) The Ribosomal Database Project: improved alignments and new tools for rRNA analysis. Nucleic Acids Res 37:D141–D145 Quast C, Pruesse E, Yilmaz P, Gerken J, Schweer T, Yarza P, Peplies J, Glöckner FO (2013) The SILVA ribosomal RNA gene database project: improved data processing and web-based tools. Nucleic Acids Res 41:D590–D596 Whitman WB, Goodfellow M, Kämpfer P, Busse H-J, Trujillo ME, Ludwig W, Suzuki K-I, Parte A (eds) (2012) Bergey’s manual of systematic bacteriology, parts A and B, vol 5, 2nd edn. Springer-Verlag, New York Hammer Ø, Harper DAT, Ryan PD (2001) PAST: paleontological statistics software package for education and data analysis. Palaeont Electronica 4:9 Segata N, Izard J, Waldron L, Gevers D, Miropolsky L, Garrett W, Huttenhower C (2011) Metagenomic biomarker discovery and explanation. Genome Biol 12:R60 Gao W, Chan Y, You M, Lacap-Bugler DC, Leung WK, Watt RM (2015) In-depth snapshot of the equine subgingival microbiome. Microb Pathog pii:S0882–4010(15)00174–6. doi:10.1016/j.micpath.2015.11.002 Paster BJ, Olsen I, Aas JA, Dewhirst FE (2006) The breadth of bacterial diversity in the human periodontal pocket and other oral sites. Periodontol 2000 42:80–87 Abusleme L, Dupuy AK, Dutzan N, Silva N, Burleson JA, Strausbaugh LD, Gamonal J, Diaz PI (2013) The subgingival microbiome in health and periodontitis and its relationship with community biomass and inflammation. 
ISME J 7:1016–1025 Listgarten MA, Helldén L (1978) Relative distribution of bacteria at clinically healthy and periodontally diseased sites in humans. J Clin Periodontol 5:115–132 Holt SC, Ebersole JL (2005) Porphyromonas gingivalis, Treponema denticola, and Tannerella forsythia: the ‘red complex’, a prototype polybacterial pathogenic consortium in periodontitis. Periodontol 2000 38:72–122 Haffajee AD, Socransky SS (1994) Microbial etiological agents of destructive periodontal diseases. Periodontol 2000 5:78–111 Mashima I, Fujita M, Nakatsuka Y, Kado T, Furuichi Y, Herastuti S, Nakazawa F (2015) The distribution and frequency of oral Veillonella spp. associated with chronic periodontitis. Int J Curr Microbiol App Sci 4:150–160 Kikkert R, Laine ML, Aarden LA, van Winkelhoff AJ (2007) Activation of toll-like receptors 2 and 4 by gram-negative periodontal bacteria. Oral Microbiol Immunol 22:145–151 Graves DT, Cochran D (2003) The contribution of interleukin-1 and tumor necrosis factor to periodontal tissue destruction. J Periodontol 74:391–401 Mager DL, Ximenez-Fyvie LA, Haffajee AD, Socransky SS (2003) Distribution of selected bacterial species on intraoral surfaces. J Clin Periodontol 30:644–654 Sternberg S (1998) Isolation of Actinobacillus equuli from the oral cavity of healthy horses and comparison of isolates by restriction enzyme digestion and pulsed-field gel electrophoresis. Vet Microbiol 59:147–156 Bisgaard M, Piechulla K, Ying Y-T, Frederiksen W, Mannheim W (2009) Prevalence of organisms described as Actinobacillus suis or haemolytic Actinobacillus equuli in the oral cavity of horses. Comparative investigations of strains obtained and porcine strains of A. suis sensu stricto. Acta Pathol Microbiol Immunol Scand B 92:291–298 The authors declare that they have no competing interests. RK conducted the experiments and was involved in preparation of the manuscript. DFL participated in study design, analysis of data and preparation of the manuscript. PMD collected and provided clinical specimens for the study and was involved in preparation of the manuscript. MJB conducted the high-throughput sequencing. EZ carried out bioinformatics analysis and was involved in preparing the manuscript. WC was involved in high-throughput sequencing and manuscript preparation. LEO assisted with interpretation of data. DB was involved in study design and preparation of the manuscript. BWB carried out bioinformatics analysis and was involved in preparing the manuscript. MPR conceived the study, participated in its design and was involved in manuscript preparation. All authors read and approved the final manuscript. The authors thank the Horserace Betting Levy Board for their generous financial support through a Veterinary Research Training Scholarship (VET/RS/249) for RK, and staff at the Royal (Dick) School of Veterinary Studies, University of Edinburgh and at the School of Veterinary Medicine, University of Glasgow for their assistance with sample collection. LEO was funded by a BBSRC Case studentship (BB/K501013/1). About this article Cite this article Kennedy, R., Lappin, D.F., Dixon, P.M. et al. The microbiome associated with equine periodontitis and oral health. Vet Res 47, 49 (2016). https://doi.org/10.1186/s13567-016-0333-1 - Oral Health - Periodontal Pocket - Cheek Tooth - Microbial Profile
<urn:uuid:df9ecbc3-c7d1-42c6-b323-3a94988e1553>
CC-MAIN-2022-33
https://veterinaryresearch.biomedcentral.com/articles/10.1186/s13567-016-0333-1
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573104.24/warc/CC-MAIN-20220817183340-20220817213340-00500.warc.gz
en
0.917632
7,606
2.5625
3
The name "chlorhexidine" breaks down as chlor(o) + hex(ane) + id(e) + (am)ine. Chlorhexidine is used in disinfectants (disinfection of the skin and hands), cosmetics (additive to creams, toothpaste, deodorants, and antiperspirants), and pharmaceutical products (preservative in eye drops, active substance in wound dressings and antiseptic mouthwashes). A 2019 Cochrane review concluded that based on very low certainty evidence in those who are critically ill "it is not clear whether bathing with chlorhexidine reduces hospital‐acquired infections, mortality, or length of stay in the ICU, or whether the use of chlorhexidine results in more skin reactions." CHG is active against Gram-positive and Gram-negative organisms, facultative anaerobes, aerobes, and yeasts. It is particularly effective against Gram-positive bacteria (in concentrations ≥ 1 μg/l). Significantly higher concentrations (10 to more than 73 μg/ml) are required for Gram-negative bacteria and fungi. Chlorhexidine is ineffective against polioviruses and adenoviruses. The effectiveness against herpes viruses has not yet been established unequivocally. There is strong evidence that chlorhexidine is more effective than povidone-iodine for clean surgery. Evidence shows that it is the most effective antiseptic for upper limb surgery, and there is no data to suggest that alcoholic chlorhexidine increases the risk of tourniquet-related burns, ignition fires or allergic episodes during surgery. Chlorhexidine, like other cation-active compounds, remains on the skin. It is frequently combined with alcohols (ethanol and isopropyl alcohol). Use of a CHG-based mouthwash in combination with normal tooth care can help reduce the build-up of plaque and improve mild gingivitis. There is not enough evidence to determine the effect in moderate to severe gingivitis. About 20 mL twice a day of concentrations of 0.1% to 0.2% is recommended for mouth-rinse solutions with a duration of at least 30 seconds. Such mouthwash also has a number of adverse effects including damage to the mouth lining, tooth discoloration, tartar build-up, and impaired taste. Extrinsic tooth staining occurs when chlorhexidine rinse has been used for 4 weeks or longer. Mouthwashes containing chlorhexidine which stain teeth less than the classic solution have been developed, many of which contain chelatedzinc. Using chlorhexidine as a supplement to everyday mechanical oral hygiene procedures for 4 to 6 weeks and 6 months leads to a moderate reduction in gingivitis compared to placebo, control or mechanical oral hygiene alone. Chlorhexidine is a cation which interacts with anionic components of toothpaste, such as sodium lauryl sulfate and sodium monofluorophosphate, and forms salts of low solubility and antibacterial activity. Hence, to enhance the antiplaque effect of chlorhexidine, "it seems best that the interval between toothbrushing and rinsing with CHX [chlorhexidine] be more than 30 minutes, cautiously close to 2 hours after brushing". Chlorhexidine gluconate is used as a skin cleanser for surgical scrubs, as a cleanser for skin wounds, for preoperative skin preparation, and for germicidal hand rinses. Chlorhexidine eye drops have been used as a treatment for eyes affected by Acanthamoeba keratitis. Chlorhexidine is very effective for poor countries like Nepal and its use is growing in the world for treating the umbilical cord. 
A 2015 Cochrane review has yielded high-quality evidence that, in the community setting, chlorhexidine skin or cord care can reduce the incidence of omphalitis (inflammation of the umbilical cord) by 50% and neonatal mortality by 12%. CHG is ototoxic; if put into an ear canal with a ruptured eardrum, it can lead to deafness. CHG does not meet current European specifications for a hand disinfectant: under the test conditions of the European Standard EN 1499, no significant difference in efficacy was found between a 4% solution of chlorhexidine digluconate and soap. In the U.S., between 2007 and 2009, Hunter Holmes McGuire Veterans Administration Medical Center conducted a cluster-randomized trial and concluded that daily bathing of patients in intensive care units with washcloths saturated with chlorhexidine gluconate reduced the risk of hospital-acquired infections. Whether prolonged exposure over many years may have carcinogenic potential is still not clear. The US Food and Drug Administration recommendation is to limit the use of a chlorhexidine gluconate mouthwash to a maximum of six months. When ingested, CHG is poorly absorbed in the gastrointestinal tract and can cause stomach irritation or nausea. If aspirated into the lungs at high enough concentration, as reported in one case, it can be fatal due to the high risk of acute respiratory distress syndrome.
Mechanism of action
At physiologic pH, chlorhexidine salts dissociate and release the positively charged chlorhexidine cation. The bactericidal effect is a result of the binding of this cationic molecule to negatively charged bacterial cell walls. At low concentrations of chlorhexidine, this results in a bacteriostatic effect; at high concentrations, membrane disruption results in cell death. It is a cationic polybiguanide (bisbiguanide). It is used primarily as its salts (e.g., the dihydrochloride, diacetate, and digluconate). Chlorhexidine is deactivated by forming insoluble salts with anionic compounds, including the anionic surfactants commonly used as detergents in toothpastes and mouthwashes, anionic thickeners such as carbomer, and anionic emulsifiers such as acrylates/C10-30 alkyl acrylate crosspolymer, among many others. For this reason, chlorhexidine mouth rinses should be used at least 30 minutes after other dental products. For best effectiveness, food, drink, smoking, and mouth rinses should be avoided for at least one hour after use. Many topical skin products, cleansers, and hand sanitizers should also be avoided to prevent deactivation when chlorhexidine (as a topical by itself or as a residue from a cleanser) is meant to remain on the skin.
References
British national formulary: BNF 69 (69th ed.). British Medical Association. 2015. pp. 568, 791, 839. ISBN 9780857111562.
Wade, Ryckie G; Bourke, Gráinne; Wormald, Justin C R (9 November 2021). "Chlorhexidine versus povidone–iodine skin antisepsis before upper limb surgery (CIPHUR): an international multicentre prospective cohort study". BJS Open. 5 (6): zrab117. doi:10.1093/bjsopen/zrab117. PMC 8677347. PMID 34915557.
Wade, Ryckie G.; Burr, Nicholas E.; McCauley, Gordon; Bourke, Grainne; Efthimiou, Orestis (December 2021). "The Comparative Efficacy of Chlorhexidine Gluconate and Povidone-iodine Antiseptics for the Prevention of Infection in Clean Surgery: A Systematic Review and Network Meta-analysis". Annals of Surgery. 274 (6): e481–e488. doi:10.1097/SLA.0000000000004076. PMID 32773627. S2CID 225289226.
Briggs, Gerald G.; Freeman, Roger K.; Yaffe, Sumner J. (2011). Drugs in Pregnancy and Lactation: A Reference Guide to Fetal and Neonatal Risk. Lippincott Williams & Wilkins. p. 252. ISBN 9781608317080. Archived from the original on 2017-01-13.
Schmalz, Gottfried; Bindslev, Dorthe Arenholt (2008). Biocompatibility of Dental Materials. Springer Science & Business Media. p. 351. ISBN 9783540777823. Archived from the original on 2017-01-13.
World Health Organization (2019). World Health Organization model list of essential medicines: 21st list 2019. Geneva: World Health Organization. hdl:10665/325771. WHO/MVP/EMP/IAU/2019.06. License: CC BY-NC-SA 3.0 IGO.
"The Top 300 of 2020". ClinCalc. Retrieved 11 April 2020.
"Chlorhexidine - Drug Usage Statistics". ClinCalc. Retrieved 11 April 2020.
Güthner, Thomas; et al. (2007). "Guanidine and Derivatives". Ullmann's Encyclopedia of Industrial Chemistry (7th ed.). Wiley. p. 13.
Lewis, Sharon R.; Schofield-Robinson, Oliver J.; Rhodes, Sarah; Smith, Andrew F. (30 August 2019). "Chlorhexidine bathing of the critically ill for the prevention of hospital-acquired infection". Cochrane Database of Systematic Reviews. 8: CD012248. doi:10.1002/14651858.CD012248.pub2. PMC 6718196. PMID 31476022.
Raab, D. (2008). "Preparation of contaminated root canal systems – the importance of antimicrobial irrigants". DENTAL INC, July/August 2008: 34–36.
Raab, D.; Ma, A. (2008). "Preparation of contaminated root canal systems – the importance of antimicrobial irrigants" [in Chinese]. DENTAL INC Chinese Edition, August 2008: 18–20.
Raab, D. (2010). "Die Bedeutung chemischer Spülungen in der Endodontie". Endodontie Journal 2010: 2; 22–23. http://www.oemus.com/archiv/pub/sim/ej/2010/ej0210/ej0210_22_23_raab.pdf
Leikin, Jerrold B.; Paloucek, Frank P., eds. (2008). "Chlorhexidine Gluconate". Poisoning and Toxicology Handbook (4th ed.). Informa. pp. 183–84.
Harke, Hans-P. (2007). "Disinfectants". Ullmann's Encyclopedia of Industrial Chemistry (7th ed.). Wiley. pp. 10–11.
Wade, Ryckie G.; Burr, Nicholas E.; McCauley, Gordon; Bourke, Grainne; Efthimiou, Orestis (1 September 2020). "The Comparative Efficacy of Chlorhexidine Gluconate and Povidone-iodine Antiseptics for the Prevention of Infection in Clean Surgery: A Systematic Review and Network Meta-analysis". Annals of Surgery. Publish Ahead of Print (6): e481–e488. doi:10.1097/SLA.0000000000004076. PMID 32773627.
Dumville, JC; McFarlane, E; Edwards, P; Lipp, A; Holmes, A; Liu, Z (21 April 2015). "Preoperative skin antiseptics for preventing surgical wound infections after clean surgery". The Cochrane Database of Systematic Reviews (4): CD003949. doi:10.1002/14651858.CD003949.pub4. PMC 6485388. PMID 25897764.
James, P; Worthington, HV; Parnell, C; Harding, M; Lamont, T; Cheung, A; Whelton, H; Riley, P (2017). "Chlorhexidine mouthrinse as an adjunctive treatment for gingival health". Cochrane Database Syst Rev. 3 (12): CD008676. doi:10.1002/14651858.CD008676.pub2. PMC 6464488. PMID 28362061.
Bernardi, F; Pincelli, MR; Carloni, S; Gatto, MR; Montebugnoli, L (August 2004). "Chlorhexidine with an Anti Discoloration System. A comparative study". International Journal of Dental Hygiene. 2 (3): 122–26. doi:10.1111/j.1601-5037.2004.00083.x. PMID 16451475.
Sanz, M.; Vallcorba, N.; Fabregues, S.; Muller, I.; Herkstroter, F. (1994). "The effect of a dentifrice containing chlorhexidine and zinc on plaque, gingivitis, calculus and tooth staining". Journal of Clinical Periodontology. 21 (6): 431–37. doi:10.1111/j.1600-051X.1994.tb00741.x. PMID 8089246.
Kumar, S; Patel, S; Tadakamadla, J; Tibdewal, H; Duraiswamy, P; Kulkarni, S (2013). "Effectiveness of a mouthrinse containing active ingredients in addition to chlorhexidine and triclosan compared with chlorhexidine and triclosan rinses on plaque, gingivitis, supragingival calculus and extrinsic staining". International Journal of Dental Hygiene. 11 (1): 35–40. doi:10.1111/j.1601-5037.2012.00560.x. PMID 22672130.
Kolahi, J; Soolari, A (September 2006). "Rinsing with chlorhexidine gluconate solution after brushing and flossing teeth: a systematic review of effectiveness". Quintessence International. 37 (8): 605–12. PMID 16922019.
Alkharashi, M; Lindsley, K; Law, HA; Sikder, S (2015). "Medical interventions for acanthamoeba keratitis". Cochrane Database Syst Rev. 2 (2): CD0010792. doi:10.1002/14651858.CD010792.pub2. PMC 4730543. PMID 25710134.
Sinha, A; Sazawal, S; Pradhan, A; Ramji, S; Opiyo, N (2015). "Chlorhexidine skin or cord care for prevention of mortality and infections in neonates". Cochrane Database Syst Rev. 3 (3): CD007835. doi:10.1002/14651858.CD007835.pub2. PMID 25739381. S2CID 16586836.
Lai, P; Coulson, C; Pothier, D. D; Rutka, J (2011). "Chlorhexidine ototoxicity in ear surgery, part 1: Review of the literature". Journal of Otolaryngology - Head & Neck Surgery. 40 (6): 437–40. PMID 22420428.
"Daily Bathing With Antiseptic Agent Significantly Reduces Risk of Hospital-Acquired Infections in Intensive Care Unit Patients". Agency for Healthcare Research and Quality. 2014-04-23. Archived from the original on 2017-01-13. Retrieved 2014-04-29.
Below, H.; Assadian, O.; Baguhl, R.; Hildebrandt, U.; Jäger, B.; Meissner, K.; Leaper, D.J.; Kramer, A. (2017). "Measurements of chlorhexidine, p-chloroaniline, and p-chloronitrobenzene in saliva after mouth wash before and after operation with 0.2% chlorhexidine digluconate in maxillofacial surgery: a randomised controlled trial". British Journal of Oral and Maxillofacial Surgery. 55 (2): 150–155. doi:10.1016/j.bjoms.2016.10.007. PMID 27789177.
Hirata, Kiyotaka; Kurokawa, Akira (April 2002). "Chlorhexidine gluconate ingestion resulting in fatal respiratory distress syndrome". Veterinary and Human Toxicology. 44 (2): 89–91. ISSN 0145-6296. PMID 11931511. An 80-y-old woman with dementia accidentally ingested approximately 200 ml of Maskin (5% CHG) in a nursing home and then presumably aspirated gastric contents.
Tanzer, JM; Slee, AM; Kamay, BA (1977). "Structural requirements of guanide, biguanide, and bisbiguanide agents for antiplaque activity". Antimicrob. Agents Chemother. 12 (6): 721–9. doi:10.1128/aac.12.6.721. PMC 430011. PMID 931371.
Denton, Graham W (2000). "Chlorhexidine". In Block, Seymour S (ed.). Disinfection, Sterilization, and Preservation (5th ed.). Lippincott Williams & Wilkins. pp. 321–36. ISBN 978-0-683-30740-5.
Rose, F. L.; Swain, G. (1956). "850. Bisdiguanides having antibacterial activity". Journal of the Chemical Society (Resumed): 4422. doi:10.1039/JR9560004422.
"Hibiclens Uses, Side Effects & Warnings - Drugs.com". Drugs.com.
"Chlorhexidine gluconate Uses, Side Effects & Warnings - Drugs.com". Drugs.com. Retrieved 4 August 2018.
van Hengel, Tosca; ter Haar, Gert; Kirpensteijn, Jolle (2013). "Chapter 2. Wound management: a new protocol for dogs and cats. Chlorhexidine solution". In Kirpensteijn, Jolle; ter Haar, Gert (eds.). Reconstructive Surgery and Wound Management of the Dog and Cat. CRC Press. ISBN 9781482261455.
Maddison, Jill E.; Page, Stephen W.; Church, David B., eds. (2008). "Antimicrobial agents. Chlorhexidine". Small Animal Clinical Pharmacology. Elsevier Health Sciences. p. 552. ISBN 978-0702028588.
Blowey, Roger William; Edmondson, Peter (2010). Mastitis Control in Dairy Herds. CABI. p. 120. ISBN 9781845937515.
Zeman, D; Mosley, J; Leslie-Steen, P (Winter 1996). "Post-Surgical Respiratory Distress in Cats Associated with Chlorhexidine Surgical Scrubs". ADDL Newsletters. Indiana Animal Disease Diagnostic Laboratory. Archived from the original on 2011-09-27. Retrieved 2011-09-11.
"Chlorhexidine". Drug Information Portal. U.S. National Library of Medicine.
<urn:uuid:9f48eecc-c240-4c67-8375-00bbfc6b8e9a>
CC-MAIN-2022-33
https://www.knowpia.com/knowpedia/Chlorhexidine
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570793.14/warc/CC-MAIN-20220808092125-20220808122125-00699.warc.gz
en
0.723578
4,298
2.765625
3
Editor’s Note: Excerpted from Shouting Won’t Help: Why I—and 50 Million Other Americans—Can’t Hear You, by Katherine Bouton, published by Sarah Crichton Books, an imprint of Farrar, Straus and Giroux, LLC. Copyright © 2013 Katherine Bouton. All rights reserved. “You’ll never be deaf,” Dr. Hoffman said to me years ago. At the time, I thought he meant I’d never lose all my hearing. But what I know now is that technology would take over when my ears no longer worked. Through a cochlear implant, I would continue to hear long after my ears ceased to function. Research holds the promise that the kind of hearing loss I have may someday be reversible, returning the ear to close to its original pristine condition. Probably not soon and not for me, but most researchers think that within a decade they may have the tools that will eventually allow doctors to stop the progression of sensorineural hearing loss, including age-related hearing loss. Putting those tools into practice will take much longer. (Gene therapy, for people whose hearing loss has a genetic basis, will probably come sooner, possibly in the next decade.) The best guesses for hair cell regeneration—for the much larger group of people whose sensorineural loss is caused by noise or ototoxins or age—range anywhere from twenty to fifty years. Until recently, scientists focused on the development of devices that would take the place of normal hearing: hearing aids and cochlear implants. The pharmaceutical industry, usually so quick to jump on the opportunity to medicalize a chronic age-related condition—dry eyes and wrinkles, trouble sleeping, lagging sexual function, bladder control, memory loss—has not paid much attention to age-related hearing loss, in terms either of prevention or cure. There are no FDA-approved drugs for the treatment of hearing loss. Demographics alone would suggest they are missing a big opportunity. In October 2011, the Hearing Health Foundation (formerly the Deafness Research Foundation) held a symposium in New York to kick off its new campaign, called the Hearing Restoration Project, an ambitious program that had enlisted, at that point, fourteen researchers from ten major hearing loss research centers in the United States. This consortium will share findings, with the goal of developing a biological cure for hearing loss in the next ten years. With a fund-raising target of $50 million, or $5 million a year, the Hearing Restoration Project will tackle the problem of hearing loss with the aim of curing it, not treating it. The funding is relatively small right now, but there is hope that the foundation will be able to raise more in future years. Individual consortium members may currently receive somewhere between 5 and 20 percent of a laboratory’s annual budget from the Hearing Health Foundation. But the collaborative nature of the venture is unusual. (A similar consortium exists for the study of myelin diseases—a factor in multiple sclerosis as well as hereditary neurodegenerative diseases.) Under its previous name, the Deafness Research Foundation, funding was limited to early-career support for researchers. They’ve now added the Hearing Restoration Project. The symposium, titled “The Promise of Cell Regeneration,” brought together leading researchers in the field of hearing loss. Dr. George A. Gates, an M.D. and the scientific director of the Hearing Restoration Project, chaired the program. The speakers included Sujana Chandrasekhar, an M.D.
and director of New York Otology, who talked from a clinical perspective about the current state of hearing loss research. Ed Rubel, from the University of Washington, discussed the history of hair cell regeneration research and his current work on regenerating hair cells through pharmaceutical applications. Stefan Heller discussed his lab’s announcement in May of 2010 of the first successful attempt at generating mammalian hair cells (of mice) in a laboratory setting from stem cell transplants. Andy Groves, from Baylor, discussed the many still-existing hurdles to hair cell regeneration in humans. Unable to attend was Douglas Cotanche, currently working at Harvard on noise-induced hearing loss in military personnel. Humans have 30,000 cochlear and vestibular hair cells. By contrast, the human retina has 120 million photoreceptors. The 30,000 hair cells, arranged in four rows and protected by the hard shell of the cochlea, determine how well you can hear. If you lose the outer cells, you suffer up to a 60-decibel hearing loss. That degree of hearing loss can usually be corrected with a hearing aid. If you lose the inner row of cells, you may have a total loss. The more inner cells damaged, the greater the degree of loss. Sharon Kujawa, speaking at the 2011 HLAA meeting, had described the damaged cells as lying flat, like a field of wheat after a storm. Stefan Heller drew an even more graphic picture of severe damage. The flattened cells, he said, may be “followed by a collapse of the tunnel of Corti, resulting in a structure that often features an unorganized mound of inconspicuous cells.” Surrounding the inner and outer hair cells are the so-called supporting cells, which come in all varieties: Deiters’ cells, Claudius’ cells, Hensen’s cells, inner pillar, and outer pillar cells. Supporting cells are the magical cells that instigate regeneration in damaged inner ears of chicks and fish. And they are where someday regeneration may occur in humans. That limited number of hair cells, as well as their fragility and inaccessibility has hampered research. In his 2010 Cell paper, Stefan Heller noted, “The inner ear shelters the last of our senses for which the molecular basis is unknown.” So little is known about the structure of the inner ear that, as Dr. Gates said, “we have a hard time clinically knowing how much [loss] is outer and how much is inner. That’s why we use the term sensorineural.” • • • Ed Rubel’s photo on the University of Washington website shows a balding middle-aged man, elbows on the table, with two yellow chicks. That whimsical photo belies a seriously impressive academic c.v.: Virginia Merrill Bloedel Professor of Hearing Science; Professor of Otolaryngology—Head and Neck Surgery; Professor of Physiology and Biophysics; Adjunct Professor of Psychology. Dr. Gates referred to him as the godfather of hair cell regeneration. Rubel and his colleagues at the Virginia Bloedel Hearing Research Center see four clinical scenarios that lend themselves to pharmaceutical fixes. The first would reverse sudden sensorineural hearing loss. The second would prevent ototoxic and/or noise-related hearing loss. The third would retard the progression of hearing loss, especially age-related hearing loss. The fourth would restore hearing after it had been lost. Until 1985 it was thought that no animals could regenerate hair cells once they were destroyed. Rubel, then at the University of Virginia, discovered, quite inadvertently, that some do. 
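A brief aside on the 60-decibel figure quoted above: decibels are logarithmic, so the standard conversion below (not specific to this excerpt) shows how large a 60 dB threshold shift really is.

```latex
% A hearing-threshold shift of Delta L decibels means a sound must carry
% 10^{Delta L / 10} times more intensity (power) before it is heard.
\[
  \Delta L = 60\,\mathrm{dB}
  \;\Longrightarrow\;
  \frac{I_{\mathrm{needed}}}{I_{\mathrm{normal}}} = 10^{60/10} = 10^{6},
\]
% i.e. about a millionfold increase in intensity (roughly a thousandfold in sound
% pressure, since intensity scales with pressure squared). A loss of this size can
% still be bridged by the amplification of a hearing aid, as the text notes.
```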
The purpose of his research was to determine how long it took for ototoxic drugs to damage hair cells. He and his lab partners chose chicks as their animal model. Chicks have an easily accessible inner ear, and their ears in many ways resemble the human inner ear. Rubel gave the chicks hair-cell-destroying aminoglycocides—a class of antibiotic known to be ototoxic—and then assigned the new guy at the lab, as Rubel put it in his talk at the conference, a resident named Raul Cruz, to sacrifice the chicks after a certain number of days and study the degree of deterioration in the hair cells. After eight days, Cruz found the chicks had, as expected, lost many cells. But when he studied the slides taken from chicks sacrificed at twenty-two days, instead of more dead cells they showed fewer. Where there had been dead cells, there were now healthy ones. “Raul, you must have mixed up your animals, go back and do it again,” Rubel recounted, adding, to laughs, “Because he was just a resident, he didn’t know what he was doing.” Again, Cruz brought similar data. This time Rubel told him to change his counting criteria. Even then, the regenerated cells were still there. “Well, maybe I better look in the microscope,” Rubel said. Cruz was right, of course. But no one understood the mechanism. “What’s going on here?” they asked themselves. Around the same time, Doug Cotanche, then a post doc at the University of Pennsylvania, saw the same results in chicks after damage due to intense noise exposure. Rubel and Cotanche published separate papers in different scientific journals, but continued communicating and soon got together to publish dual papers in the prestigious journal Science, showing, as Rubel said, that these were indeed brand-new hair cells “due to new cell division and the creation of new cells in the inner ear.” This was a stunning scientific development. “And wow, we had a new field.” The next step was to figure out how chickens did it. Studying the cochlea of chicks and other birds, Rubel and others found eventually—over eighteen years!—that bird hair cells do indeed regenerate. Over the same period they discovered many other important molecular and functional details related to this remarkable ability. He showed some slides. The first slide showed the condition of the hair cells shortly after the animals were exposed to noise: “It looks just terrible. All these hair cells are blebbing out and being discarded.” Five days later, however, they could see “baby cells” budding, some of them with the distinctive hair, or microvillus, on top. Then, after a few days, a high-power scanning electron microscope showed that all the hair cells were back. Not perfect, a few small abnormalities, but perfectly functional. Interestingly, Rubel went on, they found that this happened no matter the age of the bird. Brenda Ryals, a former student of Rubel’s, had a colony of senile quail that, they found, “regenerate cells just as well as a baby chicken does,” Rubel said. New cells are created not only in the cochlea but also in the vestibular epithelium, important for balance. And, perhaps most important, the new cells are appropriately connected to the brain. “The new cells restore near-normal hearing and perfectly normal vestibular reflexes. They restore perception and production of complex vocalizations. 
Birds lose their song when they lose their hearing, but they gain their song back when they restore their hearing.” In 2001 Rubel joined forces with David Raible, also at UW, who was using zebrafish, a popular aquarium species, to study development of the nervous system. Eleven years later the two labs are still collaborating on understanding how to prevent and cure hearing loss, and on hair cell regeneration. Zebrafish proved to be an even better animal model for studying some aspects of hearing-loss prevention and regeneration than birds. In addition to hair cells in the inner ear, aquatic vertebrates like fish have hair cells on the outside of the body, in something called the lateral line. The lateral line is used for detecting change in water currents and its cells are physiologically very similar to human inner ear cells. At electron microscope level, intracellular structure is similar. It turns out that fish and reptiles, like birds, regenerate hair cells, as do frogs and other animals. “So why can’t we?” Rubel asked. The Rubel/Raible team subjected the zebrafish larvae to ototoxic screening, again using aminoglycoside antibiotics. They tested drugs and druglike compounds to find ones that inhibit hair cell death in the fish. This work may lead to the development of protective cocktails to preserve hair cells before exposure to antibiotics or ototoxic chemotherapy drugs. They may also be given to humans after ototoxic assaults, which include noise exposure. So far, testing on mammals, not to mention humans, is preliminary. Each human cochlea has only those 15,000 hair cells (the other 15,000 are in the vestibular system), and they are inaccessible in a living person. These generally decrease as we age, although not always. “Some animals and some humans seem resistant to noise and drugs and some humans hear perfectly until old age,” Rubel said. “What grants this protection? Do some people have genetically ‘tough’ ears and others have ‘weak’ ears? If so, what are the genes responsible for this difference, and can we use them to protect hearing?” By doing genetic screening in zebrafish, it may be possible to find these genes and then find the cellular pathways to turn “weak ears” into “tough ears.” In March of 2012, I met with Rubel and a group of younger fellow researchers at the Virginia Merrill Bloedel Center. Rubel is a charismatic leader, but he insisted on referring to these researchers not as part of his lab but as independent scientists, with their own NIH grants, some with their own Hearing Restoration projects. David Raible was out of town. Raible, Jennifer Stone, and Elizabeth Oesterle collaborate on different projects with each other and with Rubel. But, Rubel said, “Having sort of started the hair-cell regeneration field, I feel very comfortable getting out of it and doing other things.” Jennifer Stone is a cell biologist and neuroanatomist, who works primarily on avian hair cell regeneration. About five or six years ago she started working with mice, with several of the other participants, including Rubel and Elizabeth Oesterle, a cell biologist, and Clifford Hume, an M.D./Ph.D. clinician scientist. Stone led a recent study which found that after virtually all the vestibular hair cells in adult mice are killed, 16 percent of the hair cell population comes back spontaneously. “It’s a new discovery,” Stone said. 
“It’s not entirely surprising, but I think we’ve demonstrated it pretty definitively.” Because spontaneous regeneration happens in only certain regions of the vestibular system, it helps the researchers narrow the field. By comparing the tissue in this region to tissue in others, they may understand the factor that allows regeneration in one place but not in another. Once we understand what allows the tissue to make new hair cells in these regions, Stone said, we can determine what would be needed to “release the brake,” as she put it. The p27 gene, which regulates cell division and helps prevent cancer, is one such molecule. To allow these hair cells to divide, the p27 gene would need to be turned off. Or maybe, she added, “it could be that we need to push the pedal on the gas: add something to promote division. It could be that we have to both put on the brakes and push on the pedal to start this process in mammals.” Julian Simon is a chemist, a Ph.D. pharmacologist, who got interested in the toxicity of cancer therapeutic drugs when clinicians at the Seattle Cancer Care Alliance, the patient arm of the Fred Hutchinson Cancer Research Center and the University of Washington, complained about the ototoxicity of certain chemotherapy drugs, Cisplatin prominently among them. Simon said that 30 to 40 percent of patients who go on Cisplatin regimens for lung cancer sustain significant and permanent hearing loss. (Rubel told me that some reports suggest an even higher percentage, up to 80 percent.) Simon’s approach is to use small molecules to “perturb” biological systems. “We know what we’d like the cells to do, and in this case we’d like to take cells that would otherwise die and keep them living.” Because the whole process of sensory hair cell death is—“with all due respect to present company” (meaning his fellow researchers)—“poorly understood, by learning how we can protect these cells from dying, maybe we can also learn something about the way cells die. Why they die.” Clifford Hume and Henry Ou are clinicians. Ou is a pediatric otolaryngologist at Seattle Children’s Hospital. Both split their time between clinical work and research. As Ou said, “I help families understand hearing loss, try to diagnose the cause of the hearing loss in their child. And I try to figure out the etiology of hearing loss in general—in both kids who develop it and kids who are born with it.” The team’s approach is multidisciplinary, involving not only research scientists and clinicians but also psychologists, genetic counselors, audiologists, and special education specialists. In adult hearing loss, they are also looking at the role of prescription medications in age-related hearing loss. Many are life-saving medications, but sometimes less toxic substitutes may be available. The UW group moved on to a lively discussion about how they would advise the parents of a young child getting implants. Should the child get implants in both ears? Cochlear implants cause the destruction of the so-called support cells that might give rise to new hair cells. Hence, should the parents “save” one ear in the hope that cell regeneration technology will eventually enable the child to hear normally out of that ear? Henry Ou said that parents often ask him about a second implant. “Sometimes they ask, ‘Do you think there’s hope that this is going to be fixed?’ I say, ‘Yeah.’ But at the same time, if I don’t think there’s hope, I shouldn’t be doing research on it. 
I’m a conflicted person to ask.” Simon added: “Parents don’t want to find out when their kid is eighteen that there is something better.” He cited the substantial evidence that children do better in school when they’re implanted earlier, and bilaterally. Rubel agreed with the basic premise that early intervention is enormously important and that cochlear implants in children have become an essential therapeutic option, but expressed skepticism about the value of always doing bilateral cochlear implant surgery. Referring to one study in particular, he said, “The little known fact about this work is that it includes only the top 20 percent of single implant users.” Another study found different results. “So I think it’s still up in the air,” Rubel said. We just don’t have enough information yet to know the impact that implants make at that critical learning period for language and speech comprehension. But, as Jenny Stone said, the same question could be asked about regenerated hair cells. “The big elephant in the room, I think,” she said, “is that we don’t know whether regenerated hair cells will result in better hearing—appreciation of music, noise, speech—than a cochlear implant can. And I think it’s a huge jump to assume that in twenty years we’ll be there.” “Well but in fifty years?” Rubel interjected. “Maybe in fifty years,” Stone replied. “I keep going back to the bird,” Rubel said, “and we absolutely know that the bird gets great hearing back. They can recognize their own songs, they can learn new songs, not only speech but song recognition!” “He loves birds,” Jenny Stone said. “I’m not trying to be pessimistic. But it’s going to take a lot of time to really get concrete evidence for what the best type of repair is going to look like.” • • • How is it that mammals got shortchanged in the hair cell regeneration department? Birds and mammals split 300 million years ago. Birds share a more recent common ancestor with reptiles. The hair cells of a bird are “scattered in a mosaic all over the surface of the hearing organ,” Andy Groves told the Hearing Restoration conference. Mammals, by contrast, have decreased the number of hair cells and specialized the function of the supporting cells surrounding them. Supporting cells physically position hair cells, Groves explained, and they provide structural integrity to the cochlea to make it mechanically sensitive. Why would this evolutionary adaptation have occurred? Groves speculates that mammals made a trade-off: in the course of developing high-frequency hearing, their hair cells became more specialized, and in the process they lost the ability to regenerate. Although we humans have devised many ways of inflicting hearing loss on ourselves (such as rock concerts, iPods, and heavy machinery), one of the few naturally occurring things that kills hair cells is the wear and tear of old age. (Unless it turns out that even that is the result of accumulated noise exposure.) “From an evolutionary point of view,” Groves said, “and this sounds rather brutal, but evolution doesn’t care about old age, as long as you live long enough to have kids.” Once your reproductive years are over, your body has done its evolutionary job. As a result, mammals would not suffer a selective disadvantage by losing the capacity to regenerate their hair cells. Bruce Tempel, at the University of Washington, echoed that Darwinian opinion. For the past twenty to twenty-five years he has been looking at the genes implicated at one or another level in hearing loss. 
“Truth be told,” he said in an interview, “the reason that I got really interested in the auditory system is because you don’t need it. From a geneticist’s point of view, basically, this is great. This system can be completely nonfunctional and the animal still survives.” He added that stress and hormonal influences on hearing loss are part of the reason the auditory system is so useful to geneticists: “You’re able to identify the genes, the proteins, and from studying the protein itself find out whether there’s a hormone or an influence on the expression of that protein. You can find out if there are interacting proteins that become a cascade linking the different individual proteins and the genes. And what’s really cool about the auditory system is that we can do all that and still have a viable animal.” • • • Andy Groves also studies the genetics of hearing loss. One of the mammalian genes whose function is to stop cells dividing (necessary, to regulate the size of organs and protect against cancer) is the p27 gene, which Jenny Stone talked about at the UW group meeting. Figuring out how to switch off that gene is one of the biggest obstacles researchers face. After a great deal of work in cell-culture dishes and looking through microscopes, Groves and his colleague Neil Segil discovered that when they isolated mouse supporting cells from the newborn cochlea, the action triggered the p27 gene to switch off and the supporting cells to start dividing. They don’t know why. Unlike humans, mice cannot hear when they are born. By the time they begin to hear—at about two weeks after birth—mouse supporting cells stubbornly refuse to divide even when isolated from the cochlea. Groves, Segil, and their colleagues are now trying to understand what happens to the aging supporting cells that makes them unable to divide. How can supporting cells be coaxed into making more hair cells? Almost twenty years ago, it was proposed that hair cells and support cells, side by side, participate in an ongoing conversation using an evolutionarily ancient communication system called the Notch signaling pathway. The hair cell commands the support cell not to divide and prevents it from becoming a hair cell. Because the mammalian cochlea has evolved to have only four rows of cells, Groves explained, the creation of more cells would disrupt the mechanical properties of the cochlea, possibly preventing it from working properly. The role of the Notch pathway in regulating the activity of the p27 gene is controversial. Groves mentioned work that Amy Kiernan, currently on the faculty at the University of Rochester, carried out when she was a postdoc with Tom Gridley at the Jackson Laboratory in Bar Harbor, Maine. She managed to inactivate the Notch signaling pathway in mice genetically. Her mice produced extra hair cells and showed some precocious cell division in the cochlea. Another researcher working with Groves and Segil, Angelika Doetzlhofer, did the same, using a pharmacological approach with drugs that blocked Notch signaling. When they blocked the signaling in newborn mice, they saw a 50 percent increase in hair cells and fewer supporting cells. These findings are preliminary, Groves cautioned, and the role of the Notch pathway is still being studied. Following up on this, Groves and colleagues repeated their Notch blocking experiments in older mice. By the time the mice were three days old, the increase in hair cells had dropped to 30 percent. In six-day-old mice, new hair cells were no longer produced. 
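The Notch logic described above, in which a hair cell instructs its neighbouring supporting cells to stay as they are, and the age-dependent loss of responsiveness to Notch blockers can be caricatured in a few lines of code. This is a deliberately simplified toy model: the cell row, the conversion probability and the linear "competence" decay are invented here for illustration and are not the researchers' model.

```python
import random

# Toy caricature of Notch-mediated lateral inhibition in a row of cochlear cells.
# 'H' = hair cell, 'S' = supporting cell. A supporting cell next to a hair cell
# normally receives a Notch signal and stays a supporting cell; if Notch signaling
# is blocked, it may convert into an extra hair cell. The "competence" factor
# mimics the reported decline with age (strong response at birth, none by ~day 6).

def competence(age_days: float) -> float:
    # Invented linear decay: full response at day 0, no response from day 6 onward.
    return max(0.0, 1.0 - age_days / 6.0)

def notch_block(cells, age_days, p_convert=0.5):
    out = list(cells)
    for i, cell in enumerate(cells):
        neighbors = cells[max(0, i - 1):i + 2]
        if cell == "S" and "H" in neighbors:
            # Notch is blocked, so the "stay a supporting cell" command is lost.
            if random.random() < p_convert * competence(age_days):
                out[i] = "H"
    return out

row = list("SHSSHSSHSS")          # a made-up mosaic of hair and supporting cells
for age in (0, 3, 6):
    random.seed(1)                # same random draws so ages are comparable
    treated = notch_block(row, age)
    gained = treated.count("H") - row.count("H")
    print(f"age {age} d: {''.join(treated)}  extra hair cells: {gained}")
```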
Although extrapolating this timetable to humans is tricky, the current data suggests that the human cochlea may no longer respond to Notch inhibitors by the time the fetus is five to six months old. “So here is the take-home message,” Groves concluded. “Our challenge—if you want to set a ten-year challenge—is to understand these roadblocks and then devise methods to get rid of them, and ultimately to apply these methods in a clinical setting.” A clinical setting populated by humans. As Groves said at the beginning of his talk, “We’re not here to treat hearing loss in birds.” • • • Stefan Heller and his colleagues are taking a different approach to regenerating hair cells. They are attempting to get stem cells—undifferentiated cells that can develop into various specialized cells—to turn into hair cells, by mimicking the naturally occurring developmental processes that lead to formation of the inner ear. They do this in a culture dish and in a laboratory setting, which allows them to learn a lot about the process, such as what it actually takes to make sensory hair cells from scratch. In March 2012, I visited Heller’s lab at Stanford in Palo Alto. We literally ran into each other as I was looking for his office. Heller is formidably smart but completely unimposing in manner. He was wearing a well-worn T-shirt with a coffee cup on it (half full? half empty? “Definitely half full,” he said), jeans, and sneakers. We talked in his office with a huge humming fish tank taking up about a sixth of the office. I asked if he had zebrafish. He said he didn’t, but Dr. Robert Jackler, the chair of the Stanford Otolaryngology Department and the force behind the accumulation of brain power that makes Stanford’s one of the most important hearing research departments in the world, told me that Heller raises anemones to get novel fluorochromes for his research. It had been two years since his article appeared in Cell, under the characteristically cryptic (to laymen) title: “Mechanosensitive Hair Cell-like Cells from Embryonic and Induced Pluripotent Stem Cells.” As he had explained to the Hearing Restoration audience, his lab works with three kinds of stem cells. The first are embryonic stem cells, which are derived from the inner cell mass of a blastocyst, an early embryo. The lab uses both mouse embryonic stem cells and human embryonic stem cells. (In 2009 President Obama lifted an eight-year ban on federal funding of human embryonic stem cell research, vastly increasing the number of cells available to researchers. The cells are derived primarily from human embryos left after fertility treatments.) Dr. Heller noted that a scientist has to be really “talented” to grow these cells, which involve an underlying structure with other cells on top: if left on their own, they would overgrow everything. “This is quite a bit of maintenance. It’s actually labor-intensive work.” The second type are the induced pluripotent stem cells (iPSCs) referred to in the title of the 2010 article. These are, according to the NIH website, “adult cells that have been genetically reprogrammed to an embryonic cell-like state.” The NIH definition goes on: “It is not known if iPSCs and embryonic stem cells differ in clinically significant ways.” That Heller and his lab were able to produce sensory hair cells in mice using both these kinds of stem cells is significant. 
Further, that they were “mechanosensitive” means that they were responsive to mechanical stimulation, and that these responses were similar to those in immature hair cells. The third type are somatic stem cells, cells isolated from a specific organ—like the human ear. As attractive as these cells are to religious conservatives who oppose embryonic stem cell use, up until now they have not seemed to be a viable option because, as Heller said, “these cells are very rare.” Embryonic stem cells and pluripotent stem cells share an unfortunate feature: they can generate tumors. Heller said that he’s received many e-mails from patients offering to be subjects for human trials. He showed the audience at the hearing regeneration conference a slide of a mouse that had been injected with a small number of these cells: “After one month, this mouse grows an enormous tumor.” Before they can be used to regenerate hair cells, these stem cells will have to be rendered non-tumorigenic. Somatic stem cells don’t cause tumors, but there aren’t enough of them. Scientists have not been able to isolate enough of these cells from the ear to study their advantages and disadvantages over the more abundant but problematic embryonic and pluripotent cells. Induced pluripotent stem cells appear to be the perfect compromise. These cells can be generated from virtually any cell of someone’s body, and Heller’s lab has been working with somatic cells derived from skin biopsies, usually from a patient’s arm, a human patient with hearing loss. “The work is very exciting,” he told me. “Treating the cells from the biopsy with reprogramming factors, they can turn a somatic cell into an induced pluripotent stem cell (iPS cell). They can then grow them in a culture much the way they do embryonic cells, but without the religious or ethical controversy. “We are basically making hair cells from human skin cells,” he said. “These cells are not from the ear, so making the claim that these are hair cells is a difficult one. But they do have all the features of hair cells. They look like hair cells, they express genes that one would expect to find in hair cells, and they are functional, and moreover, we are approaching the point where we can generate human hair cells.” Many steps remain before this becomes anything like a clinical reality, however, and each step takes a long time and a lot of money. Just as a mouse embryo takes only three weeks to develop, compared to a human’s nine months, the mouse embryonic stem cells take eighteen to twenty days to become hair cells. The human cells take forty. They require constant monitoring and tending, Heller said. “You can’t just close the incubator and come back in a week and hope for the best. You have to—every day—replace the culture medium. You have to look at the cells. You have to clean out areas you don’t like. It’s a little bit like a garden. You’re nurturing a very precious plant.” The iPS cells have to reproduce for about thirty generations before they can be used for experiments, which means it takes about 150 days to successfully culture-these cells from patients. By the spring of 2012 they had cultured biopsies from three genetically hearing-impaired patients. They had funding for about twelve altogether, from the NIH. It’s a long way from mouse to man, but Heller said at the Hearing Restoration symposium that despite the challenge, “we’re getting close.” One of the major findings of the last five or ten years, Heller said, is coming to understand the roadblocks. 
Once they know what obstacles stand in the way of transplantation, they can begin to figure out how to get around them. The first roadblock is the fact that these cells cause tumors. Looking ahead, Heller said, scientists need another five to ten years to solve that problem—a knotty one that involves learning how to generate pure cells and cells that are not tumorigenic. Once they solve that one, they will encounter new roadblocks: how to deliver the stem cells into the ear, determining the appropriate site for integration of cells, how to ensure their long-term survival, how to block immune system responses, how to make sure the cells function—“and, of course, whether the cells improve hearing.” As a young assistant professor, Heller told the Hearing Restoration symposium, if he had been asked how long it would take to cure hearing loss, he would have said, “You know, in five years we’ll have a solution for certain things.” Over time, he went on, “I got a sense of the difficulty of the problem and of all the roadblocks and the issues we have to deal with. And I’m getting frustrated myself, how long it takes to overcome a single one of these roadblocks. And then you’re over one hill and there’s another one.” The difference now, he said, is that “we know where we have to go, and what we have to do. It’s difficult to assess whether it will take ten years or twenty years or even fifty years.” Later, he went back to the time line again. “I think for transplantation we need another five to ten years before we are at the point where we can generate pure cells and cells that are not tumorigenic, to start doing experiments with animals.” Ed Rubel, in our interview, also gave a time line for his lab’s work: “I think with proper funding, we can, in ten years, develop ways to get sufficient numbers of hair cells in a laboratory mammal cochlea as a model. We [meaning researchers in the field] will then go on to optimize the drug or drugs in all the ways needed to use them safely in humans, and only then go to clinical trials.” He pointed out that they already know some genes and some compounds that facilitate the production of new hair cells in some conditions, but they don’t have the lead compound. Even once they find that lead compound, he added, “all the safety trials, in vitro trials and small animal trials, all that preclinical work, usually takes eight to ten years.” As for gene therapy, for those whose hearing loss has a genetic basis, Stefan Heller cites what happened with research on vision and blindness: “Twenty years ago this was an open field, and now it has evolved into a flourishing clinical field and a very lively biotechnology field” with a market for drugs and procedures. “I think we can use the vectors and tools they’ve developed and bring them into our field. So I think five to seven years is probably a reasonable time frame for seeing results in animal studies. Gene therapy would be used on people whose deafness is caused by a mutation in a certain gene. If you could deliver the correct gene into the inner ear, you might be able to repair hearing loss before it progresses too far for repair. “There are hurdles to overcome in this therapy as well: First, as always, safety concerns. Second, how to deliver the virus carrying the corrective gene into all the regions of the cochlea, that tiny inaccessible spiral. An injection that succeeds in getting only partway into the cochlea would leave the patient with middle- and low-frequency loss. 
To reach these areas might require opening the cochlea, which would carry a high risk of doing further damage. One further problem is ensuring that the hair cells grow where they are supposed to. Hair cells not at the correct location in the organ of Corti can themselves contribute to profound hearing loss.” As for the development of prophylactic drugs, the use of “high throughput methods” will help make the time line a little shorter. High throughput methods—also called high content screens—use multiple cell culture dishes testing hundreds or thousands of compounds. Robots may also be used to speed the testing process. This requires work “with big pharma—because we cannot do this in our lab,” Heller said. High throughput screening and the backing of big pharma would increase efficiency, allow the earlier use of screens directly with human cells so they don’t have to go through mice first and then on to humans.
<urn:uuid:a33b3050-76dd-4d0f-acf0-2eed404da13e>
CC-MAIN-2022-33
https://www.scientificamerican.com/article/researchers-home-in-on-biological-ways-to-restore-hearing/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572221.38/warc/CC-MAIN-20220816060335-20220816090335-00098.warc.gz
en
0.962237
7,873
2.8125
3
Phylum: Amoebozoa. Text © Prof. Giorgio Venturini; English translation by Mario Beltramini. Amoeba proteus Pallas, 1766, is the best-known species of a genus of protists formerly placed in the phylum Rhizopoda but now, on the basis of molecular data, assigned to the phylum Amoebozoa within the kingdom of the protists. It should be kept in mind, however, that the classification of the protists is still much debated. Most species are free-living in soil, mud or water, where they feed on bacteria, small protists and other unicellular organisms; only a few species are parasitic. Amoeba proteus is the most thoroughly studied species: a unicellular organism about 250 microns (μm) in diameter, occasionally reaching 400-500 microns, able to emit lobe-shaped cytoplasmic extensions, the pseudopodia, used for locomotion and for capturing prey. The related genus Chaos includes the largest known amoebae, among them Chaos carolinense, which may reach 5 mm. Observed fresh and unstained under the microscope, the amoeba appears as an irregularly shaped, colourless, translucent structure in continuous motion: it changes shape, emits pseudopodia and shows a constant flow of granular cytoplasmic material. Inside, besides numerous granules, a nucleus and a large rounded vacuole can be seen. Unlike members of the genus Chaos, which are multinucleate, Amoeba proteus has a single nucleus. Its habitat is essentially the mud at the bottom of freshwater pools, lakes and slow-flowing streams, often beneath aquatic vegetation, and also very damp soils. The genus name Amoeba (Italian ameba) comes from the Greek "amoibé" (ἀμοιβή), meaning change; Proteus, in Greek mythology, is a sea-god, also called the "Old Man of the Sea", able to transform himself into the shape of any animal or plant, and even into things such as water or fire. The oldest description of an amoeba is due to August Johann Rösel von Rosenhof, an Austrian nobleman, miniature painter, naturalist and entomologist, who in 1755 christened it "Der kleine Proteus" ("the little Proteus"). Thanks to his skill as a painter, he left us beautiful illustrations of the amoeba and of many other organisms. Historically, microscopists divided the cytoplasm of the amoeba into two parts: an outer, finely granular "ectoplasm" in contact with the cell membrane, and an inner, granular "endoplasm", itself distinguishable into a more viscous portion, the plasmagel, and a less viscous one, the plasmasol. In reality these subdivisions reflect different functional states of the cytoplasm and are continuously changing. To understand how an amoeba moves, think of our own step: one leg swings forward and the foot touches the ground; the planted foot then serves as an anchor for pulling the body forward through contraction of the hip muscles, while the other foot, previously on the ground, lifts. The principle is the same in the amoeba: a pseudopod is projected forward, anchors to the substratum, and the cell body is pulled toward it while the older anchorages detach. We now need to understand how a pseudopod forms, how it elongates and anchors to the substratum, and how the cell body is pulled forward.
Various theories have been proposed over time to explain the movements of the amoeba and of other cells capable of amoeboid movement. They usually invoke changes in the consistency of different parts of the cytoplasm, which alternately pass from a sol state to a gel state (that is, from liquid to gelatinous and back), allowing a sort of squeezing of the cytoplasm toward the advancing front. What these theories leave unclear is which mechanisms cause the changes in viscosity, how they are directed, how they actually generate movement, and what supplies the energy for this work. More recently, attention has turned to certain proteins that are present in the cytoplasm of the amoeba and well known in multicellular organisms. They form a genuinely flexible intracellular skeleton (the cytoskeleton), able to lengthen and shorten actively and also to act as tracks along which other proteins can move: true molecular motors that in turn drag other protein filaments. These cytoskeletal proteins and their associated molecular motors also underlie the movements of our muscles. We sketch here one possible mechanism (which does not exclude the involvement of other phenomena). The main protein forming the cytoskeletal filaments is actin, a roughly globular molecule that is abundant in the cytoplasm of the amoeba as well as in our own cells. Actin molecules can join together reversibly to form a filament, rather like a string of pearls, which can lengthen or shorten by the addition or removal of globular subunits. The protrusion of the pseudopodia is thought to be generated, at least in part, by the elongation of actin filaments which, merging into parallel bundles, push the membrane forward. In addition to, or as an alternative to, this explanation, actin and its associated motor proteins have been proposed to produce a contractile force that drives flows of cytoplasm toward the advancing front by pressure. The change in the consistency of the cytoplasm from a more liquid to a more solid state is also a consequence of the aggregation of actin molecules. Other proteins interact with actin, forming cross-links between the filaments and thus generating a three-dimensional mesh. To picture this, think of what happens when meat broth cools and turns into jelly: the collagen molecules, separate from one another in the boiling broth, aggregate into filaments that create a three-dimensional mesh which stiffens the jelly. In the jelly the mesh forms with randomly oriented filaments, whereas in our tissues, thanks to the work of cells, collagen aggregates into robust parallel bands, the tendons. Once the pseudopod has elongated, certain membrane proteins form anchorages to the substratum. What factors stimulate the formation of the actin filaments? What is "anterior" or "posterior" for an amoeba? We know that amoebae are sensitive to signals in their environment (positive or negative tropisms); for example, they are attracted by molecules released by sources of food. These molecules bind receptor proteins on the membrane of the amoeba and trigger a molecular response that stimulates the aggregation of globular actin into filaments.
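The gradient-following behaviour just described can be caricatured in a few lines of code: a point "amoeba" samples the attractant concentration in several directions, extends a pseudopod toward the highest reading, and pulls itself that way. This is a minimal illustrative sketch; the concentration field, step size and set of directions are invented here and are not measured properties of Amoeba proteus.

```python
import math

# Toy chemotaxis: a point "amoeba" climbs a chemical gradient by repeatedly
# extending a pseudopod in the direction where the attractant is strongest.

def attractant(x: float, y: float) -> float:
    # Invented concentration field: a food source at (10, 5) with smooth fall-off.
    return 1.0 / (1.0 + (x - 10.0) ** 2 + (y - 5.0) ** 2)

def step(pos, reach=0.5, directions=8):
    """Sample candidate pseudopod tips and move toward the best one."""
    x, y = pos
    candidates = []
    for k in range(directions):
        a = 2.0 * math.pi * k / directions
        tip = (x + reach * math.cos(a), y + reach * math.sin(a))
        candidates.append((attractant(*tip), tip))
    return max(candidates)[1]   # anchor the winning pseudopod and pull the body there

pos = (0.0, 0.0)
for _ in range(40):
    pos = step(pos)
print(f"final position: ({pos[0]:.2f}, {pos[1]:.2f})  (food source assumed at (10, 5))")
```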
It is interesting to note that the molecules that stimulate actin aggregation in the amoeba are the same ones that perform this task in our white blood cells and in many other motile cells. It is therefore the chemical signal coming from the environment that stimulates the protrusion of the pseudopods and determines which will be the anterior pole of the cell. When the signal disappears, the pseudopods stop elongating in that direction because the actin filaments disassemble.

The next step requires dragging or pushing the cell body toward the pseudopod: here the motor proteins (myosins) come into play. Like true molecular motors they "walk" along the actin filaments and move the cytoskeletal scaffolding of the cytoplasm toward the advancing tip, while the old protein anchorages to the substrate detach. This interaction between actin and myosins is similar to the one operating in our muscles. Of course, the elongation of the filaments by the addition of new subunits, like the activity of molecular motors such as myosin, requires energy. As for most energy-requiring cellular processes, it is supplied by the well-known molecule ATP, an energy-rich molecule continuously produced by the metabolism of all cells. Even fragments of cytoplasm isolated from the cell body exhibit movements in the presence of ATP. Confirmation of the essential role of actin filaments in the locomotion of the amoeba comes from experiments in which amoebae were given a drug (cytochalasin) that prevents the formation of actin filaments: under these conditions the amoeba can no longer emit pseudopods or move.

It may seem surprising that the same molecular mechanisms operate in the movements of the amoeba, in our white blood cells, and in our muscles. These similarities can be explained by the fact that the mechanisms of cellular movement evolved in the distant ancestor which, almost a billion years ago, gave rise both to the evolutionary line of the protozoa and to the line that would eventually lead to human beings. Is the actin of the amoeba, then, the same as human actin? Certainly not identical, since the actin of the common ancestor accumulated evolutionary changes (mutations) over almost a billion years, evolving toward the actin of the modern amoeba on one side and toward that of modern humans on the other; but its basic functional characteristics have remained the same.

The amoeba is a predatory organism that feeds by engulfing other protists or bacteria through phagocytosis. The process is made possible by its ability to perceive the presence of prey through chemical or contact signals and, above all, to emit pseudopods with which it surrounds the microorganism and encloses it in a membrane vesicle (the phagocytic vacuole, or phagosome). For a more accurate picture of the phenomenon we must keep its three-dimensionality in mind and think of the pseudopods as lips that close around the prey to take it into the "mouth", that is, into the phagosome. It is intuitive that the membrane surrounding the phagosome comes from the cell membrane, so every phagocytosis subtracts material from the cell's envelope.
If we consider that in one day an amoeba probably carries out on average 100 endocytoses, in each of which it internalizes about 10% of its own membrane surface, we can calculate that every day the equivalent of 10 cell surfaces is removed. This is made possible by concurrent exocytosis, which returns to the surface the membrane of old phagosomes or newly formed membrane. To understand this continuous recycling of membranes within a cell, and the transfer of the contents of vesicles, we must remember that a membrane is a fluid film: the behaviour of a cell and of its systems of vesicles can be compared to that of oil drops in a dish full of water. The drops can merge, uniting their contents, or split apart and then merge again. In the same way, vesicles can detach from the cell membrane and later fuse with it again, transferring their contents and their membrane in a continuous interchange.

The phagocytosed material is then sent for digestion: the phagosome fuses with other membranous vesicles containing digestive enzymes functionally equivalent to those of our digestive system. Since these enzymes work properly only in a very acidic environment, molecular pumps in the membrane of the phagosome actively transport hydrogen ions into the vesicle (acidity corresponds to a high concentration of hydrogen ions). The resulting vesicle, which is acidic and contains both the phagocytosed material and the digestive enzymes, is called a lysosome and serves as the cell's digestive apparatus. When the enzymes have completed their work, the lysosome contains digested material useful to the amoeba, together with waste products and undigested material. At this point transporter proteins in the lysosomal membrane transfer the useful substances into the cytoplasm (for instance sugars or amino acids produced by digestion), while the lysosome approaches the cell membrane and fuses with it, discharging the waste. In no case does the lysosome break open and pour all of its contents into the cytoplasm; if that happened, the cell would digest itself! The comparison with our own feeding is easy: ingested food is digested by the enzymes of the stomach and intestine, the useful substances obtained from digestion are absorbed by the intestinal wall and transferred to the blood, and the waste is expelled with the feces. A break in the wall of the intestine or stomach (perforation) has extremely serious consequences.

While phagocytosis allows the amoeba to take in bulky structures such as bacteria or protists, the organism can also obtain nutrients from the environment through a process called pinocytosis (from the Greek "πίνω" (pino) = I drink). Pinocytosis is based on principles similar to phagocytosis, since it encloses water and dissolved macromolecules in membranous vesicles that are then sent to the lysosome. Finally, it should be remembered that the cell membrane of the amoeba possesses sophisticated mechanisms of selective permeability that allow single small molecules, such as sugars, to pass from the outside into the cytoplasm without the formation of vesicles.

The contractile vacuole and water regulation

Living in freshwater environments, the amoeba must, in order to survive, face the problem of regulating its own water content.
In fact, the cytoplasm can be regarded as a very concentrated solution of organic substances (such as proteins or sugars) and inorganic ones, separated by the cell membrane from an aqueous environment with a low concentration of solutes and therefore richer in water. Since the cell membrane is permeable to water, the high internal concentration of solutes draws water inward by osmosis. In the absence of regulating mechanisms, the inevitable result would be swelling of the cell and possibly its lysis (rupture). The amoeba copes with this problem thanks to a cellular organelle, the so-called contractile vacuole, a membranous vesicle that stores the excess water and then expels it from the cell.

The mechanism by which the vacuole accumulates water long remained obscure; its recent explanation has also revealed other functions of this structure. The membrane of the vacuole is equipped with water-permeable channels and with molecular pumps that actively transport various substances into it, above all hydrogen ions (protons). Thanks to this proton pumping, the interior of the vacuole has a very high ion concentration which, by osmosis, attracts water from the cytoplasm. In this way the water that had entered the cell because of the osmotic pull of the cytoplasm accumulates, again by osmosis, in the vacuole. (Considering the gradient of solute concentration, which is minimal in the external environment, intermediate in the cytoplasm, and maximal in the vacuole, water moves from outside into the cytoplasm and from the cytoplasm into the vacuole.) The vacuole then moves into contact with the cell membrane and opens to the outside, discharging its content of water and solutes.

In reality, the membrane of the vacuole carries, besides the hydrogen-ion pumps, other transporters that accumulate in it various substances, for instance metabolic wastes such as carbon dioxide, urea, or uric acid, which are expelled when the vacuole empties its contents out of the cell. The vacuole therefore acts not only as an organelle regulating the water content but also as an excretory organelle that eliminates the wastes of metabolism. The expulsion of the vacuole's contents implies that its membrane fuses with the cell membrane, so at this point the vacuole disappears: a new one forms "ex novo" through the production of a new vesicle. Obviously, the transport of hydrogen ions and of other substances into the vacuole requires energy, supplied by ATP.

The term "contractile vacuole" suggests a capacity for active contraction, an idea prompted by the observations of microscopists who had noted its periodic changes in volume. It now seems that these changes are due mainly to the accumulation of water rather than to active contraction. In amoebae living in sea water, which do not have the problem of swelling since sea water has a higher salt concentration than the cytoplasm, the vacuole is absent (these amoebae have, conversely, the opposite problem of dehydration).
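To make the direction of these water movements concrete, here is a minimal sketch that ranks the three compartments described above by total solute concentration and reports which way water tends to move across each membrane (toward the side with more solutes). The numerical concentrations are invented for the illustration; they are not measured values for Amoeba proteus.

```python
# Minimal sketch of the osmotic gradient described above.
# Solute concentrations (arbitrary units) are assumed values, chosen only
# to respect the ordering: pond water < cytoplasm < contractile vacuole.
compartments = {
    "pond water": 5,
    "cytoplasm": 100,
    "contractile vacuole": 300,
}

def water_flow(side_a, side_b, solutes):
    """Across a water-permeable membrane, water moves toward the side with the
    higher solute concentration (the osmotic effect described in the text)."""
    if solutes[side_a] == solutes[side_b]:
        return f"no net water movement between {side_a} and {side_b}"
    source, sink = sorted((side_a, side_b), key=lambda c: solutes[c])
    return f"water moves from {source} into {sink}"

print(water_flow("pond water", "cytoplasm", compartments))           # into the cell
print(water_flow("cytoplasm", "contractile vacuole", compartments))  # into the vacuole
```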
Genetics and reproduction

As in other giant amoebae, the nucleus of Amoeba proteus contains a huge number of chromosomes, about 250 pairs, and an enormous quantity of DNA, estimated at about 290 billion bases (humans have 23 pairs of chromosomes and 3 billion bases). The very high values found in Amoeba proteus derive from polyploidy, that is, the chromosomes have repeatedly duplicated into identical pairs. These numbers, moreover, vary during the life cycle of the protozoan, and the functional meaning of these phenomena is not clear.

The reproduction of Amoeba proteus is a periodic process that repeats at intervals depending on the growth rate. An amoeba starts dividing when it reaches a sufficient size, for instance 0.2-0.3 mm. Reproduction is asexual and usually occurs by binary fission. Before dividing, the cell retracts its pseudopods and assumes a roundish shape; a mitotic division of the nucleus then takes place, followed by division of the cytoplasm by means of a cleavage furrow (driven by the same contractile proteins that operate in locomotion) and by the activity of the pseudopods, which drag the two daughter cells in opposite directions. The process usually lasts 30 to 60 minutes.

Phenomena of multiple fission have been described, which would occur in unfavourable conditions of poor feeding or dehydration: the amoeba rounds up and covers itself with a resistant, impermeable cyst that enables it to survive until conditions become favourable again. A multiple nuclear division then produces many nuclei that gather at the periphery of the cell and finally separate into many new cells, which emerge when the cyst rehydrates and bursts. Encystment also seems to occur independently of multiplication, and thus represents a way of surviving periods of adverse conditions, such as the drying up of a pool of water. Some microscopists have described phenomena suggestive of conjugation, but the meaning of these observations is not clear.

Symbiosis and parasitism

Even though Amoeba proteus is able to phagocytize and digest most bacteria, there is at least one species of bacterium, similar to Legionella (the pathogenic agent of Legionnaires' disease), that can survive inside the phagosome and avoid digestion. This bacterium establishes a symbiotic relationship with the amoeba, inducing in both species changes that make them mutually dependent. In some species of amoeba, infections have been found by viruses of exceptionally large size, such as Mimivirus or Pandoravirus. These viruses have diameters between 400 and 1000 nanometres (nm, billionths of a metre) and their DNA amounts to one or two million bases; by comparison, most viruses measure a few tens of nm and their DNA contains a few tens of thousands of bases.

Predators of the amoeba

Amoebae are preyed upon by various organisms, among them nematodes, small fishes, crustaceans, and mollusks, against which they can defend themselves by secreting toxic or repellent substances.

Amoebae pathogenic for humans

The pathogenicity of amoebae is not a completely clarified matter, since only a few strains of the potentially involved species prove permanently pathogenic for humans, and in most cases they easily lose their infectivity.
The most important genera are Naegleria and Balamuthia, which may cause serious encephalitis; Acanthamoeba, a cause of meningoencephalitis and of keratitis; and Hartmannella, a cause of keratitis. In addition, some amoebae may be infected by bacteria pathogenic for humans and then act as disease vectors.

The most important pathogenic species, however, is surely Entamoeba histolytica, the agent of intestinal amoebiasis, an intestinal infection that can be extremely serious, with even fatal complications. It is estimated that 50 million people in the world are infected by this amoeba, especially in tropical countries, with 50,000-100,000 deaths per year. In most cases the infection is asymptomatic, but in about 10% of cases it causes forms of dysentery. The amoebae invade the intestinal mucosa and destroy its cells, producing ulcerations with loss of blood that can cause severe anemia and possible perforations with extremely serious consequences. The ulcerations may allow the protozoans to reach the bloodstream and be transported to organs such as the liver or the brain, causing very serious, even deadly, abscesses. When they reach the colon, because of the progressive dehydration of the intestinal contents, the amoebae produce a cyst that is expelled with the feces and that represents the form able to infect other individuals. The mature cyst has four nuclei, and indeed its microscopic detection in the feces is the classical diagnostic method. The cyst is extremely resistant and can survive for a long time in water or in the ground.

Contagion occurs by ingestion of water or food contaminated by feces containing the cysts; when these reach the intestine of the new host they transform into the trophozoite, that is, the active amoeba able to multiply and invade the mucosa. Amoebiasis is a typical example of fecal-route infection, described in English as the "F cycle": feces, fingers, flies, food. The parasites present in the feces reach the food by way of the hands or of flies. Numerous anecdotes of multiple contagion are reported, among them that of a cook on a cruise ship who, evidently paying little attention to the hygiene of his hands, is said to have infected all the passengers! (Recall that in Anglo-Saxon countries public toilets display notices reading "Now wash your hands!").

To the genus Entamoeba belong numerous species, some of which (Entamoeba histolytica, Entamoeba dispar, Entamoeba moshkovskii, Entamoeba polecki, Entamoeba coli, and Entamoeba hartmanni) may live in the human intestine. Only Entamoeba histolytica is surely pathogenic. The differential diagnosis is based on the characteristics of the cyst, tetranuclear in Entamoeba histolytica, and mainly on immunological or molecular-biology methods.

Amoeba in popular culture

In the song "A Very Cellular Song" by The Incredible String Band, life is described from an amoeba's point of view: "Amoebas are very small, There's absolutely no strife Living the timeless life, I don't need a wife Living the timeless life. If I need a friend, I just give a wriggle split right down the middle and when I look there's two of me both as handsome as can be. Oh, here we go slithering, here we go slithering and squelching on …"

Synonyms: Volvox proteus Pallas, 1766; Proteus diffluens O.F. Müller, 1786; Chaos diffluens (O.F. Müller, 1786) Schaeffer, 1926.
<urn:uuid:f6e93d2d-9b9a-42f3-9b11-3c4265beb859>
CC-MAIN-2022-33
https://www.monaconatureencyclopedia.com/amoeba-proteus/?lang=en
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00299.warc.gz
en
0.937621
5,373
3.328125
3
The alkaline diet promotes consuming foods high in antioxidants and minerals. An acidic internal environment may contribute to health issues, including cancer and heart disease, so an alkaline diet emphasizes fresh fruits and vegetables, whole grains, lean meats, and healthy fats. There are many different diets out there, some excellent, some terrible, but a mainly plant-based alkaline diet is probably the best for lifespan and illness prevention. But don't just take my word for it: according to a 2012 study published in the Journal of Environmental Health, eating an alkaline diet may help reduce morbidity and mortality from various chronic illnesses and disorders, including hypertension, diabetes, arthritis, vitamin D insufficiency, and poor bone density, to mention a few.

What are the benefits of an alkaline diet? According to research, fresh vegetables, fruits, and unprocessed plant-based sources of protein, for example, result in a more alkaline urine pH level, which helps preserve healthy cells and balance vital mineral levels. In addition, because hormone levels may be affected by intermittent fasting and the keto diet, this is particularly relevant for women. Alkaline diets (also known as alkaline ash diets) have been proven to help prevent plaque development in blood vessels, prevent calcium from collecting in urine, avoid kidney stones, strengthen bones, decrease muscle atrophy or spasms, and much more.

An alkaline diet aims to help your body's fluids, such as blood and urine, maintain a healthy pH level. The alkaline ash diet, alkaline acid diet, acid ash diet, pH diet, and Dr. Sebi's alkaline diet are all names for the same diet (Dr. Sebi was an herbalist who created a plant-based version of the diet). The mineral density of the foods you consume plays a role in determining your pH. All living creatures and life forms on the planet depend on maintaining proper pH levels, and it is frequently stated that illness and disorder cannot thrive in a pH-balanced body.

The alkaline diet's foundations rest on the concepts of the acid ash theory. "The acid-ash hypothesis posits that protein and grain foods, with a low potassium intake, produce a diet acid load, net acid excretion (NAE), increased urine calcium, and calcium release from the skeleton, leading to osteoporosis," according to research published in the Journal of Bone and Mineral Research. The alkaline diet attempts to prevent this by carefully considering food pH values in order to minimize dietary acid intake.

Although some specialists may disagree with this assertion, almost all experts agree that a blood pH of 7.365-7.4 is required for human life. "Our bodies go to great lengths to maintain acceptable pH levels," writes Forbes Magazine. Depending on the time of day, your diet, what you last ate, and when you last went to the toilet, your pH may vary between 7.35 and 7.45. If you have electrolyte imbalances and eat too many acidic foods (also known as acid ash foods), your body's pH level will shift, resulting in greater "acidosis."

What Does "pH Level" Actually Mean?

pH stands for "potential of hydrogen." It is a measure of how acidic or alkaline our body's fluids and tissues are, graded on a scale of 0 to 14. The lower the pH of a solution, the more acidic it is; the higher the value, the more alkaline it is.
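Since the article leans heavily on the 0-14 pH scale, it may help to recall the definition behind it: pH is the negative base-10 logarithm of the hydrogen-ion concentration, pH = -log10[H+]. The short sketch below, using invented concentrations, shows how a tenfold change in hydrogen ions moves the scale by exactly one unit.

```python
import math

def ph(hydrogen_ion_mol_per_litre):
    """pH is the negative base-10 logarithm of the hydrogen-ion concentration."""
    return -math.log10(hydrogen_ion_mol_per_litre)

# Illustrative concentrations in mol/L (not measurements of any real body fluid).
print(round(ph(1e-7), 2))   # 7.0  -> neutral
print(round(ph(4e-8), 2))   # 7.4  -> slightly alkaline, roughly the blood value cited above
print(round(ph(1e-6), 2))   # 6.0  -> ten times more hydrogen ions than neutral
```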
A pH of about 7 is considered neutral, but since the optimum human body pH is around 7.4, a slightly alkaline pH is considered the healthiest. The stomach is the most acidic part of the body, and pH values vary from one part of the body to another. Even small changes in pH can create serious problems for living organisms. For example, the pH of the ocean has decreased from 8.2 to 8.1 because of environmental problems such as increased CO2 deposition, and many ocean life forms have suffered significantly as a result. The pH level is also essential for plant growth, and it therefore has a significant impact on the mineral content of the foods we consume. Minerals in the ocean, soil, and human body act as buffers to keep pH levels in check; as acidity rises, mineral levels fall.

What Is an Alkaline Diet?

Here is some history on acidity and alkalinity in the human diet, along with some important facts about how alkaline diets may help:

- When it comes to the overall acid load of the human diet, researchers think "there have been significant shifts from hunter-gatherer cultures to the present." Thanks to the agricultural revolution and the subsequent widespread industrialization of our food supply, our food now contains considerably less potassium, magnesium, and chloride, and significantly more salt, than diets of the previous 200 years.
- The kidneys normally keep our electrolyte levels (calcium, magnesium, potassium, and sodium) in check. When we are exposed to excessively acidic substances, these electrolytes are used to counteract the acidity.
- According to a study published in the Journal of Environmental Health, the potassium-to-sodium ratio in most people's diets has shifted dramatically. Potassium used to outnumber sodium by ten to one, but today the ratio is about one to three: on average, people following a "Standard American Diet" now ingest three times as much sodium as potassium. This contributes significantly to the acidity of the body's internal environment.
- Many children and adults nowadays eat a high-sodium diet that is deficient in antioxidants, fiber, critical vitamins, magnesium, and potassium. Furthermore, processed fats, simple carbohydrates, sodium, and chloride are abundant in the average Western diet.
- All of these dietary changes have increased the incidence of "metabolic acidosis." In other words, many people's pH levels are no longer optimal. In addition, many people have poor nutrient intakes and problems such as potassium and magnesium deficiency.

So, why is an alkaline diet beneficial to your health? Because alkaline foods supply essential elements that help prevent premature aging and the loss of organ and cellular function. As described further below, the advantages of an alkaline diet may include helping to prevent the deterioration of tissues and bones, which can be harmed when excess acidity depletes essential minerals.

1. Protects bone density and muscle mass

Mineral intake is critical for the formation and maintenance of bone. Research indicates that the more alkalizing fruits and vegetables a person consumes, the less likely they are to develop sarcopenia, a loss of bone strength and muscle mass, as they age. An alkaline diet may support bone health by regulating the ratio of minerals such as calcium, magnesium, and phosphate that are essential for building bones and preserving lean muscle mass.
The diet may also aid in the synthesis of growth hormones and the absorption of vitamin D, which protects bones while also reducing the risk of many other chronic illnesses.

2. Reduces hypertension and stroke risk

Reduced inflammation and increased growth hormone production are two of the anti-aging benefits of an alkaline diet. These have been shown to improve cardiovascular health and to protect against high cholesterol, hypertension (high blood pressure), kidney stones, stroke, and even memory loss.

3. Reduces inflammation and chronic pain

Research has linked an alkaline diet to lower levels of chronic pain. Persistent acidosis has been associated with back pain, headaches, muscle spasms, menstrual symptoms, inflammation, and joint pain. In a study performed by the Society for Minerals and Trace Elements Germany, patients with chronic back pain who were given an alkaline supplement daily for four weeks reported substantial reductions in pain as assessed by the "Arhus low back pain assessment scale."

4. Helps prevent magnesium deficiency and improves vitamin absorption

Magnesium is needed for the proper functioning of hundreds of enzyme systems and physiological processes. Unfortunately, many people are magnesium deficient, which can result in heart problems, muscle pain, migraines, sleep problems, and anxiety. Magnesium is also required to activate vitamin D and avoid vitamin D insufficiency, which is critical for overall immune and endocrine health.

5. Supports immune function and may aid cancer prevention

When cells lack the minerals they need to dispose of waste properly or to oxygenate the body adequately, the whole body suffers. Mineral loss impairs vitamin absorption, while toxins and pathogens accumulate in the body and weaken the immune system. Is it true that eating an alkaline diet may help you avoid cancer? While the subject is debatable and unproven, a study published in the British Journal of Radiology reported indications that cancerous cell death (apoptosis) was more likely in an alkaline body. Cancer prevention is thought to be linked to an alkaline shift in pH owing to an adjustment in electric charges and the release of essential components of proteins. Alkalinity has also been reported to be more favorable for certain chemotherapeutic drugs that need a higher pH to work correctly, and to reduce inflammation and the risk of illnesses such as cancer.

6. May help you maintain a healthy weight

Although the diet isn't exclusively for weight reduction, following an alkaline diet meal plan may help you avoid becoming obese. Because the diet may lower leptin levels and inflammation, limiting acid-forming foods and eating more alkaline-forming foods can make it easier to lose weight, affecting both your appetite and your fat-burning ability. Because alkaline-forming foods are anti-inflammatory, following an alkaline diet allows your body to reach normal leptin levels and to feel satisfied eating the correct number of calories. A keto alkaline diet, which is low in carbohydrates and rich in healthy fats, is one of the best approaches to try if weight reduction is one of your primary objectives.

How to Comply

How can you keep your body alkaline? Here are some essential points to remember while eating an alkaline diet:
1. Buy organic alkaline foods wherever feasible

According to experts, one essential aspect of eating an alkaline diet is to learn about the kind of soil your food was grown in, since fruits and vegetables cultivated in organic, mineral-dense soil are more alkalizing. Research shows that the type of soil in which plants are grown has a significant impact on their vitamin and mineral content, which means that not all "alkaline foods" are created equal. For the greatest overall availability of critical nutrients in plants, the pH of the soil should be between 6 and 7. Acidic soils with a pH below 6 may have lower calcium and magnesium levels, whereas soils with a pH above 7 may contain chemically inaccessible iron, manganese, copper, and zinc. In addition, the healthiest soil is well rotated, organically maintained, and exposed to wildlife and grazing cattle.

2. Consume alkaline water

The pH of alkaline water ranges from 9 to 11. Distilled water is perfectly safe to drink, and although water filtered by reverse osmosis is slightly acidic, it is still preferable to tap water or purified bottled water. Alkalinity can also be increased by adding pH drops, lemon or lime, or baking soda to your water.

3. Check your pH level (optional)

If you are curious about your pH level, you can test it with strips bought at your local health food shop or pharmacy. Either saliva or urine may be used; the second urine of the morning gives the most accurate results. You compare the colors on your test strip with a chart included in the kit. The best times to test your pH during the day are one hour before a meal and two hours after a meal. If you are testing your saliva, aim for a reading of 6.8 to 7.2.

Alkaline Foods to Eat

A highly alkaline diet is mainly plant-based, but you don't have to be a strict vegetarian to follow it. Here is a list of the foods to focus on most:

- Fresh fruits and vegetables are the best sources of alkalinity. Which are the healthiest choices; are bananas alkaline, for instance, or broccoli? Mushrooms, citrus, dates, raisins, spinach, grapefruit, tomatoes, avocado, summer black radish, alfalfa grass, barley grass, cucumber, kale, jicama, wheatgrass, broccoli, oregano, garlic, ginger, green beans, endive, cabbage, celery, red beet, watermelon, figs, and ripe bananas are just a few of the top choices.
- Raw foods in general: Try to eat as many of your vegetables as possible uncooked. Uncooked fruits and vegetables are considered biogenic, or "life-giving," and cooking depletes alkalizing minerals. Try juicing or gently boiling fruits and vegetables to increase your raw food consumption.
- Almonds, navy beans, lima beans, and most other beans are excellent sources of plant protein.
- Alkaline water.
- Green beverages: Drinks made from powdered green vegetables and grasses are rich in alkaline-forming foods and chlorophyll. Chlorophyll is structurally similar to our blood and helps to alkalize it.
- Sprouts, wheatgrass, Kamut, fermented soy such as natto or tempeh, and seeds are good additions to an alkaline diet.

What foods should you stay away from on an alkaline diet? Acidic foods include the following:

- Processed foods, which are high in sodium chloride (table salt), which constricts blood vessels and creates acidity.
- Cold cuts and conventional cuts of meat
- Processed cereals (such as corn flakes)
- Caffeinated and alcoholic beverages
- Oats and whole-wheat products: all grains, whole or not, produce acidity in the body, and most of Americans' plant-food allotment is consumed in the form of processed corn or wheat.
- Calcium-rich dairy products such as milk, which are associated with some of the highest rates of osteoporosis because they induce acidity in the body. When your blood becomes excessively acidic, it tries to correct the pH by drawing calcium (a more alkaline material) from your bones, so eating enough alkaline green leafy vegetables is the best way to help prevent osteoporosis.
- Walnuts and peanuts
- Pasta, rice, bread, and packaged grain goods, to name just a few

What other behaviors can make your body acidic? Among the worst offenders are:

- Use of alcohol and other drugs
- High caffeine consumption
- Overuse of antibiotics
- Artificial sweeteners
- Chronic stress
- Declining nutrient levels in food as a result of industrial farming
- A diet low in fiber
- Lack of physical activity
- Excessive consumption of animal foods (from non-grass-fed sources)
- Excess hormones from food, health and beauty products, and plastics
- Exposure to chemicals and radiation from household cleaners, construction materials, computers, mobile phones, and microwaves
- Food coloring and preservatives
- Herbicides and pesticides used to control weeds and pests
- Poor chewing and eating habits
- Refined and processed foods
- Shallow breathing

Paleo Diet vs. Alkaline Diet

- The Paleo and alkaline diets share many features and many advantages, such as a decreased risk of nutritional deficiencies, reduced inflammation, improved digestion, and weight loss or weight management.
- Similarities between the two include removing added sugars, lowering the intake of pro-inflammatory omega-6 fatty acids, eliminating grains and processed carbohydrates, reducing or eliminating dairy and milk intake, and boosting the intake of fruits and vegetables.
- However, if you intend to follow the Paleo diet, there are a few things to keep in mind. The Paleo diet excludes all dairy products, including yogurt and kefir, which for many people are significant sources of probiotics and nutrients. It also does not necessarily stress eating organic foods or grass-fed/free-range meat in moderate or limited amounts.
- In addition, the Paleo diet includes a lot of meat, pork, and seafood, all of which have their own set of disadvantages.
- In general, overeating animal protein leads to acidity rather than alkalinity: as their amino acids are broken down, beef, chicken, cold cuts, seafood, and pork can cause sulfuric acid to accumulate in the blood. To keep your pH level balanced, try to buy the highest-quality animal products you can and diversify your protein intake.

Risks and Consequences

Some items on the "very acidic" list, such as eggs and walnuts, may surprise you. Although they are acid-forming in the body, don't let that stop you from eating them: they remain valuable because they provide various other health benefits, such as antioxidants and omega-3 fatty acids. The bottom line is that we are aiming for a healthy balance.
It is possible to become overly alkaline in terms of pH, and eating some acidic foods is both normal and beneficial. The real problem is a lack of alkaline-promoting foods rather than consuming too much acid from nutritious, whole foods. You'll be fine if you eat a variety of simple, whole foods (mainly vegetables and fruit) and minimize your intake of packaged goods.

- What does it mean to eat an alkaline diet? It is a mainly plant-based diet of whole foods that has a beneficial effect on blood and urine pH levels.
- Better heart health, stronger bones, reduced pain, help with weight loss, and reversal of nutritional deficiencies are some of the health advantages attributed to an alkaline diet.
- Whole fruits and vegetables, raw foods, green juices, legumes, and nuts are part of an alkaline diet.
- High-sodium foods, processed cereals, too much meat and animal protein, added sweeteners, and conventional milk are acidic items that should be avoided on an alkaline diet.

Frequently Asked Questions

What do you eat on an alkaline diet?
A: Fresh fruits and vegetables are the best sources of alkalinity.

What are the most alkaline foods?
A: The most alkaline foods are fruits and vegetables with a high pH level.

How can I make my body more alkaline?
A: Eat more green vegetables, drink plenty of water, and avoid processed foods.

The information on this website has not been evaluated by the Food & Drug Administration or any other medical body. We do not aim to diagnose, treat, cure or prevent any illness or disease. Information is shared for educational purposes only. You must consult your doctor before acting on any content on this website, especially if you are pregnant, nursing, taking medication, or have a medical condition.
<urn:uuid:efc95716-4332-4fdf-b45e-345f8fa1b718>
CC-MAIN-2022-33
https://www.well-beingsecrets.com/alkaline-diet-foods/
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571150.88/warc/CC-MAIN-20220810070501-20220810100501-00699.warc.gz
en
0.930736
4,208
2.640625
3
1. A First Look at Macroeconomics

2. Real GDP (real gross domestic product)

3. GDP and the Circular Flow of Expenditure and Income

The circular flow illustrates the equality of income, expenditure, and the value of production. The circular flow diagram shows the transactions among four economic agents (households, firms, governments, and the rest of the world) in two aggregate markets: goods markets and factor markets. In the goods market, households, firms, governments, and foreigners buy goods and services. For analytical purposes, we can categorize spending by these four agents in the calculation of GDP:

· The total payment for goods and services by households in the goods markets is consumption expenditure, C.
· The purchases of new plants, equipment, and buildings and the additions to inventories are investment, I.
· Governments buy goods and services, called government expenditure or G, from firms.
· Firms sell goods and services to the rest of the world, exports or X, and buy goods and services from the rest of the world, imports or M. Exports minus imports are called net exports, X - M.

In factor markets households receive income from selling the services of resources to firms. The total income received is aggregate income. It includes wages paid to workers, interest for the use of capital, rent for the use of land and natural resources, and profits paid to entrepreneurs; retained profits can be viewed as part of household income, lent back to firms.

GDP Equals Expenditure Equals Income

Aggregate expenditure equals C + I + G + (X - M). Aggregate expenditure equals GDP because all the goods and services that are produced are sold to households, firms, governments, or foreigners. (Goods and services not sold are included in investment as inventories and hence are "sold" to the producing firm.) Because firms pay out as income everything they receive as revenue from selling goods and services, aggregate income equals aggregate expenditure equals GDP. The "Gross" in gross domestic product reflects the fact that the investment in GDP is gross investment, so part of it goes to replace depreciating capital. Net domestic product subtracts depreciation from GDP.

4. The Expenditure Approach

5. The Income Approach

The income approach measures GDP as the sum of compensation of employees, net interest, rental income, corporate profits, and proprietors' income. This sum equals net domestic income at factor costs. To obtain GDP, indirect taxes (taxes paid by consumers when they buy goods and services) minus subsidies, plus depreciation, are added. Finally, any discrepancy between the expenditure approach and the income approach is included in the income approach as a "statistical discrepancy."

6. Nominal GDP and Real GDP

The market value of production, and hence GDP, can increase either because the production of goods and services is higher or because the prices of goods and services are higher. Real GDP allows the quantities of production to be compared across time. Real GDP is the value of final goods and services produced in a given year when valued at the prices of a reference base year. Nominal GDP is the value of the final goods and services produced in a given year valued at the prices that prevailed in that same year.

Calculating Real GDP

Traditionally, real GDP is calculated using the prices of the reference base year (the year in which real GDP = nominal GDP).
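Two small, unrelated numerical illustrations of the ideas above may be useful: the first applies the expenditure identity Y = C + I + G + (X - M); the second values one year's production at current-year and at base-year prices to separate nominal from real GDP. All figures are invented for the example.

```python
# Illustration 1: the expenditure approach, Y = C + I + G + (X - M).
C, I, G, X, M = 700, 200, 250, 120, 150   # invented figures (billions of dollars)
gdp_expenditure = C + I + G + (X - M)
print("GDP by the expenditure approach:", gdp_expenditure)   # 1120

# Illustration 2 (separate example): nominal vs. real GDP for a toy two-good economy.
quantities = {"apples": 100, "haircuts": 50}            # this year's production
prices_this_year = {"apples": 2.0, "haircuts": 15.0}    # prices prevailing this year
prices_base_year = {"apples": 1.5, "haircuts": 12.0}    # reference base-year prices

nominal_gdp = sum(quantities[g] * prices_this_year[g] for g in quantities)
real_gdp = sum(quantities[g] * prices_base_year[g] for g in quantities)
print("Nominal GDP:", nominal_gdp)   # 950.0 -> this year's quantities at this year's prices
print("Real GDP:", real_gdp)         # 750.0 -> this year's quantities at base-year prices
```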
7. The Uses and Limitations of Real GDP

One measure of the standard of living over time is real GDP per person, or real GDP divided by the population. Real GDP per person tells us the value of goods and services that the average person can enjoy. The value of real GDP when all the economy's labor, capital, land, and entrepreneurial ability are fully employed is called potential GDP. Potential GDP grows at a steady pace because the quantities of the factors of production and their productivity grow at a steady pace. Fluctuations in the pace of expansion of real GDP are called the business cycle: periodic but irregular increases and decreases in total production and other measures of economic activity. Each cycle passes through four phases: trough, expansion, peak, and recession.

Real GDP can be used to compare living standards across countries, but two problems arise in doing so. First, the real GDP of one country must be converted into the same currency unit as the real GDP of the other country. Second, the goods and services in both countries must be valued at the same prices. Relative prices differ across countries, so goods and services should be weighted accordingly. For example, if more prices are lower in China than in the United States, China's prices put a lower value on China's production than U.S. prices would. If all the goods and services produced in China are valued at U.S. prices, a more valid comparison of real GDP in the two countries can be made. A comparison using the same prices is called a comparison at purchasing power parity (PPP) prices.

Limitations of Real GDP

Some of the factors that influence the standard of living are not part of real GDP. Omitted from GDP are:

Household Production: Household production is not counted in GDP, so as more services, such as childcare, shift into the marketplace, the measured growth rate overstates the true growth of total economic activity.
Underground Economic Activity: If the underground economy is a reasonably stable proportion of all economic activity, the level of GDP will be too low but the growth rate will be accurate.
Health and Life Expectancy: Better health and long life are not directly included in real GDP.
Leisure Time: Increases in leisure time lower the measured economic growth rate, but we value our leisure time and are better off with it.
Environmental Quality: Pollution and resource depletion are not deducted from real GDP, so environmental quality is not reflected in the measured growth rate.
Political Freedom and Social Justice: Political freedom and social justice are not measured by real GDP.

Economic growth leads to large changes in standards of living from one generation to the next. Economic growth rates vary across countries and across time, and different economic theories attempt to explain these variations.

I. The Basics of Economic Growth

The economic growth rate is the annual percentage change of real GDP. This growth rate is equal to:

Real GDP growth rate = [(Real GDP in current year - Real GDP in previous year) / Real GDP in previous year] × 100

The standard of living depends on real GDP per person, which is real GDP divided by the population. The growth rate of real GDP per person can be calculated with the same formula, substituting real GDP per person. The growth rate of real GDP per person is also approximately equal to the growth rate of real GDP minus the population growth rate.

8. Economic Growth Trends

9. How Potential GDP Grows

Potential GDP is the amount of real GDP produced when the quantity of labor employed is the full-employment amount. To determine potential GDP we use the aggregate production function and the aggregate labor market.
The aggregate production function is the relationship between real GDP and the quantity of labor employed when all other influences on production remain the same. The figure shows an aggregate production function. The additional real GDP produced by an additional hour of labor, when all other influences on production remain the same, is subject to the law of diminishing returns, which states that as the quantity of labor increases, other things remaining the same, the additional output produced by each extra unit of labor decreases. The production function in the figure displays the law of diminishing returns because its shape shows that, as additional labor is employed, the additional GDP produced diminishes.

10. The Labor Market

The demand for labor is the relationship between the quantity of labor demanded and the real wage rate. The real wage rate equals the money wage rate divided by the price level: the real wage rate is the quantity of goods and services that an hour of labor earns, while the money wage rate is the number of dollars that an hour of labor earns. Because of diminishing returns, firms hire more labor only if the real wage rate falls to reflect the fall in the additional output that labor produces. There is therefore a negative relationship between the real wage rate and the quantity of labor demanded, so, as illustrated in the figure, the demand for labor curve is downward sloping.

The supply of labor is the relationship between the quantity of labor supplied and the real wage rate. An increase in the real wage rate encourages people to work more hours and also increases labor force participation. These factors create a positive relationship between the real wage rate and the quantity of labor supplied, so, as illustrated in the figure, the supply of labor curve is upward sloping.

In the labor market, the real wage rate adjusts to equate the quantity of labor supplied to the quantity of labor demanded. In equilibrium, the labor market is at full employment. In the figure, the equilibrium quantity of employment is 200 billion hours per year. Potential GDP is the level of production produced by the full-employment quantity of labor. Combined with the production function shown in the previous figure, the labor market equilibrium of 200 billion hours per year means that potential GDP is $12 trillion.

11. What Makes Potential GDP Grow?

Potential GDP grows when the supply of labor grows and when labor productivity grows. The supply of labor increases if average hours per worker increase, if the employment-to-population ratio increases, or if the working-age population increases. Only increases in the working-age population can cause persisting economic growth, and persisting increases in the working-age population result from population growth. An increase in population increases the supply of labor, which shifts the labor supply curve rightward. The real wage rate falls and the quantity of employment increases. The increase in employment leads to a movement along the production function to a higher level of potential GDP.

An increase in labor productivity increases the demand for labor and shifts the production function upward. As the top figure illustrates, the increase in the demand for labor from LD0 to LD1 raises the real wage rate. The bottom figure shows that the production function has shifted upward, from PF0 to PF1. An increase in labor productivity leads to an increase in real GDP per person and raises the standard of living.
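The following sketch puts numbers on the production-function logic described above. The square-root functional form is an assumption made only for illustration (the text does not specify one); its scale is chosen so that 200 billion hours of labor yields roughly the $12 trillion of potential GDP mentioned in the text, and each additional block of labor adds less output than the previous one, as the law of diminishing returns requires.

```python
import math

def aggregate_production(labor_billions_of_hours):
    """Hypothetical aggregate production function, in trillions of dollars.
    The square-root form is an illustrative assumption, not taken from the text."""
    return 0.85 * math.sqrt(labor_billions_of_hours)

# Each extra 50 billion hours of labor adds less real GDP than the previous 50.
previous_output = aggregate_production(0)
for hours in (50, 100, 150, 200, 250):
    output = aggregate_production(hours)
    extra = output - previous_output
    print(f"{hours:>3} billion hours -> real GDP {output:5.2f} trillion (extra {extra:4.2f})")
    previous_output = output

# Potential GDP is read off the function at the full-employment quantity of labor.
full_employment_hours = 200
print("Potential GDP is about", round(aggregate_production(full_employment_hours), 2), "trillion dollars")
```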
12. Why Labor Productivity Grows

Preconditions for Labor Productivity Growth

The institutions of markets, property rights, and monetary exchange create incentives for people to engage in activities that create economic growth and are preconditions for growth in labor productivity. Market prices send signals to buyers and sellers that create incentives to increase or decrease the quantities demanded and supplied. Property rights create incentives to save and invest in new capital and to develop new technologies. Monetary exchange creates incentives for people to specialize and trade.

Persistent growth requires that people face incentives to create:

Physical Capital Growth: Saving and investing in new capital expands production possibilities.
Human Capital Growth: Investing in human capital speeds growth because human capital is a fundamental source of increased productivity and technological advance.
Technological Advances: Technological change, the discovery and application of new technologies and new goods, has made the largest contribution to economic growth.

13. Growth Theories

Classical growth theory is the view that real GDP growth is temporary and that when real GDP per person rises above the subsistence level, a population explosion eventually brings real GDP per person back to the subsistence level. A problem with the classical theory is that, in reality, population growth is largely independent of the economic growth rate.

Neoclassical growth theory is the proposition that real GDP per person grows because technological change induces a level of saving and investment that makes capital per hour of labor grow. A technological advance increases productivity, so real GDP per person increases. The technological advance also increases expected profit, so investment and saving increase and capital accumulates, which raises real GDP per person further. As more capital is accumulated, however, projects with lower rates of return must eventually be undertaken, so the incentives to invest and save weaken. Eventually capital stops increasing and economic growth stops, although the improvement in technology permanently increases real GDP per person. A problem with the neoclassical theory is that it predicts that real GDP per person in different nations will converge to the same level, but in reality convergence does not seem to be taking place for all nations.

New growth theory holds that real GDP per person grows because of the choices people make in the pursuit of profit, and that growth can persist indefinitely. The theory emphasizes that discoveries result from choices, discoveries bring profit, and competition then destroys the profit. It also stresses that knowledge can be used by everyone at no cost and is not subject to diminishing returns. The ability to innovate means that new technologies are developed and capital is accumulated as in the neoclassical model. The production function shifts upward and real GDP per person increases. The pursuit of profit means that more technological advances occur and the production function continues to shift upward. Nothing stops the upward shifts of the production function, because the lure of profit is always present. The ability to innovate determines how capital accumulation feeds into technological change and the resulting growth path for the economy. Productivity and real GDP grow continually.

14. Employment and Unemployment

The working-age population is the total number of people aged 16 years and over who are not in jail, a hospital, or some other form of institutional care.
The labor force is the sum of the employed and the unemployed. Unemployment occurs when someone who wants a job cannot find one. To be counted as unemployed, a person must be available for work and must be in one of three categories: (1) without work but having made specific efforts to find a job within the previous four weeks; (2) waiting to be called back to a job from which he or she has been laid off; or (3) waiting to start a new job within 30 days. Unemployment is thus a state in which a person does not have a job but is available for work, willing to work, and has made some effort to find work within the previous four weeks.

Why Unemployment Is a Problem

Unemployment is a serious economic, social, and personal problem for two main reasons: lost production and incomes, and lost human capital. The loss of a job brings an immediate loss of income and production, a temporary problem. A prolonged spell of unemployment can bring permanent damage through the loss of human capital.

15. Three Labor Market Indicators

The unemployment rate is the percentage of the people in the labor force who are unemployed. It equals (Number of people unemployed ÷ Labor force) × 100, where Labor force = Number of people employed + Number of people unemployed. The employment-to-population ratio is the percentage of people of working age who have jobs. It equals (Number of people employed ÷ Working-age population) × 100. The labor force participation rate is the percentage of the working-age population who are members of the labor force. It equals (Labor force ÷ Working-age population) × 100.

Marginally attached workers are people who are available and willing to work but currently are neither working nor looking for work. These workers often temporarily leave the labor force during a recession, which decreases the labor force participation rate. Because they are no longer counted as unemployed, marginally attached workers also lower the unemployment rate. A discouraged worker is a marginally attached worker who has stopped looking for work because of repeated failures to find a job.

16. Types of Unemployment

Frictional unemployment is the unemployment that arises from normal labor turnover. These workers are searching for jobs, and the unemployment related to this search process is a permanent phenomenon in a dynamic, growing economy. Frictional unemployment increases when more people enter the labor market or when unemployment compensation payments increase.

Structural unemployment is the unemployment that arises when changes in technology or international competition change the skills needed to perform jobs or change the locations of jobs. Sometimes there is a mismatch between the skills demanded by firms and the skills offered by workers, especially when there are great technological changes in an industry. Structural unemployment generally lasts longer than frictional unemployment. Minimum wages and efficiency wages also create structural unemployment.

Cyclical unemployment is the fluctuating unemployment over the business cycle. It increases during a recession and decreases during an expansion.

Natural unemployment is the unemployment that arises from frictions and structural change when there is no cyclical unemployment, that is, when all the unemployment is frictional and structural. Natural unemployment as a percentage of the labor force is called the natural unemployment rate.
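A quick numerical illustration ties together the indicators of section 15 and the unemployment categories of section 16 before turning to full employment. All counts are invented, and the split of unemployment into frictional, structural, and cyclical components is simply assumed rather than estimated.

```python
# Invented counts, in millions of people.
employed = 152.0
frictional, structural, cyclical = 4.0, 2.4, 1.6     # assumed components of unemployment
unemployed = frictional + structural + cyclical      # 8.0

working_age_population = 250.0
labor_force = employed + unemployed                  # 160.0

unemployment_rate = unemployed / labor_force * 100
employment_to_population = employed / working_age_population * 100
participation_rate = labor_force / working_age_population * 100
natural_rate = (frictional + structural) / labor_force * 100   # excludes the cyclical part

print(f"Unemployment rate:              {unemployment_rate:.1f}%")         # 5.0%
print(f"Employment-to-population ratio: {employment_to_population:.1f}%")  # 60.8%
print(f"Labor force participation rate: {participation_rate:.1f}%")        # 64.0%
print(f"Natural unemployment rate:      {natural_rate:.1f}%")              # 4.0%
```

Because cyclical unemployment is positive in this example, the unemployment rate (5.0%) exceeds the natural rate (4.0%), so this illustrative economy is operating below full employment.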
Full employment is defined as a situation in which the unemployment rate equals the natural unemployment rate.

17. What Determines the Natural Unemployment Rate?

The Age Distribution of the Population: An economy with a young population has a large number of new job seekers every year and so has a high level of frictional unemployment.

The Scale of Structural Change: The scale of structural change is sometimes small, but sometimes there is a technological upheaval. When the pace and volume of technological change increase, or when change driven by international competition intensifies, natural unemployment rises.

The Real Wage Rate: The natural unemployment rate increases if the minimum wage is raised above the equilibrium wage rate or if more firms use an efficiency wage (a wage set above the equilibrium real wage to enable the firm to attract the most productive workers, motivate them to work hard, and discourage them from quitting).

Unemployment Benefits: Unemployment benefits increase the natural unemployment rate by lowering the opportunity cost of job search.

Real GDP and Unemployment Over the Business Cycle

When the economy is at full employment, the unemployment rate equals the natural unemployment rate and real GDP equals potential GDP. When the unemployment rate is greater than the natural unemployment rate, real GDP is less than potential GDP; and when the unemployment rate is less than the natural unemployment rate, real GDP is greater than potential GDP. The gap between real GDP and potential GDP is called the output gap.

18. The Price Level, Inflation, and Deflation

The price level is the average level of prices, which can be rising, falling, or stable. Inflation is a process of rising prices: it occurs when the price level persistently rises, while deflation occurs when the price level persistently falls. The inflation rate is measured as the percentage change in the price level.

Why Inflation and Deflation Are Problems

Unexpected inflation or deflation is a problem for society because it redistributes income and wealth. Unexpected inflation benefits employers and borrowers; unexpected deflation benefits workers and lenders. Both motivate people to divert resources from producing goods and services to forecasting and protecting themselves against inflation or deflation. Unexpected deflation hurts businesses and households that are in debt (borrowers), who in turn cut their spending; a fall in total spending brings a recession and rising unemployment. Hyperinflation is an inflation rate of 50 percent a month or higher.

19. The Consumer Price Index

A consumer price index (CPI) measures changes in the price level of a market basket of consumer goods and services purchased by households. The CPI is a statistical estimate constructed using the prices of a sample of representative items collected periodically. Sub-indexes and sub-sub-indexes are computed for different categories and sub-categories of goods and services and are combined to produce the overall index, with weights reflecting their shares in the total consumer expenditure covered by the index. It is one of several price indices calculated by most national statistical agencies, and the annual percentage change in a CPI is used as a measure of inflation.
A CPI can be used to index (that is, adjust for the effect of inflation) the real value of wages, salaries, and pensions, to regulate prices, and to deflate monetary magnitudes to show changes in real values. In most countries the CPI, along with the population census and the USA National Income and Product Accounts, is one of the most closely watched national economic statistics. The Consumer Price Index (CPI) is a measure of the average of the prices paid by urban consumers for a fixed “basket” of consumer goods and services. The CPI is defined to equal 100 for a period called the reference base period. The CPI has four biases that lead it to overstate the inflation rate:
New goods bias: new goods are often more expensive than the goods they replace.
Quality change bias: sometimes price increases reflect quality improvements (safer cars, improved health care) and should not be counted as part of inflation.
Commodity substitution bias: consumers substitute away from goods and services with large relative price increases, but a fixed basket does not capture this.
Outlet substitution bias: when prices rise, people use discount stores more frequently and convenience stores less frequently.
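To make the fixed-basket construction and the commodity substitution bias concrete, here is a minimal sketch with an invented two-good basket; the goods, quantities, and prices are hypothetical, and the reference base period CPI is set to 100 as described above.

```python
# Hypothetical fixed basket: quantities bought in the reference base period.
basket = {"coffee": 10, "bread": 20}                 # units per month

base_prices    = {"coffee": 4.00, "bread": 2.00}     # reference base period prices
current_prices = {"coffee": 6.00, "bread": 2.10}     # coffee's relative price rose sharply

def basket_cost(prices, quantities):
    """Cost of buying the given quantities at the given prices."""
    return sum(prices[good] * qty for good, qty in quantities.items())

cost_base    = basket_cost(base_prices, basket)      # 10*4.00 + 20*2.00 = 80.00
cost_current = basket_cost(current_prices, basket)   # 10*6.00 + 20*2.10 = 102.00

cpi = cost_current / cost_base * 100                 # CPI = 100 in the base period by construction
print(f"CPI: {cpi:.1f}, measured inflation: {cpi - 100:.1f}%")   # 127.5, 27.5%

# Commodity substitution bias: real consumers shift away from coffee when its
# relative price jumps. If they actually buy 6 coffees and 24 loaves instead,
# their spending rises by less than the fixed basket suggests, so a fixed-basket
# CPI overstates the increase in the cost of living.
substituted = {"coffee": 6, "bread": 24}
print(f"Cost of substituted bundle: {basket_cost(current_prices, substituted):.2f}")  # 86.40 vs 102.00
```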
I. Mitral Regurgitation: What every physician needs to know. Primary mitral regurgitation The mitral valve is composed of its leaflets, the chordae tendineae, the mitral annulus, and the papillary muscles that link the chordae to the left ventricle (LV). Primary mitral regurgitation (PMR) may occur from pathology of any of these valve components. In developed countries the most common cause of MR is myxomatous valve degeneration associated with mitral valve prolapse. Other causes include infective endocarditis, rheumatic heart disease, and collagen vascular disease. Primary MR is a distinct entity from functional or secondary MR (Please see the chapter on Functional Mitral Regurgitation) wherein the mitral valve itself is normal but regurgitation results from ventricular disease caused by either myocardial infarction or cardiomyopathy. Mitral regurgitation exerts a volume overload on the LV, compensated by eccentric hypertrophy and remodeling (Figure 1, Figure 2). The pure volume overload of MR is nearly unique in Cardiology. In most other forms of volume overload (aortic regurgitation, anemia, heart block, etc.), the extra volume is pumped into the aorta where increased stroke volume increases systolic blood pressure. Thus, most LV volume overloads are in fact combined pressure and volume overloads. In MR the extra volume is delivered into the left atrium (LA) and systolic pressure tends to be lower than normal. This pattern of overload causes the LV to remodel as a thin-walled enlarged chamber that permits supernormal diastolic function, allowing the LV in chronic compensated MR to fill at a normal filling pressure. Increased preload in concert with normal afterload allows the enlarged LV to deliver increased total stroke volume and a normal forward stroke volume. A common misconception is that afterload in MR is reduced. A reasonable measure of afterload is systolic wall stress (σ), where σ = P × r/2h and P = LV pressure, r = radius and h = thickness. While the extra pathway for LV ejection into the LA does unload the ventricle, the remodeling pattern of a large LV radius and a thin LV wall offset the extra ejection pathway for unloading and return stress (afterload) to normal. Mild to moderate MR may be tolerated indefinitely provided the magnitude of MR does not increase. Severe MR may also be compensated by the mechanisms noted above, but eventually the overload damages the LV and heart failure ensues. II. Diagnostic Confirmation: Are you sure your patient has Mitral Regurgitation? A. History, Part 1: Pattern Recognition. Mitral regurgitation is usually first identified when the patient’s provider hears a systolic murmur. In compensated MR, the patient may be asymptomatic. In acute MR, before compensation from remodeling has occurred or in chronic severe decompensated MR, typical symptoms of left heart failure arise. These include progressive dyspnea on exertion, orthopnea and paroxysmal nocturnal dyspnea. When pulmonary hypertension complicates decompensated MR, the symptoms of right heart failure, including fatigue, ascites and edema, may develop. B. History, Part 2: Prevalence. It is estimated that 4 million Americans have some degree of MR. Most are asymptomatic and unaware of the disease. A history of rheumatic fever, collagen vascular disease or of infective endocarditis may raise the suspicion that MR might be present. C. History, Part 3: Competing diagnoses that can mimic Mitral Regurgitation. 
Heart diseases that cause a systolic murmur are those most likely to be confused with MR. Conversely, in acute MR the rapid rise in LA pressure due to the filling of that chamber from both the pulmonary veins and from the regurgitant flow limits the gradient for LV to LA transfer of blood. In such cases, the MR murmur may be short and unimpressive and the diagnosis missed. D. Physical Examination Findings. Mitral valve prolapse As noted above, mitral valve prolapse due to myxomatous valve degeneration is the most common cause of primary MR in developed countries. Mitral prolapse has often been termed the “click-murmur” syndrome because of the physical findings it produces, i.e., a mid-systolic click followed by a late systolic murmur. The click arises from the tightening of the redundant chordae tendineae as the valve closes. The murmur commences when the leaflets move past their point of coaptation. Maneuvers that reduce LV volume, such as standing or the Valsalva maneuver, lengthen the valve apparatus and cause the click to occur earlier and the murmur to become louder and more holosystolic. The opposite is true of conditions that increase LV volume, such as lying down or squatting. Chronic severe MR As valve degeneration proceeds, prolapse becomes more severe and the MR worsens until it eventually becomes pan-systolic. In chronic severe MR, LV to LA flow begins when LV pressure exceeds LA pressure, hemodynamics that occur almost immediately at the onset of systole. Thus, the murmur of MR is holosystolic. It usually radiates to the axilla, but may also radiate to the top of the head or to the elbow. Murmur intensity does not vary very much with changes in cycle length because while longer R-R intervals allow for increased LV filling and thus greater stroke volume, aortic pressure is lower after a long pause. In turn, lower aortic pressure preferentially increases aortic flow so that regurgitant flow does not change and murmur intensity does not increase. The high volume of blood stored in the LA during systole often causes an S3 when it is discharged into the LV during early diastole. An S3 in severe primary MR is more likely a sign of volume overload than of heart failure, but it might signal the presence of both. In chronic severe MR, LV enlargement moves the apical beat downward and to the left of its normal position. If pulmonary hypertension develops, a right ventricular impulse may be felt over the sternum, and the pulmonic component of the second heart sound (P2) may become accentuated. E. What Diagnostic Tests Should Be Performed? Because atrial fibrillation often accompanies MR, an EKG should be obtained to establish baseline cardiac rhythm. The EKG may also demonstrate evidence of left ventricular hypertrophy and left atrial enlargement, but both conditions are much more accurately assessed during echocardiography. Increasing levels of natriuretic peptides may presage worsening of MR, but specific levels that should be used in management decisions are not yet established. However, increasing levels from baseline are worrisome and can be viewed as a supportive evidence for moving toward valve intervention. What imaging studies (if any) should be ordered to help establish the diagnosis? How should the results be interpreted? Echocardiography forms the mainstay of diagnosis. Transthoracic imaging is usually adequate to completely assess the heart in MR. 
Echocardiography can establish the mechanism of MR, its severity, and its effect on LA and LV size and function, all of which provide data that determine management. Criteria for assessing severity are given in Table I. Quantification of MR is often performed using the proximal isovelocity surface area (PISA) method. As the regurgitant flow approaches the mitral valve from the ventricular side, the converging flow often assumes a hemispheric shape (Figure 3). The area (a) of a hemisphere is 2πr², where r is the radius of the hemisphere. Multiplying this area by the aliasing velocity (determined by the echo machine settings) yields the regurgitant flow rate (f); dividing f by the peak velocity of the MR jet yields the effective regurgitant orifice area. However, in general it is unwise to grade the severity of MR on a single parameter, and most experienced echocardiographers use an integrated approach incorporating several parameters into severity assessment. It is especially important to consider chamber volume in assessing MR severity. In severe MR the LV must enlarge to compensate for stroke volume lost to regurgitation. The LA must also enlarge to accommodate the volume overload at a tolerable filling pressure. Failure of the LA and LV to enlarge indicates that either the MR is not severe or it is acute (in which case the patient should be symptomatic). LV ejection fraction, volumes, and dimensions should be measured carefully, as these will be used to help time surgical intervention (see below). Most patients experience symptoms during exercise, yet echocardiography is usually performed at rest. If symptoms seem out of proportion to the less severe findings of the resting echocardiogram, exercise echo may demonstrate worsened MR with exercise and/or exercise-induced pulmonary hypertension that help explain the clinical picture. As LV and LA filling pressures increase, pulmonary and right ventricular pressures must also increase to provide the force needed to fill the left heart; however, pulmonary hypertension adversely affects prognosis. If any tricuspid regurgitation is present, the tricuspid jet velocity (v) can be used to estimate RV pressure. The gradient (g) across the tricuspid valve is calculated using the modified Bernoulli equation, g = 4v². By adding an estimated right atrial pressure to the tricuspid gradient, peak RV pressure, and thus peak pulmonary artery pressure, can be estimated. In some cases, transthoracic echocardiography yields images that are inadequate to assess the MR patient fully. If patient characteristics such as size, chest configuration, etc., preclude diagnostic-quality images from being obtained, transesophageal echocardiography almost always provides high-quality diagnostic cardiac images. On the other hand, it should not be assumed that 2-D echo will automatically add to the overall assessment of valve anatomy. Three-dimensional echocardiography can also be useful because it replicates the “surgeon's view,” i.e., the view the surgeon will see when he/she opens the LA and looks down on the valve. Cardiac magnetic resonance imaging may be utilized to precisely measure LA and LV volume and ejection fraction if those data are needed in clinical decision-making. If, after non-invasive imaging, discordance remains between symptom severity and the apparent MR severity, invasive hemodynamic data should help resolve the issues. Direct measurement of LV filling pressure at rest or during exercise confirms or refutes a hemodynamic basis for symptoms.
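The two quantitative estimates just described (regurgitant orifice area from the PISA radius, and RV systolic pressure from the tricuspid jet) reduce to simple arithmetic. The sketch below uses hypothetical Doppler measurements only to show how the numbers combine; it is an illustration, not a substitute for the integrated echocardiographic assessment described above.

```python
import math

# --- PISA estimate of effective regurgitant orifice area (hypothetical values) ---
pisa_radius_cm = 1.0            # radius of the hemispheric flow-convergence zone
aliasing_velocity_cm_s = 40.0   # velocity at which the color map aliases (machine setting)
peak_mr_velocity_cm_s = 500.0   # peak MR jet velocity by continuous-wave Doppler (5 m/s)

flow_ml_s = 2 * math.pi * pisa_radius_cm**2 * aliasing_velocity_cm_s   # flow = hemisphere area x velocity
eroa_cm2 = flow_ml_s / peak_mr_velocity_cm_s                            # orifice area = flow / peak jet velocity
print(f"Regurgitant flow ~ {flow_ml_s:.0f} mL/s, EROA ~ {eroa_cm2:.2f} cm^2")   # ~251 mL/s, ~0.50 cm^2

# --- Modified Bernoulli estimate of RV (~pulmonary artery) systolic pressure ---
tr_jet_velocity_m_s = 3.0       # peak tricuspid regurgitation velocity
estimated_ra_pressure = 10.0    # assumed right atrial pressure, mm Hg

tr_gradient = 4 * tr_jet_velocity_m_s**2                 # g = 4v^2, in mm Hg
rv_systolic_pressure = tr_gradient + estimated_ra_pressure
print(f"TR gradient ~ {tr_gradient:.0f} mm Hg, RV systolic ~ {rv_systolic_pressure:.0f} mm Hg")   # 36, 46
```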
Left ventriculography, which images LV-to-LA flow (rather than the flow velocity visualized during echocardiography), adds another way of assessing MR severity. If surgical correction of MR is contemplated and the patient has risk factors for coronary artery disease, coronary arteriography is performed during catheterization. In acute severe MR, as might be encountered after a ruptured chorda tendineae, there has been no time for compensatory enlargement of the LV; thus, forward stroke volume and cardiac output are reduced. The increased volume filling the small, unprepared LA causes high left atrial pressure that is referred to the lungs, causing pulmonary congestion. As noted above, the rapid systolic rise in LA pressure limits the pressure gradient from LV to LA; thus, the murmur of acute MR may be short and unimpressive. A high index of suspicion raised by unexplained heart failure and a new murmur leads to an echocardiogram, confirming the diagnosis. Diuretics may be used to lower LA pressure, but reduced cardiac output with reduced renal perfusion may limit their use. Arterial vasodilators such as sodium nitroprusside may be administered to reduce aortic impedance in an effort to preferentially increase aortic outflow while decreasing the amount of regurgitation. However, reduced cardiac output and hypotension may limit the use of vasodilators. In such cases, aortic balloon counterpulsation is used to reduce afterload and regurgitant flow while augmenting mean arterial blood pressure. In most cases, urgent surgery for mitral valve repair is necessary to restore normal circulation.
Laboratory Tests to Monitor Response to, and Adjustments in, Management.
Implicit in the strategy for timing of surgery noted above is the need to conduct surveillance for changes in symptom status or in LV function that would then trigger the need for surgery. Patients with severe asymptomatic MR should have an office visit that includes an echocardiogram at least yearly. If the LV is approaching the “trigger” benchmarks, the frequency should be increased to every six months. Following surgery, an echo should be performed to establish the baseline function of the repair or of the inserted valve prosthesis. Some clinicians obtain this exam prior to discharge, while others wait for the first post-operative office visit. Afterwards, the echo need not be repeated unless there is a change in symptoms or in the physical examination.
Chronic Severe MR
Primary mitral regurgitation is a mechanical problem wherein an anatomic abnormality of the mitral valve permits backflow into the LA. The only effective management for MR is its mechanical correction. There is no evidence from large trials to support the use of afterload-reducing agents to treat chronic MR, and most of the data that do exist are disappointing. Conversely, patients with hypertension should receive standard therapy for that condition. Mild to moderate MR is usually tolerated indefinitely as long as it does not worsen. However, because MR causes ventricular enlargement, which in turn places additional stress on the valve, MR tends to beget worsening MR. Severe MR may be tolerated for several years, but most patients reach a “trigger” for surgery within about 6 years of initial diagnosis. These triggers are demarcations in the disease which, if left unattended, lead to worsened prognosis; they include the onset of symptoms, evidence of LV dysfunction, and evidence of pulmonary hypertension.
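As an illustration only, the surveillance triggers listed above can be thought of as a simple checklist. The numeric benchmarks used here (EF below 60%, an end-systolic dimension of 40 mm or more, and a resting pulmonary artery systolic pressure above roughly 50 mm Hg) are the commonly cited thresholds, the first two of which are quantified in the next paragraph; this is a schematic sketch, not a validated clinical algorithm.

```python
def surgery_triggers_reached(symptomatic: bool,
                             ejection_fraction_pct: float,
                             lv_end_systolic_dim_mm: float,
                             pa_systolic_pressure_mmhg: float) -> list:
    """Return the triggers for mitral surgery that have been met (illustrative thresholds only)."""
    triggers = []
    if symptomatic:
        triggers.append("onset of symptoms")
    if ejection_fraction_pct < 60:
        triggers.append("LV dysfunction: EF < 60%")
    if lv_end_systolic_dim_mm >= 40:
        triggers.append("LV dysfunction: end-systolic dimension >= 40 mm")
    if pa_systolic_pressure_mmhg > 50:
        triggers.append("pulmonary hypertension")
    return triggers

# Example: an asymptomatic patient whose EF has drifted down to 58%.
print(surgery_triggers_reached(False, 58, 38, 35))   # ['LV dysfunction: EF < 60%']
```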
Because the increased preload of MR increases ejection fraction (EF), a “normal” EF in MR is probably about 70%. When the EF declines to less than 60%, or when the LV becomes unable to contract down to an end-systolic dimension of 40 mm, prognosis following corrective surgery worsens, suggesting that these benchmarks mark the onset of LV dysfunction. If the patient is seen for the first time after symptoms or LV dysfunction have developed, a short course (3-6 months) of standard heart failure therapy that includes ACE inhibitors and beta blockers is probably advisable before proceeding to mitral valve surgery. However, even if the patient improves clinically on this therapy, the indications for surgery have been met and there is no evidence that surgery should be delayed any further.
Mitral Valve Repair versus Mitral Valve Replacement
Preventing systolic regurgitation of blood from the LV to the LA is only one of the mitral valve's functions. The mitral valve is an integral part of the LV; the mitral apparatus aids in LV contraction and helps maintain the efficient prolate-ellipsoid shape of the LV. Destruction of the mitral valve apparatus with concomitant mitral valve replacement causes increased operative mortality, poorer postoperative LV function, and decreased postoperative survival when compared with mitral valve repair. Thus, in the treatment of MR, when possible the mitral valve should be conserved and repaired rather than replaced. Reparability depends upon valve pathology and surgical skill. In general, rheumatic valves are difficult to repair and the durability of the repair is undependable. Simple posterior leaflet prolapse is the easiest to repair and the most durable; bileaflet myxomatous disease falls between the two. Surgical expertise varies widely: some surgeons are able to repair most non-rheumatic MR, while others have never performed a mitral valve repair. Some valves will ultimately require replacement; even then, the natural connections between the native valve and the papillary muscles can be maintained, which helps preserve LV function. In cases of severe MR where the anatomy makes a successful valve repair almost certain, many would argue for early surgery before symptoms or evidence of LV dysfunction develop. This strategy can be carried out in experienced centers with less than 1% operative mortality and a high likelihood of a durable repair, thereby obviating the need for repetitive follow-up visits and echocardiographic observation. The strategy only works if a successful repair is carried out; if an unwanted mitral valve replacement ensues, with its higher risk of both operative mortality and long-term prosthetic valve complications, the strategy fails.
Common Pitfalls and Side Effects of Management
Many providers remain unaware of the nuances of therapy. Common mistakes in management include: Treating symptoms medically. Despite evidence that the presence of even mild symptoms worsens prognosis, many providers add diuretics or other therapies to improve symptoms. However, there is no evidence that medical therapy improves prognosis even if symptoms improve. Assessing MR severity “by eyeball”. In some cases, it is entirely obvious that the patient has severe MR from all aspects of the clinical presentation.
However, in other cases, visualizing only the MR jet on color-flow Doppler examination may overestimate or underestimate MR severity because all of the available clues are not considered. Under-appreciation of the importance of mitral repair. Many practitioners are willing to accept mitral replacement when repair could be performed by surgeons more skilled in the technique.
IV. Management with Co-Morbidities
As noted above, the only effective management of MR is mechanical correction. However, very elderly patients or those with advanced liver, lung, or renal disease may be at unacceptable risk for mitral surgery. Recently, experimental approaches using transcatheter (percutaneous or transapical) methods for mitral repair or replacement have been attempted. One, the MitraClip, is now approved in the United States for mitral repair in inoperable patients with severe, symptomatic primary MR. The technique employs trans-septal deployment of a device that clips the midportions of the two mitral leaflets together, reducing MR from severe to moderate or mild in most cases. The technique is less effective than surgery in eliminating MR but is safer in this group of patients and has provided excellent relief of symptoms for up to 5 years. While the current indications for its use are limited as described above, in Europe it is most often used to treat secondary MR, and trials for that use are currently underway in the United States.
What's the Evidence for Specific Management and Treatment Recommendations?
Carabello, BA. “Mitral Regurgitation: Basic pathophysiologic principles”. Mod Concepts Cardiovasc Dis. vol. 57. 1988. pp. 53-8. (Summarizes the pathophysiology of MR.)
Wisenbaugh, T, Spann, JF, Carabello, BA. “Differences in myocardial performance and load between patients with similar amounts of chronic aortic versus chronic mitral regurgitation”. J Am Coll Cardiol. vol. 3. 1984. pp. 916-23. (Emphasizes the “pure” nature of the volume overload in MR.)
Corin, WJ, Murakami, T, Monrad, ES, Hess, OM, Krayenbuehl, HP. “Left ventricular passive diastolic properties in chronic mitral regurgitation”. Circulation. vol. 83. 1991. pp. 797-807. (Demonstrates that diastolic function in MR is super-normal.)
Corin, WJ, Monrad, ES, Murakami, T, Nonogi, H, Hess, OM, Krayenbuehl, HP. “The relationship of afterload to ejection performance in chronic mitral regurgitation”. Circulation. vol. 76. 1987. pp. 59-67. (Dispels the misconception that MR unloads the LV.)
Rozich, JD, Carabello, BA, Usher, BW, Kratz, JM, Bell, AE, Zile, MR. “Mitral valve replacement with and without chordal preservation in patients with chronic mitral regurgitation: mechanisms for differences in postoperative ejection performance”. Circulation. vol. 86. 1992. pp. 1718-26. (Demonstrates the importance of the mitral apparatus in aiding LV function.)
Ghoreishi, M, Evans, CF, DeFillippi, CR, Young, CA, Griffith, BP, Gammie, JS. “Pulmonary hypertension adversely affects short- and long-term survival after mitral valve operation for mitral regurgitation: implications for timing of surgery”. J Thorac Cardiovasc Surg. vol. 142. 2011. pp. 1439-52. (Demonstrates the risks of pulmonary hypertension in MR.)
Enriquez-Sarano, M, Tajik, AJ, Schaff, HV, Orszulak, TA, Bailey, KR, Frye, RL. “Echocardiographic prediction of survival after surgical correction of organic mitral regurgitation”. Circulation. vol. 90. 1994. pp. 830-7. (One of the sources for using an EF falling toward 60% as a trigger for mitral surgery.)
Tribouilloy, C, Grigioni, F, Avierinos, JF, Barbieri, A, Rusinaru, D, Szymanski, C, MIDA Investigators. “Survival implication of left ventricular end-systolic diameter in mitral regurgitation due to flail leaflets: a long-term follow-up multicenter study”. J Am Coll Cardiol. vol. 54. 2009. pp. 1961-8. (Cements LV systolic dimension as a predictor of outcome in MR.)
Gillinov, AM, Mihaljevic, T, Blackstone, EH, George, K, Svensson, LG, Nowicki, ER. “Should patients with severe degenerative mitral regurgitation delay surgery until symptoms develop”. Ann Thorac Surg. vol. 90. 2010. pp. 481-8. (Shows that even mild preoperative symptoms in MR affect post-operative outcome negatively.)
Enriquez-Sarano, M, Schaff, HV, Orszulak, TA. “Valve repair improves the outcome of surgery for mitral regurgitation: a multivariate analysis”. Circulation. vol. 91. 1995. pp. 1022-8. (Demonstrates the survival benefit of mitral repair compared to replacement.)
Bolling, SF, Li, S, O'Brien, SM, Brennan, JM, Prager, RL, Gammie, JS. “Predictors of mitral valve repair: clinical and surgeon factors”. Ann Thorac Surg. vol. 90. 2010. pp. 1904-12. (Reveals the importance of repair volume in predicting the likelihood of a successful repair.)
Glower, DD, Kar, S, Trento, A, Lim, DS, Bajwa, T, Quesada, R, Whitlow, PL, Rinaldi, MJ, Grayburn, P, Mack, MJ, Mauri, L, McCarthy, PM, Feldman, T. “Percutaneous mitral valve repair for mitral regurgitation in high-risk patients: results of the EVEREST II study”. J Am Coll Cardiol. vol. 64. 2014. pp. 172-81.
Feldman, T, Kar, S, Elmariah, S. “Randomized comparison of percutaneous repair and surgery for mitral regurgitation: 5-year results of EVEREST II”. J Am Coll Cardiol. vol. 66. 2015. pp. 2844-54. (Five-year follow-up of the EVEREST trial that randomized MR patients to surgical vs MitraClip repair.)
ERBS (Earth Radiation Budget Satellite)
ERBS is a pioneering Earth radiation budget satellite mission within NASA's ERBE (Earth Radiation Budget Experiment) Research Program, a three-satellite mission designed to investigate how energy from the sun is absorbed and re-emitted by the Earth. The ERBE payload, three identical sets of two instruments each, represents a new generation of NASA/LaRC radiometers, first flown on ERBS (launch Oct. 5, 1984), then on NOAA-9 (launch Dec. 12, 1984), and on NOAA-10 (launch Sept. 17, 1986). Objective: Measurement of reflected and emitted energy at various spatial levels (this process of absorption and re-radiation is one of the principal drivers of the Earth's weather patterns). The observations provided useful data for studies of geographical-seasonal variations of the Earth's radiation budget. The ERBS/ERBE data were compared and combined with the ERBE data collected on the NOAA-9 and -10 spacecraft. ERBS is a three-axis momentum-biased spacecraft (1º pointing using magnetic torquers and a hydrazine backup system) built for NASA/GSFC by Ball Aerospace Systems of Boulder, CO. The ERBS spacecraft structure is composed of three basic modules: the keel module, the base module, and the instrument module. The keel module is a torque-box structure providing structural support for the propulsion system, the solar array panels, and the antennas. The base module provided a direct interface to the Shuttle.
Figure 1: Line drawing of the ERBS spacecraft (image credit: Friedrich Porsch, DLR)
ERBS subsystems included TCS (Thermal Control Subsystem); EPS (Electrical Power Subsystem), which consisted of two 50 Ah, 22-cell NiCd batteries; PCU (Power Unit) for regulating electrical power; C&DH (Command and Data Handling Subsystem) for collection of instrument and spacecraft data for real-time transmission; CS (Communications Subsystem), which included NASA TDRSS transponders and antennas; ADCS (Attitude Determination and Control Subsystem), a three-axis momentum system for attitude pointing, maneuvers, and thruster control; and OAPS (Orbit Adjust Propulsion System), a hydrazine propulsion system used for raising ERBS to its operating orbit after launch from the Shuttle. ERBS was held primarily in the Earth-pointing mode for most of the mission. S/C size: 4.6 m x 3.5 m x 1.5 m; S/C mass = 2307 kg; nominal power = 470 W. Design life of 2 years with a goal of 3 years. 1) 2) 3) 4)
Figure 2: Functional diagram of the ADCS (image credit: NASA)
Figure 3: Functional block diagram of the C&DH subsystem (image credit: NASA)
Figure 4: Block diagram of the electric power subsystem (image credit: NASA)
Launch: The launch of the free-flyer ERBS satellite took place on Oct. 5, 1984 on Space Shuttle flight STS-41G from KSC (Kennedy Space Center), FLA. The ERBS spacecraft was deployed from Space Shuttle Challenger on October 5, 1984 (first day of flight) using the Canadian-built RMS (Remote Manipulator System), a mechanical arm about 16 m in length. On deployment, one of the solar panels of ERBS initially failed to extend properly, so mission specialist Sally Ride had to shake the satellite with the remotely controlled robotic arm and finally place the stuck panel into sunlight for it to extend. ERBS was in fact the first spacecraft to be launched and deployed by a Space Shuttle mission.
Orbit: Non-sun-synchronous circular orbit, nominal altitude = 610 km, inclination = 57º, period = 96.8 min.
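As a quick consistency check on the orbit parameters quoted above, the 96.8-minute period follows directly from Kepler's third law at the 610 km nominal altitude. The short sketch below is only that sanity check; the mean Earth radius and gravitational parameter are standard values, not figures from this mission description.

```python
import math

MU_EARTH_KM3_S2 = 398_600.4   # Earth's gravitational parameter (standard value)
EARTH_RADIUS_KM = 6_371.0     # mean Earth radius (standard value)

altitude_km = 610.0                          # nominal ERBS altitude
semi_major_axis = EARTH_RADIUS_KM + altitude_km

# Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu)
period_s = 2 * math.pi * math.sqrt(semi_major_axis**3 / MU_EARTH_KM3_S2)
print(f"Orbital period ~ {period_s / 60:.1f} min")   # ~96.7 min, consistent with the quoted 96.8 min
```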
Hence, ERBE coverage on ERBS is restricted to ±57º latitude (with regard to reflected and emitted measurements from Earth). - Note: The orbit of ERBS slowly dropped to an altitude of 585 km over a period of 15 years (1999).
Figure 5: Time series of ERBS altitude (km) from 1985 to 2000 (image credit: NASA) 5)
RF communications: Downlink data rate of 128 kbit/s; uplink via TDRSS using an electrically steerable spherical array antenna (ESSA). The ERBS mission is being controlled and operated at NASA/GSFC; the ERBS data are being processed, archived, and distributed at NASA/LaRC.
Figure 6: Artist's view of the deployed ERBS spacecraft in orbit (image credit: NASA)
• SAGE-II, built by the Ball Aerospace Systems Group, added 18 years to the original mission life of twenty-four months on ERBS and continues to give scientists a wealth of data on the chemistry and motions of the upper troposphere and stratosphere. 9)
• Of the ERBE instrument package, only the nonscanner portion was still functioning (the scanner portion failed Feb. 28, 1990).
• The ERBE nonscanner instrument is operational at all times, except during spacecraft yaw maneuvers, when it is powered off to conserve spacecraft battery power. The yaw maneuvers take place approximately every 36 days to align the spacecraft solar panels with the sun.
• As of June 2001 the ERBS S/C was operational on only one battery. Thereafter, Ball Aerospace engineers were able to adjust the previously decommissioned NiCd battery and bring it back to service.
• ERBE is no longer capable of movement to the internal calibration position.
• The ERBE nonscanner unit has proven its real value, since it outlived its design lifetime of 3 years by a factor of 5. Although the ERBS spacecraft would probably function beyond 2010, de-orbit plans were being developed for 2003. 10)
• In July-August 2003 a series of de-orbiting ΔV maneuvers were performed on the ERBS satellite in preparation for decommissioning. However, the decommissioning was halted, and the de-orbiting maneuvers ended on August 15, 2003, with the satellite in an orbit of 507 km x 559 km. Current NASA plans call for ERBS operations until the summer of 2005. 11)
• In particular, the ERBE observations have helped scientists worldwide to better understand how clouds and aerosols, as well as some chemical compounds in the atmosphere (so-called "greenhouse" gases), affect the Earth's daily and long-term weather (the Earth's "climate"). In addition, the ERBE data have helped scientists to better understand something as simple as how the amount of energy emitted by the Earth varies from day to night. These diurnal changes are also very important aspects of our daily weather and climate.
Sensor complement: (ERBE, SAGE-II)
Background: Although the first measurements of Earth's radiation budget were gathered with the ERB (Earth Radiation Budget) instrument flown on NASA's Nimbus-6 and -7 spacecraft (launched June 12, 1975 and Oct. 24, 1978, respectively), the ERBE instruments were able to provide more accurate and systematic parameters for estimating the Earth's radiation budget. Note: The analysis of the ERB data on Nimbus-6 failed to detect any irradiance variability because of degraded responses of the ERB radiometer. An improved version of ERB was subsequently flown on Nimbus-7. The TSI (Total Solar Irradiance) broadband data from ERB on Nimbus-7 are available for the period Oct. 1978 until the end of 1993.
This radiometer (on Nimbus-7) was stable enough to detect short-term and long-term solar irradiance variability. ERB on Nimbus-7 was the first long-term solar monitor utilizing the ESCC (Electrically Self-Calibrating Cavity) technique.
ERBE (Earth Radiation Budget Experiment): ERBE is a multimission instrument package of NASA/LaRC (PI: B. R. Barkstrom) with the objective of measuring the Earth's radiation budget [i.e., the balance between incoming energy from the sun and outgoing thermal (longwave) and reflected shortwave energy from the Earth]. The goals of ERBE are: 1) to understand the radiation balance between the Sun, Earth, atmosphere, and space; and 2) to establish an accurate, long-term baseline data set for detection of climate changes. Earth radiation budget data are fundamental to the development of realistic climate models and to the understanding of natural and anthropogenic perturbations of the climate system. 12) 13) 14) 15) 16) 17) 18) 19) 20) 21) 22)
The instrument package was developed and built by TRW, Redondo Beach, CA. Each radiometric package (ERBE) contained four Earth-viewing nonscanning active-cavity radiometers, three scanning thermistor bolometer radiometers, and a solar monitor. The ERBE instrument has a mass of 61 kg, an average power of 50 W, and an average data rate of 1.12 kbit/s. ERBE consists of a 'scanner' and a 'nonscanner' unit, providing measurements on several spatial and temporal scales.
• The ERBE nonscanner unit features four Earth-viewing channels and a solar monitor used for solar calibration measurements. The Earth-viewing channels have two spatial resolutions: a horizon-to-horizon view (wide FOV or WFOV), and a FOV limited to about 1000 km in diameter (also referred to as medium FOV, or MFOV). For both MFOV and WFOV there is a total spectral channel sensitive to all wavelengths, and a shortwave channel which uses a high-purity fused-silica filter dome to transmit only the shortwave radiation from 0.2 to 5 µm. All five channels of the nonscanner are active cavity radiometers. Data rate: 160 bit/s.
Figure 7: Schematic diagram of the ERBE solar monitor (image credit: NASA)
In addition, a state-of-the-art ground calibration facility was used, coupled with the complete inflight calibration system. The ground facility contained a MRBB (Master Reference Blackbody), an integrating sphere, and a reference solar monitor - as well as windows through which a solar simulator may be directed at the appropriate instrument. SAGE-II (Stratospheric Aerosol and Gas Experiment II): SAGE-II was built by Ball Aerospace; PI: M. P. McCormick, Hampton University, Hampton, VA. SAGE-II is an Earth limb-scanning grating spectrometer (with a Dall-Kirkham telescope, two-axis gimbaled system capable of rotating in azimuth). Objective: monitoring of concentrations and distributions of stratospheric aerosols, nitrogen dioxide, and water vapor. SAGE II has measured the decline in the amount of stratospheric ozone globally and over the Antarctic since the ozone hole was first described in 1985. The limb measurements of the Earth's upper troposphere and stratosphere are taken in the altitude range of 10-40 km. 23) 24) 25) The SAGE-II instrument is a seven-channel sun photometer using a Cassegrain-configured telescope, holographic grating, and seven silicon photodiodes, some with interference filters, to define the seven spectral channel bandpasses. Spectral range: 0.385 - 1.020 µm. Data rate = 6.3 kbit/s. Solar radiation is reflected off a pitch mirror into the telescope with an image of the Sun formed at the focal plane. The instrument's IFOV, defined by an aperture in the focal plane, is a 0.5 arcmin x 2.5 arcmin slit that produces a vertical resolution at the tangent point on the Earth's horizon of about 0.5 km. The SAGE-II instrument has a mass of 29.5 kg, average power consumption of 18 W, and a data rate of 6.3 kbit/s. Radiation passing through the aperture is transferred to the spectrometer section of the instrument containing the holographic grating and seven separate detector systems. The holographic grating disperses the incoming radiation into the various spectral regions centered at the 1020, 940, 600, 525, 453, 448, and 385 nm wavelengths. Four channels (385, 454, 600, and 1020 nm) allow separation of atmospheric extinction along the line-of-sight due to Rayleigh scattering, aerosols, ozone, and nitrogen dioxide. The 940 nm channel allows concentration profiles of water vapor to be mapped. The 448 nm channel provides an additional channel for nitrogen dioxide detection, and the 525 nm channel is used for aerosol detection. The spectrometer system is inside the azimuth gimbal to allow the instrument to be pointed at the sun without image rotation. The operation of the instrument during each sunrise and sunset measurement is totally automatic. Prior to each sunrise or sunset encounter, the instrument is rotated in azimuth to its predicted solar acquisition position. The radiometric channel data are sampled at a rate of 64 samples/s per channel, digitized to 12 bit resolution, and recorded for later transmission back to Earth. Sampling occurs twice per orbit for durations varying from 3 to 10 minutes each. Figure 12: Illustration of the SAGE-II instrument (image credit: NASA) The instrument provides self-calibrating near global measurements. Measurements taken from a tangent height of 150 km, where there is no attenuation, provide a self-calibration feature for every event. 
- The measurements are inverted using the “onion-peeling” approach to yield 1 km vertical resolution profiles of aerosol extinction (vertical resolution of 1 km below 25 km and a resolution of 5 km above 25 km). The focus of the measurements is on the lower and middle stratosphere. SAGE-II stratospheric ozone data have become a standard long-record reference field for comparison with other stratospheric ozone measurements. SAGE-II data has been used to measure the decline in the amount of stratospheric ozone over the Antarctic since the ozone hole was first noted in 1985. The high-resolution SAGE-II measurements allow scientists to study the vertical structure of ozone in the Antarctic and, more importantly, allow scientists to study the correlations between various trace gases. Status: In Oct. 2004, SAGE-II observations continued after 20 years in orbit providing the scientific community with a long-term, global depiction of the distribution of aerosol, ozone, water vapor, and nitrogen dioxide (NO2). 1) J.A.Dezio, C.A. Jensen, “Earth Radiation Budget Satellite,” in Monitoring Earth's Ocean, Land, and Atmosphere, Vol. 97 by AIAA, 1985, pp. 261-292 3) “Space Shuttle Mission STS-41G,” NASA Press Kit, October 1984, URL: http://www.jsc.nasa.gov/history/shuttle_pk/pk/Flight_013_STS-41G_Press_Kit.pdf 4) “Earth Radiation Budget Satellite- On-Orbit Reliability Investigation Final Report,” Hermandez Engineering Inc., Oct. 1996 5) Takmeng Wong, Bruce A. Wielicki, Robert B. Lee, III, “Decadal Variability of Earth Radiation Budget Deduced from Satellite Altitude Corrected ERBE/ERBS Nonscanning Data,” URL: http://ams.confex.com/ams/pdfpapers/79213.pdf 6) “Ball Aerospace Celebrates 21 Years of Ozone Research,” Oct. 20, 2005, URL: http://www.lexdon.com/article/Ball_Aerospace_Celebrates_21_Years/14704.html 7) Information provided by Kathryn A. Bush of NASA/LaRC, Hampton, VA 9) “The Battery Bunny Has Little on This Space SAGE,” October 6, 2004, URL: http://www.nasa.gov/vision/earth/lookingatearth/sage2.html 10) Information provided by Jack Paden and Bob Lee of NASA/LaRC 11) Information provided by William P. Chu of NASA/LaRC 12) Bruce R. Barkstrom, “The Earth Radiation Budget Experiment (ERBE),” Bulletin of the American Meteorological Society, Vol. 65, Issue 11, November 1984, URL: http://journals.ametsoc.org/doi/pdf/10.1175/1520-0477%281984%29065%3C1170%3ATERBE%3E2.0.CO%3B2 14) B. R. Barkstrom, J. B. Hall, Jr., “Earth Radiation Budget Experiment (ERBE): An Overview”, Journal of Energy, Vol. 6, 1982, pp. 141-146 15) B. R. Barkstrom, G. L. Smith, “The Earth Radiation Budget Experiment: science and implementation,” Review of Geophysics, Vol. 24, 1986, pp. 379-390. 16) L. P. Kopia, “Earth Radiation Budget Experiment scanner instrument,” Review of Geophysics, Vol. 24, 1986, pp. 400-406 17) R. B. Lee III, R. S. Wilson, “Accuracy and Precision of Earth Radiation Budget Experiment ERBE - Solar Monitor on the Earth Radiation Budget Satellite ERBE,” http://www.acrim.com/NASA_NIST%20TSI%20Workshop/Talks/RBLEEIII%20-%20TSI%20WORKSHOP%20NIST%20JULY%2018%202005%20REVISED.pdf 18) M. P. A. Haeffelin, J. R. Mahan, K. J. Priestley, ”Predicted dynamic electrothermal performance of thermistor bolometer radiometers for Earth radiation budget applications,” Applied Optics, Vol. 36, 1997, pp. 7129-7142 19) “Earth Radiation Budget,” URL: http://marine.rutgers.edu/mrs/education/class/yuri/erb.html#erbe 21) Robert B. Lee III, Robert S. 
Wilson, “1984-2003, Earth Radiation Budget Satellite (ERBS)/Earth Radiation Budget Experiment (ERBE) Total Solar Irradiance (TSI) measurements,” SORCE Meeting, Dec. 4-6, 2003, Sonoma, CA, URL: http://lasp.colorado.edu/sorce/news/2003ScienceMeeting/... 22) Takmeng Wong, Bruce A. Wielicki, Robert B. Lee III, G. Louis Smith, Kathryn A. Bush, Joshua K. Willis, “Reexamination of the Observed Decadal Variability of the Earth Radiation Budget Using Altitude-Corrected ERBE/ERBS Nonscanner WFOV Data,” Journal of Climate, Vol. 19, Aug. 15, 2006, pp. 4028-4040, URL: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.173.3459&rep=rep1&type=pdf 24) N. H. Zaun, L. E. Mauldin, M. P. McCormick III, “Design and performance of the stratospheric aerosol and gas experiment II (SAGE II) instrument,” Proceedings of SPIE, `Infrared Technology IX,' Edited by Richard A. Mollicone and Irving J. Spiro, Vol. 430, 1983, p. 99 25) “SAGE II: Understanding the Earth's Stratosphere,” NASA Facts, 1996, URL: http://www.nasa.gov/centers/langley/pdf/70813main_FS-1996-08-14-LaRC.pdf The information compiled and edited in this article was provided by Herbert J. Kramer from his documentation of: ”Observation of the Earth and Its Environment: Survey of Missions and Sensors” (Springer Verlag) as well as many other sources after the publication of the 4th edition in 2002. - Comments and corrections to this article are always welcome for further updates.
CHAPTER 1 - INTRODUCTION
1.1 Background of study
In today's society, men dominate sports coverage. Male athletes receive a far greater share of media attention, partly because some sports are still regarded as men's games, so women's football receives little coverage. Media reporting has also become gendered in character: when men's sport is covered, attention is paid to the players' skills and performance, whereas female athletes tend to receive attention for their physical attractiveness or for activities unrelated to sport. Women athletes are therefore affected by gendered media reporting that concentrates on their appearance and looks, and it is essential for the media to move away from such reporting and from the trivialising language that undermines the morale of women athletes. Several magazines feature only male sportspeople, and when sportswomen are chosen they are feminised or sexualised: a male athlete pictured on a magazine cover typically appears in uniform, looking strong and powerful, while a female athlete is often pictured out of uniform and in revealing clothing (Black and Fielding-Lloyd, 2017). The gendered difference thus stems from media reporting that conveys a distorted image of women athletes.
Women have long been marginalised in the sports industry and have faced cat-calling and sexualised comments. They are trivialised by media reporters and under-represented in coverage. Football in particular is treated as a male-specific game, and women who play it are portrayed as weak, so the media use trivialising language about women's football teams (Lenskyj, 2015). According to one report, men's sports received 96.3% of airtime while women's sports received only 1.6%. The difference is enormous, and women deserve equal treatment in sport; they are neither weaker nor a minority interest. The main explanation for the low media coverage of women's sport, despite high levels of female participation, is the persistent misconception that sport is a male domain, so female athletes attract media attention mainly when they are sexually targeted, and trivialising comments from reporters harass women at the international level. The 1999 Women's World Cup brought a drastic change in the perception of women's sport and turned the women's football team into a national symbol, but before that tournament women's participation in sport was not taken seriously, and gendered media reporting was widespread and damaging to the women's team. Gender differences were made visible to the women's soccer team through reporters' comments, which focused on their dress rather than their skills and performance (Raab and Khalidi, 2016).
The present dissertation examines how media reporting is gender biased: the women's football team is harassed on several grounds, and the media prefer not to cover its games.
1.2 Problem statement
The media report on and present the women's football team in a distorted way, implying that it cannot be compared with the men's team; because women players are not regarded as sufficiently competent, their matches receive far less coverage. It is therefore necessary to examine the perceptions of sports audiences and to argue that the media should cover all sport equally, without bias in deciding which games to broadcast (Williams, 2015). The FIFA Women's World Cup 2015 was a tournament in which the players were trivialised and sexualised by reporters commenting on their dress, who suggested that people watch women's football because of what the players wear rather than how they play. Such gendered reporting damages the athletes' morale, and the media need to change this pattern and broadcast men's and women's games equally so that players are judged on skill and performance. Television anchors have used trivialising and sexualising language about the women's team, treating the players as little more than objects and commenting on their appearance and dress. Because this damages the image of women in sport, effective strategies are needed to overcome the problem of gendered media reporting.
1.3 Aim and objectives
Aim: To analyse the issue of gendered media reporting on the FIFA Women's World Cup 2015. The objectives of the dissertation are as follows:
- To explore the gender bias shown by the media against women's football.
- To identify the issues associated with trivialisation, sexualisation, and under-representation of women's football in the media.
- To assess the differences between media reporting on male and female football players.
- To study the reasons behind the gendered media reporting on the FIFA Women's World Cup 2015.
- To recommend strategies for providing opportunities to women football players and overcoming trivialisation and sexualisation.
1.4 Research questions
- What gender bias is shown by the media against women's football?
- What issues are associated with trivialisation, sexualisation, and under-representation of women's football in the media?
- What are the differences between media reporting on male and female football players?
- What are the reasons behind the gendered media reporting on the FIFA Women's World Cup 2015?
- What strategies can be recommended for providing opportunities to women football players and overcoming trivialisation and sexualisation?
1.5 Significance of study
The present study is significant because it evaluates how media coverage can be misused. Gendered reporting on the Women's World Cup affects the team members, and the media convey misleading views about them, for example that spectators watch the game only to see what the players are wearing and that audiences are far smaller than for men's football (Ravel and Gareau, 2016).
The study also deepens understanding of the issues the Women's World Cup team faces regarding trivialisation, sexualisation, and under-representation. The researcher must appreciate the importance of the chosen topic, which exposes the negative side of the media: discrimination between men's and women's sport and the use of unacceptable trivialising language. It is therefore essential to understand the significance of the topic and to assess the issue of gendered media reporting on the FIFA Women's World Cup 2015.
1.6 Dissertation structure
The dissertation is structured as follows.
Chapter 1 Introduction: The first chapter presents the background and significance of the study, explains the aims and objectives, and sets out the problem statement so that the relevance of the topic is clear.
Chapter 2 Literature Review: This chapter reviews a range of secondary sources and provides the in-depth research needed to understand the topic.
Chapter 3 Research Methodology: This chapter discusses the research tools and techniques adopted to achieve the desired results. A qualitative study is carried out to examine the issue of gendered media reporting on the FIFA Women's World Cup 2015.
Chapter 4 Data Analysis: Here the researcher uses thematic analysis to evaluate the collected data and to draw out responses concerning the issues women face in sport.
Chapter 5 Conclusion and Recommendations: The final chapter concludes the study and recommends strategies for providing opportunities to women football players and overcoming trivialisation and sexualisation.
CHAPTER 2 - LITERATURE REVIEW
This chapter discusses the issues faced by women's football teams as a result of media trivialisation and sexualisation. Various secondary sources, including media reports, news articles, and sports magazines, are reviewed, and an in-depth analysis is carried out to build knowledge of how the media treat female football players (Pfister, 2015). The review also shows that trivialising language is damaging because it undermines the skills and performance of players, whether male or female.
2.2 Gender bias by the media against women's football
Adubato (2016) states that women in sport are always portrayed as less athletic and less serious than men, and that men's sport receives more coverage; there is still little cultural investment in the idea of sport as a space where talent and hard work are what matter. According to Coche (2016), men's sport enjoys higher production values, higher-quality coverage, and higher-quality commentary, whereas women's sport is covered with fewer camera angles, fewer cuts to shot, and fewer instant replays, so the coverage lacks excitement. According to Eagleman (2015), numerous complaints about gender bias in the media can be found on social media platforms.
One complaint on Twitter concerned an article published in the Daily Mirror, "Watching lioness is such a roar deal", under which a commenter wrote that the World Cup showed women's football was not good enough and that women's place was not on a foreign field playing second-rate football; the comment was criticised at length across social media platforms. Hjelseth and Hovden (2014) note that reporters' comments about the Women's World Cup are often inappropriate; remarks such as "female players should wear tighter shorts" do not spread a positive message. The amount of media coverage of the men's World Cup was far higher than that of the women's tournament, so it is clear that discrimination shapes the treatment women's football receives from the media. In the view of McKenna (2016), a comparison of the salaries of England's male and female footballers reveals a lack of parity in pay as well. Male footballers are bigger celebrities than female footballers, and the main reason is the amount of media coverage. Ndimande-Hlongwa (2016) shares the view that male football celebrities endorse the leading brands, which makes them rich and famous in a way female footballers are not. According to Nordstrom, Warner and Barnes (2016), sports media also perpetuate gender discrimination by barely covering women's sport at all; several of their studies report that ESPN's SportsCenter devoted only 2 percent of its airtime to covering the women's World Cup. In the view of Pollard and Gómez (2014), such facts and figures explain why female soccer stars do not enjoy the same level of celebrity status: the media's biased attitude negatively influences the level of corporate sponsorship for women's football. Most national women's football teams face financial difficulties, and even a body as well resourced as FIFA has failed to use its massive finances to transform the game radically, an approach that indirectly signals a mentality that women's football is less valuable than men's.
2.3 Issues associated with trivialisation, sexualisation, and under-representation of women's football in the media
In the view of Fink (2015), media reporting on sport should be grounded in an understanding of the game before any comments are made about gender or public interest. In the FIFA Women's World Cup 2015, however, the media displayed gender bias and commented on the looks and dress of the women's football team (Fink, 2015). The players were presented badly, with reports stressing that far fewer people come to watch their games than men's matches. Ayala, Berisha and Imholte (2016) argue that trivialisation, sexualisation, and under-representation are significant issues: the media portrayed the women's football team as worthless, which damaged the players' morale; female players are also paid far less than male players, and the trivialising language used about them makes it harder for them to focus on their game.
It has been observed that gender-biased media reporting affects the morale of team members. Schallhorn, Knoll and Schramm (2016) state that the trivialisation, sexualisation and under-representation of women's football teams undermine the sporting spirit of the players. The media plays a crucial role in building the image of any sportsperson, yet in this instance it criticises the game played by women's teams (Schallhorn, Knoll and Schramm, 2016), using trivialising language that abuses and harasses the players and constantly compares them with male players. Male footballers are treated as celebrities and receive high pay, while women receive low pay and little recognition as international footballers. The media has even suggested that football is a male-centred game that women should not play, which places pressure on female players to become stronger athletes and prove the media wrong. However, Schallhorn and Hempel (2015) argue that women's teams are persistently under-represented because the media assumes they cannot perform as well as men. The media is therefore biased, and it should be recognised that trivialising a female player ought to be treated as a serious offence, with sanctions for such discrimination.

In the opinion of Burch, Billings and Zimmerman (2017), sexualisation, trivialisation and under-representation are pressing issues faced by women's football teams. The media engages in gendered reporting and embarrasses female players on the basis of gender and sexuality, repeatedly reporting that far fewer viewers attend the Women's World Cup than men's matches (Burch, Billings and Zimmerman, 2017), commenting that women are not made to play soccer, and remarking on players' dress and looks during games. Commentators sometimes make inappropriate comments about the appearance and play of female athletes, and such trivialising commentary weakens the women's game. The media is one of the main agents of socialisation and thus helps generate gender differences between male and female footballers: women on the pitch are still viewed for their looks and appearance, whereas men are viewed for their sport, experience and strong personalities. Gender-biased commentary that trivialises women's looks and dress amounts to harassment on gender grounds (Black and Fielding-Lloyd, 2017). The media therefore needs to understand its role: rather than passing judgement on sport played by either men or women, it should promote the fact that women are offering tough competition to men in every sport.

2.4 Differences between media reporting on male and female football players

In the view of Raab and Khalidi (2016), there are great differences in how the media reports on male and female football players.
The media presents male footballers as the best and grants them greater privilege as celebrities compared with female footballers. They are also paid far more, and discrimination occurs on these grounds. Media reporting differs along gender lines: outlets reason that women's matches are not watched by large audiences and therefore do not need extensive coverage (Raab and Khalidi, 2016), whereas men's matches have huge followings and so attract wide coverage. Some coverage even implies that the audiences who do attend women's matches come for reasons that harass the players, and the women's game is presented as being played less capably because football is framed as a male-dominated sport.

Figures from the Gender Balance in Global Sport Report (2016) indicate a clear disparity between the perceived value of the men's and women's games: the men's tournament attracted more than four times as many viewers, while the prize money offered was around 38 times higher than that for the women's teams. (A simple illustration of these ratios is sketched in the code example at the end of this section.) In the view of Williams (2015), media reporting on men's and women's football is also very uneven because outlets focus on covering the men's game, where viewer numbers are higher; women's matches are not covered at anything like the same rate. This gender bias by media organisations means that female athletes are not given equal weight compared with male athletes (Williams, 2015). Ravel and Gareau (2016) note that coverage of women's football is limited because it is assumed that audiences will not watch it; in 2014 only 3.2 per cent of television network sports coverage was devoted to women's sport. Coverage has continued to decline year on year, and women are too often treated as sexual objects while their skills and performance go unexamined (Ravel and Gareau, 2016). The decline is compounded by insulting and mockingly sexualised stories about women athletes, and this pattern of reporting lowers the morale of women players (There is less women's sports coverage on TV news today than there was in 1989, 2015).

Media reporting on men's and women's football differs because of perceived differences in playing ability and audience appeal: more people currently watch the men's game, so broadcasters cover it more heavily, while the women's game, with fewer viewers, is not covered enthusiastically. At the same time, sexualised and trivialised commentary about women's teams lowers morale and discourages women's participation in sport (Pfister, 2015). It is essential for the media to support the game played by athletes of both sexes so that more viewers are attracted and the profile of the sport is raised.
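The ratios quoted above lend themselves to a quick illustration. The short Python sketch below simply restates the figures cited in this section (roughly four times the viewers, roughly 38 times the prize money, and a 3.2 per cent share of television sports coverage); the variable names and framing are illustrative assumptions, not values taken from any official tournament report.

```python
# Illustrative restatement of the disparity figures cited above (assumed,
# rounded values from this chapter, not official tournament statistics).

mens_viewers_multiple = 4.0     # men's audience as a multiple of the women's audience
mens_prize_multiple = 38.0      # men's prize pool as a multiple of the women's prize pool
womens_airtime_share = 0.032    # share of TV sports coverage devoted to women's sport (2014 figure cited above)

print(f"Viewership ratio (men : women)  ≈ {mens_viewers_multiple:.0f} : 1")
print(f"Prize money ratio (men : women) ≈ {mens_prize_multiple:.0f} : 1")
print(f"Airtime for women's sport       ≈ {womens_airtime_share:.1%} "
      f"(everything else ≈ {1 - womens_airtime_share:.1%})")

# The prize gap (~38x) is roughly an order of magnitude larger than the
# audience gap (~4x), which is the disparity the chapter is pointing to.
print(f"Prize gap relative to audience gap ≈ {mens_prize_multiple / mens_viewers_multiple:.1f}x")
```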
2.5 Reasons behind the issue of gendered media reporting on the FIFA Women's World Cup 2015

In the view of Fink (2015), society has been male-dominated from its early years, and men have therefore been given more importance than women. The same pattern appears in sport, where the media discriminates between men's and women's football teams. Football's governing bodies need to address these gendered differences so that they do not affect the game played by athletes. Historically, women have been given far less preference in sport than men (Fink, 2015). This needs to change, and it is important to draw more viewers to women's sport so that television coverage can grow. Media reporting on the FIFA Women's World Cup should therefore move beyond gender bias: women are not lesser players than men and can play the game to an equivalent standard, answering those reporters who claim that people watch women play only because of their dress or for sexualised reasons, while male athletes are treated as specimens of skill and performance (Ayala, Berisha and Imholte, 2016). Researchers have also found that male players are paid more than female players, creating a pay gap. Gendered reporting and trivialising language should not be allowed to affect the players' game.

Trivialisation, sexualisation and under-representation remain crucial issues, resting on the assumption that women are less skilled at football than men (Schallhorn, Knoll and Schramm, 2016). It is essential for the media to report in a manner that does not damage the sporting spirit of the women contesting the FIFA World Cup. The concept of the male gaze is also relevant here, describing those who regard women merely as sexual objects; it needs to be understood that women are not sexual objects and can excel in any game. Players at the FIFA Women's World Cup 2015 recognised that the perceived difference between the men's and women's games is produced partly by media reporting that presents women as less capable of playing football. Media reporting therefore needs to improve, because gendered coverage harasses female players, frames them as lower in skill and performance than men, and reinforces pay discrimination: male footballers are celebrities on high salaries, while women players receive less than half of what men earn (Mataruna, Range and Melo, 2015). Gendered reporting is thus one of the main factors damaging the image of women footballers who are trivialised, while sexualisation and under-representation further harm their standing in the sports market.

References

- Adubato, B., 2016. The promise of violence: televised, professional football games and domestic violence. Journal of Sport and Social Issues, 40(1), pp.22-37.
- Ayala, A., Berisha, V. and Imholte, S., 2016. Public Health Surveillance Strategies for Mass Gatherings: Super Bowl XLIX and Related Events, Maricopa County, Arizona, 2015. Health Security, 14(3), pp.173-184.
- Black, J. and Fielding-Lloyd, B., 2017. Re-establishing the 'outsiders': English press coverage of the 2015 FIFA Women's World Cup. International Review for the Sociology of Sport, p.1012690217706192.
- Burch, L. M., Billings, A. C. and Zimmerman, M. H., 2017. Comparing American soccer dialogues: social media commentary surrounding the 2014 US men's and 2015 US women's World Cup teams. Sport in Society, pp.1-16.
- Coche, R., 2016. Promoting women's soccer through social media: how the US federation used Twitter for the 2011 World Cup. Soccer & Society, 17(1), pp.90-108.
If, like us, you were touched by the 2012 film Big Miracle, you'll already know something about this magnificent animal. You'll know that, despite our differences, millions of people care about whales, whatever their colour and wherever on the globe they are found. Politicians all around the world know of this common bond and that whales stand for peace and understanding. Yet whaling continues to threaten these gentle giants, and humans and whales don't seem able to live side by side. We seem to be happier killing our neighbor just because he occupies land with mineral riches, or because we don't agree with his beliefs. Enough on that, I think. But is it right to let our hostility to our fellow man, and our general greed, endanger other species? Not if the film Big Miracle means anything.

The gray whale (Eschrichtius robustus) is a baleen whale that migrates between feeding and breeding grounds yearly. It reaches a length of about 16 m (52 ft) and a weight of 36 tonnes (35 long tons; 40 short tons), and lives 50–70 years. The common name of the whale comes from the gray patches and white mottling on its dark skin. Gray whales were once called devil fish because of their fighting behavior when hunted. The gray whale is the sole living species in the genus Eschrichtius, which in turn is the sole living genus in the family Eschrichtiidae. This mammal descended from filter-feeding whales that developed at the beginning of the Oligocene, over 30 million years ago.

The gray whale is distributed in an eastern North Pacific (North American) population and a critically endangered western North Pacific (Asian) population. North Atlantic populations were extirpated (perhaps by whaling) on the European coast before 500 AD and on the American coast around the late 17th to early 18th centuries. However, on May 8, 2010, a sighting of a gray whale was confirmed off the coast of Israel in the Mediterranean Sea, leading some scientists to think they might be repopulating old breeding grounds that have not been used for centuries.

The gray whale is a dark slate-gray in color and covered by characteristic gray-white patterns, scars left by parasites which drop off in its cold feeding grounds. Individual whales are typically identified using photographs of their dorsal surface, by matching the scars and patches associated with parasites that have fallen off the whale or are still attached. Gray whales measure from 16 feet (4.9 m) in length for newborns to 43–50 feet (13–15 m) for adults (females tend to be slightly larger than adult males). Newborns are a darker gray to black in color. A mature gray whale can reach 40 tonnes (39 long tons; 44 short tons), with a typical range of 15 to 33 tonnes (15 to 32 long tons; 17 to 36 short tons). They have two blowholes on top of the head, which can create a distinctive V-shaped blow at the surface in calm wind conditions.

Notable features that distinguish the gray whale from other mysticetes include its baleen, variously described as cream, off-white, or blond in color, which is unusually short. Small depressions on the upper jaw each contain a lone stiff hair, but are only visible on close inspection. The head's ventral surface lacks the numerous prominent furrows of the related rorquals, instead bearing two to five shallow furrows on the throat's underside. The gray whale also lacks a dorsal fin, instead bearing 6 to 12 dorsal crenulations ("knuckles"), which are raised bumps on the midline of its rear quarter, leading to the flukes.
The tail itself is 10–12 feet (3.0–3.7 m) across and deeply notched at the center, while its edges taper to a point.

Two Pacific Ocean populations are known to exist: one of not more than 130 individuals (according to the most recent population assessment in 2008) whose migratory route is presumed to be between the Sea of Okhotsk and southern Korea, and a larger one with a population between 20,000 and 22,000 individuals in the eastern Pacific travelling between the waters off Alaska and Baja California Sur. The western population is listed as critically endangered by the IUCN. No new reproductive females were recorded in 2010, resulting in a minimum of 26 reproductive females being observed since 1995. Even a very small number of additional annual female deaths will cause the subpopulation to decline.

In 2007, S. Elizabeth Alter used a genetic approach to estimate prewhaling abundance based on samples from 42 California gray whales, and reported DNA variability at 10 genetic loci consistent with a population size of 76,000–118,000 individuals, three to five times larger than the average census size as measured through 2007. (A minimal sketch of the kind of calculation behind such genetic estimates is given at the end of this passage.) NOAA conducted a new population study in 2010–2011; those data will be available by 2012. The ocean ecosystem has likely changed since the prewhaling era, making a return to prewhaling numbers infeasible; many marine ecologists argue that existing gray whale numbers in the eastern Pacific Ocean are approximately at the population's carrying capacity.

The gray whale became extinct in the North Atlantic in the 18th century. Radiocarbon dating of subfossil or fossil European (Belgium, the Netherlands, Sweden, the United Kingdom) coastal remains confirms this, with whaling the possible cause. Remains dating from the Roman epoch were found in the Mediterranean during excavation of the antique harbor of Lattara near Montpellier in 1997, raising the question of whether Atlantic gray whales migrated up and down the coast of Europe to calve in the Mediterranean. Similarly, radiocarbon dating of American east-coast subfossil remains confirms gray whales existed at least through the 17th century. This population ranged at least from Southampton, New York, to Jupiter Island, Florida, the latest from 1675. In his 1835 history of Nantucket Island, Obed Macy wrote that in the early pre-1672 colony a whale of the kind called "scragg" entered the harbor and was pursued and killed by the settlers. A. B. Van Deinse points out that the "scrag whale", described by P. Dudley in 1725 as one of the species hunted by the early New England whalers, was almost certainly the gray whale.

In mid-1980, there were three gray whale sightings in the eastern Beaufort Sea, placing them 585 kilometers (364 mi) further east than their known range at the time. In May 2010, a gray whale was sighted off the Mediterranean shore of Israel. It has been speculated that this whale crossed from the Pacific to the Atlantic via the Northwest Passage, since alternative routes through the Panama Canal or Cape Horn are not contiguous to the whale's established territory. There has been gradual melting and recession of Arctic sea ice, with extreme loss in 2007 rendering the Northwest Passage "fully navigable". The same whale was sighted again on June 8, 2010, off the coast of Barcelona, Spain. In January 2011, a gray whale that had been tagged in the western population was tracked as far east as the eastern population's range off the coast of North America.

Humans and the killer whale (orca) are the adult gray whale's only predators.
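The genetic estimate mentioned above rests on the standard population-genetic relationship between nucleotide diversity and long-term effective population size. The sketch below shows that logic only in outline, using the neutral-theory relationship theta = 4 * Ne * mu for a diploid nuclear locus; the diversity value, mutation rate and census-to-effective ratio are illustrative assumptions, not the figures used in the study cited above.

```python
def effective_population_size(theta: float, mu: float) -> float:
    """Long-term effective population size Ne implied by nucleotide diversity
    theta and per-site, per-generation mutation rate mu (theta = 4 * Ne * mu)."""
    return theta / (4.0 * mu)

# Hypothetical inputs (placeholders, not values from the cited study).
theta_per_site = 0.0012   # average nucleotide diversity per site
mu_per_site = 1.0e-8      # mutation rate per site per generation

ne = effective_population_size(theta_per_site, mu_per_site)

# Census size is usually taken to be some multiple of Ne; the 2x-4x range
# below is an assumption purely for illustration.
low, high = 2 * ne, 4 * ne
print(f"Implied effective population size Ne ≈ {ne:,.0f}")
print(f"Implied census size ≈ {low:,.0f} – {high:,.0f} whales")
```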
Aboriginal hunters, including those on Vancouver Island and the Makah in Washington, have hunted gray whales. The Japanese began to catch gray whales in the 1570s. At Kawajiri, Nagato, 169 gray whales were caught between 1698 and 1889. At Tsuro, Shikoku, 201 were taken between 1849 and 1896. Several hundred more were probably caught by American and European whalemen in the Sea of Okhotsk from the 1840s to the early 20th century. Whalemen caught 44 with nets in Japan during the 1890s. The real damage was done between 1911 and 1933, when Japanese whalemen killed 1,449. By 1934, the western gray whale was near extinction. From 1891 to 1966, an estimated 1,800–2,000 gray whales were caught, with peak catches of 100–200 annually.

Commercial whaling of the species by Europeans in the North Pacific began in the winter of 1845–46, when two United States ships, the Hibernia and the United States, under Captains Smith and Stevens, caught 32 in Magdalena Bay. More ships followed in the two following winters, after which gray whaling in the bay was nearly abandoned because "of the inferior quality and low price of the dark-colored gray whale oil, the low quality and quantity of whalebone from the gray, and the dangers of lagoon whaling".

Gray whaling in Magdalena Bay was revived in the winter of 1855–56 by several vessels, mainly from San Francisco, including the ship Leonore, under Captain Charles Melville Scammon. This was the first of 11 winters from 1855 through 1865 known as the "bonanza period", during which gray whaling along the coast of Baja California reached its peak. Not only were the whales taken in Magdalena Bay, but also by ships anchored along the coast from San Diego south to Cabo San Lucas and from whaling stations from Crescent City in northern California south to San Ignacio Lagoon. During the same period, vessels targeting right and bowhead whales in the Gulf of Alaska, Sea of Okhotsk, and the Western Arctic would take the odd gray whale if neither of the more desirable two species was in sight.

In December 1857, Charles Scammon, in the brig Boston, along with his schooner-tender Marin, entered Laguna Ojo de Liebre (Jack-Rabbit Spring Lagoon), later known as Scammon's Lagoon (by 1860), and found one of the gray's last refuges. He caught 20 whales. He returned the following winter (1858–59) with the bark Ocean Bird and schooner tenders A.M. Simpson and Kate. In three months, he caught 47 cows, yielding 1,700 barrels (270 m3) of oil. In the winter of 1859–60, Scammon, again in the bark Ocean Bird, along with several other vessels, performed a similar feat of daring by entering San Ignacio Lagoon to the south, where he discovered the last breeding lagoon. Within only a couple of seasons, the lagoon was nearly devoid of whales.

Between 1846 and 1874, an estimated 8,000 gray whales were killed by American and European whalemen, with over half having been killed in the Magdalena Bay complex (Estero Santo Domingo, Magdalena Bay itself, and Almejas Bay) and by shore whalemen in California and Baja California. This, for the most part, does not take into account the large number of calves injured or left to starve after their mothers had been killed in the breeding lagoons. Since whalemen primarily targeted these new mothers, several thousand deaths should probably be added to the total. Shore whaling in California and Baja California continued after this period, until the early 20th century. A second, shorter, and less intensive hunt for gray whales occurred in the eastern North Pacific.
Only a few were caught from two whaling stations on the coast of California from 1919 to 1926, and a single station in Washington (1911–21) accounted for further captures. For the entire west coast of North America, for the years 1919 to 1929, some 234 gray whales were caught. Only a dozen or so were taken by British Columbian stations, nearly all of them in 1953 at Coal Harbour. A whaling station in Richmond, California, caught 311 gray whales for "scientific purposes" between 1964 and 1969. From 1961 to 1972, the Soviet Union caught 138 gray whales (they originally reported not having taken any). The only other significant catch was made in two seasons by the steam-schooner California off Malibu, California. In the winters of 1934–35 and 1935–36, the California anchored off Point Dume in Paradise Cove, processing gray whales. In 1936, gray whales became protected in the United States.

As of 2001, the Californian gray whale population had grown to about 26,000. As of 2011, the western Pacific population (in the seas near Korea, Japan, and Kamchatka) remained very small.

OCEAN POLLUTION

Baleen whales suffer from ingesting plastic in the man-made soups that fester in the Atlantic, Indian and Pacific oceans. This is so not just because floating trash resembles their food, but because they gulp large amounts of water when feeding. In August 2000, a Bryde's whale was stranded near Cairns, Australia. The stomach of this whale was found to be tightly packed with six square meters of plastic rubbish, including supermarket bags, food packages, and fragments of trash bags. In April 2010, a gray whale that died after stranding itself on a west Seattle beach was found to have more than 20 plastic bags, small towels, surgical gloves, plastic pieces, duct tape, a pair of sweat pants, and a golf ball, not to mention other garbage, in its stomach. Plastic is not digestible, and once it finds its way into the gut it accumulates and clogs the intestines. In some whales the plastic does not kill the animal directly, but causes malnutrition and disease, leading to unnecessary suffering until death.

Whales are not the only victims of our trash. It is estimated that over one million birds and 100,000 marine mammals die each year from plastic debris. In September 2009, photographs of albatross chicks on Midway Atoll were brought to the public eye. These nesting chicks were fed bellies full of plastic by their parents, who soar over vastly polluted oceans collecting what looks to them like food. This diet of human trash kills tens of thousands of albatross chicks each year on Midway through starvation, toxicity, and choking. We can all do our part by limiting our use of plastic products such as shopping bags, party balloons, straws, and plastic bottles. Be a frugal shopper and recycle!

The North Atlantic population may have been hunted to extinction in the 18th century. Circumstantial evidence indicates whaling could have contributed to this population's decline, as the increase in whaling activity in the 17th and 18th centuries coincided with the population's disappearance.
Gray whales (Icelandic sandlægja) were described in Iceland in the early 17th century.

Gray whales have been granted protection from commercial hunting by the International Whaling Commission (IWC) since 1949, and are no longer hunted on a large scale. Limited hunting of gray whales has continued since that time, however, primarily in the Chukotka region of northeastern Russia, where large numbers of gray whales spend the summer months. This hunt has been allowed under an "aboriginal/subsistence whaling" exception to the commercial-hunting ban. Antiwhaling groups have protested the hunt, saying the meat from the whales is not for traditional native consumption, but is used instead to feed animals in government-run fur farms; they cite annual catch numbers that rose dramatically during the 1940s, at the time when state-run fur farms were being established in the region. Although the Soviet government denied these charges as recently as 1987, in recent years the Russian government has acknowledged the practice. The Russian IWC delegation has said that the hunt is justified under the aboriginal/subsistence exemption, since the fur farms provide a necessary economic base for the region's native population. The current annual quota for the gray whale catch in the region is 140.

Pursuant to an agreement between the United States and Russia, the Makah tribe of Washington claimed four whales from the IWC quota established at the 1997 meeting. With the exception of a single gray whale killed in 1999, the Makah people have been prevented from hunting by a series of legal challenges, culminating in a United States federal appeals court decision in December 2002 that required the National Marine Fisheries Service to prepare an Environmental Impact Statement. On September 8, 2007, five members of the Makah tribe shot a gray whale using high-powered rifles in spite of the decision. The whale died within 12 hours, sinking while heading out to sea.

As of 2008, the IUCN regards the gray whale as being of "Least Concern" from a conservation perspective. However, the specific subpopulation in the northwest Pacific is regarded as being "Critically Endangered". The northwest Pacific population is also listed as endangered by the U.S. government's National Marine Fisheries Service under the U.S. Endangered Species Act. The IWC Bowhead, Right and Gray Whale subcommittee in 2011 reiterated that the conservation risk to western gray whales is large because of the small size of the population and potential anthropogenic impacts.

In their breeding grounds in Baja California, Mexican law protects whales in their lagoons while still permitting whale watching. Gray whales are protected under Canada's Species at Risk Act, which obligates Canadian governments to prepare management plans for the whales and to consider the interests of the whales when permitting activities that could affect them.

Gray whale migrations off the Pacific Coast were observed initially by Marineland of the Pacific in Palos Verdes, California. The Gray Whale Census, an official migration census that has recorded data on the Pacific gray whale since 1985, is the longest-running census of the species. Census keepers volunteer from December 1 through May, from sunup to sundown, seven days a week, keeping track of the number of gray whales migrating through the waters off Los Angeles.
Information from this census is published through the American Cetacean Society of Los Angeles (ACSLA).

According to the Government of Canada's Management Plan for gray whales, threats to the eastern North Pacific population include:
- Increased human activities in their breeding lagoons in Mexico
- The threat of toxic spills
- Impacts from fossil fuel exploration and extraction

Western gray whales face large-scale offshore oil and gas development programs near their summer feeding ground, as well as fatal net entrapments off Japan during migration, which pose significant threats to the future survival of the population. The substantial near-shore industrialization and shipping congestion throughout the migratory corridors of the western gray whale population represent potential threats by increasing the likelihood of exposure to ship strikes, chemical pollution, and general disturbance (Weller et al.). Offshore gas and oil development in the Okhotsk Sea within 20 km of the primary feeding ground off northeast Sakhalin Island is of particular concern. Activities related to oil and gas exploration, including geophysical seismic surveying, pipelaying and drilling operations, increased vessel traffic, and spills, all pose potential threats to western gray whales. Disturbance from underwater industrial noise may displace whales from critical feeding habitat. Physical habitat damage from drilling and dredging operations, combined with possible impacts of oil and chemical spills on benthic prey communities, also warrants concern.

The whale feeds mainly on benthic crustaceans, which it eats by turning on its side (usually the right, resulting in loss of eyesight in the right eye for many older animals) and scooping up sediments from the sea floor. It is classified as a baleen whale and has baleen, or whalebone, which acts like a sieve to capture small sea animals, including amphipods taken in along with sand, water and other material. Mostly, the animal feeds in the northern waters during the summer and feeds opportunistically during its migration, depending primarily on its extensive fat reserves. Calf gray whales drink 50 to 80 US gallons (190 to 300 l) of their mothers' 53%-fat milk per day.

The main feeding habitat of the western Pacific subpopulation is the shallow (5–15 m depth) shelf off northeastern Sakhalin Island, particularly off the southern portion of Piltun Lagoon, where the main prey species appear to be amphipods and isopods. In some years, the whales have also used an offshore feeding ground at 30–35 m depth southeast of Chayvo Bay, where benthic amphipods and cumaceans are the main prey species. Some gray whales have also been seen off western Kamchatka, but to date all whales photographed there are also known from the Piltun area.

Each October, as the northern ice pushes southward, small groups of eastern gray whales in the eastern Pacific start a two- to three-month, 8,000–11,000-kilometer (5,000–6,800 mi) trip south. Beginning in the Bering and Chukchi seas and ending in the warm-water lagoons of Mexico's Baja peninsula and the southern Gulf of California, they travel along the west coast of Canada, the United States and Mexico. The western gray whale summers in the Okhotsk Sea, mainly off the northeastern coast of Sakhalin Island (Russian Federation). There are also occasional sightings off the eastern coast of Kamchatka (Russian Federation) and in other coastal waters of the northern Okhotsk Sea.
Its migration routes and wintering grounds are poorly known, the only recent information coming from occasional records on both the eastern and western coasts of Japan and along the Chinese coast. The calving grounds are unknown but may be around Hainan Island, this being the southwestern end of the known range.

Traveling night and day, the gray whale averages approximately 120 km (75 mi) per day, at an average speed of 8 kilometers per hour (5 mph). This round trip of 16,000–22,000 km (9,900–14,000 mi) is believed to be the longest annual migration of any mammal. (A quick arithmetic check of these figures is sketched at the end of this article.) By mid-December to early January, the majority are usually found between Monterey and San Diego, often visible from shore. The whale-watching industry provides ecotourists and marine mammal enthusiasts the opportunity to see groups of gray whales as they migrate.

By late December to early January, eastern grays begin to arrive in the calving lagoons of Baja. The three most popular lagoons are Laguna Ojo de Liebre (formerly known in English as Scammon's Lagoon, after whaleman Charles Melville Scammon, who discovered the lagoons in the 1850s and hunted the grays), San Ignacio, and Magdalena. These first whales to arrive are usually pregnant mothers looking for the protection of the lagoons to bear their calves, along with single females seeking mates. By mid-February to mid-March, the bulk of the population has arrived in the lagoons, filling them with nursing, calving and mating gray whales.

Throughout February and March, the first to leave the lagoons are males and females without new calves. Pregnant females and nursing mothers with their newborns are the last to depart, leaving only when their calves are ready for the journey, which is usually from late March to mid-April. Often, a few mothers linger with their young calves well into May. By late March or early April, the returning animals can be seen from Everett, Washington, through Puget Sound to Canada. A population of about 200 gray whales stays along the eastern Pacific coast from Canada to California throughout the summer, not making the farther trip to Alaskan waters. This summer resident group is known as the Pacific Coast feeding group.

Because of their size and need to migrate, gray whales have rarely been held in captivity, and then only for brief periods of time. In 1972, a three-month-old gray whale named Gigi (II) was captured for brief study by Dr. David W. Kenney, and then released near San Diego. In January 1997, the newborn baby whale J.J. was found helpless near the coast of Los Angeles, California, 4.2 m (14 ft) long and 800 kilograms (1,800 lb) in weight. Nursed back to health at SeaWorld San Diego, she was released into the Pacific Ocean on March 31, 1998, 9 m (30 ft) long and 8,500 kilograms (19,000 lb) in mass. She shed her radio transmitter packs three days later.

Related organizations and resources:
- American Cetacean Society
- Center for Whale Research
- Cetacean Society International
- Coast Watch Society
- International Fund for Animal Welfare
- International Whaling Commission
- Interspecies Communication Inc.
- National Marine Mammal Laboratory
- Natural Resources Defense Council
- Ocean Defense International
- Save the Whales
- Sea Shepherd Conservation Society
- Solar Metro Online
- Society for Marine Mammalogy
- U.S. Citizens Against Whaling
- Whale Center of New England
- Cline Photo Workshops - Baja whale tours
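As promised above, here is a short consistency check on the migration figures. The numbers below are taken directly from the text (one-way distance of 8,000–11,000 km, roughly 120 km covered per day, a quoted swimming speed of about 8 km/h); nothing here is an independent measurement.

```python
# Quick consistency check of the migration figures quoted in the article.
one_way_km = (8_000, 11_000)   # one-way migration distance (km), as quoted
km_per_day = 120               # distance covered per day (km), as quoted
speed_kmh = 8                  # quoted average swimming speed (km/h)

for dist in one_way_km:
    days = dist / km_per_day
    print(f"{dist:,} km at {km_per_day} km/day ≈ {days:.0f} days (~{days / 30:.1f} months)")

# 67-92 days is consistent with the "two- to three-month" trip described above.
# Note that 120 km/day corresponds to roughly 15 hours of swimming at 8 km/h,
# so the quoted speed cannot be sustained literally around the clock.
print(f"Hours per day implied by {km_per_day} km/day at {speed_kmh} km/h: {km_per_day / speed_kmh:.0f}")
```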
- A Generation Gap in Behaviors and Values. Younger adults attach far less moral stigma than do their elders to out-of-wedlock births and cohabitation without marriage. They engage in these behaviors at rates unprecedented in U.S. history. Nearly four-in-ten (36.8%) births in this country are to an unmarried woman. Nearly half (47%) of adults in their 30s and 40s have spent a portion of their lives in a cohabiting relationship.
- Public Concern over the Delinking of Marriage and Parenthood. Adults of all ages consider unwed parenting to be a big problem for society. At the same time, however, just four-in-ten (41%) say that children are very important to a successful marriage, compared with 65% of the public who felt this way as recently as 1990.
- Marriage Remains an Ideal, Albeit a More Elusive One. Even though a decreasing percentage of the adult population is married, most unmarried adults say they want to marry. Married adults are more satisfied with their lives than are unmarried adults.
- Children Still Vital to Adult Happiness. Children may be perceived as less central to marriage, but they are as important as ever to their parents. As a source of adult happiness and fulfillment, children occupy a pedestal matched only by spouses and situated well above that of jobs, career, friends, hobbies and other relatives.
- Cohabitation Becomes More Prevalent. With marriage exerting less influence over how adults organize their lives and bear their children, cohabitation is filling some of the vacuum. Today about half of all nonmarital births are to a cohabiting couple; 15 years ago, only about a third were. Cohabiters are ambivalent about marriage: just under half (44%) say they want to marry, while a nearly equal portion (41%) say they aren't sure.
- Divorce Seen as Preferable to an Unhappy Marriage. Americans by lopsided margins endorse the mom-and-dad home as the best setting in which to raise children. But by equally lopsided margins, they believe that if married parents are very unhappy with one another, divorce is the best option, both for them and for their children.
- Racial Patterns are Complex. Blacks are much less likely than whites to marry and much more likely to have children outside of marriage. However, an equal percentage of both whites and blacks (46% and 44%, respectively) consider it morally wrong to have a child out of wedlock. Hispanics, meantime, place greater importance than either whites or blacks do on children as a key to a successful marriage, even though they have a higher nonmarital birth rate than do whites.
- Survey Sample and Methods. These findings are from a telephone survey conducted from February 16 through March 14, 2007 among a randomly selected, nationally representative sample of 2,020 adults.

Americans believe that births to unwed women are a big problem for society, and they take a mixed view at best of cohabitation without marriage. Yet these two nontraditional behaviors have become commonplace among younger adults, who have a different set of moral values from their elders about sex, marriage and parenthood, a new Pew Research Center survey finds.

This generational values gap helps to explain the decades-long surge in births to unmarried women, which now comprise nearly four-in-ten (37%) births in the United States, as well as the sharp rise in living together without getting married, which, the Pew survey finds, is something that nearly half of all adults in their 30s and 40s have done for at least a portion of their lives.
But this generational divide is only part of a more complex story. Americans of all ages, this survey finds, acknowledge that there has been a distinct weakening of the link between marriage and parenthood. In perhaps the single most striking finding from the survey, just 41% of Americans now say that children are "very important" to a successful marriage, down sharply from the 65% who said this in a 1990 survey. Indeed, children have fallen to eighth out of nine on a list of items that people associate with successful marriages — well behind "sharing household chores," "good housing," "adequate income," "happy sexual relationship," and "faithfulness." Back in 1990, when the American public was given this same list on a World Values Survey, children ranked third in importance.

The new Pew survey also finds that, by a margin of nearly three-to-one, Americans say that the main purpose of marriage is the "mutual happiness and fulfillment" of adults rather than the "bearing and raising of children." In downgrading the importance of children to marriage, public opinion both reflects and facilitates the upheavals in marital and parenting patterns that have taken place over the past several decades. In the United States today, marriage exerts less influence over how adults organize their lives and how children are born and raised than at any time in the nation's history. Only about half of all adults (ages 18 and older) in the U.S. are married; only about seven-in-ten children live with two parents; and nearly four-in-ten births are to unwed mothers, according to U.S. Census figures. As recently as the early 1970s, more than six-in-ten adults in this country were married; some 85% of children were living with two parents; and just one birth in ten was to an unwed mother.

Americans take a dim view of these trends, the Pew survey finds. More than seven-in-ten (71%) say the growth in births to unwed mothers is a "big problem." About the same proportion — 69% — says that a child needs both a mother and a father to grow up happily. Not surprisingly, however, attitudes are much different among those adults who have themselves engaged in these nontraditional behaviors. For example, respondents in the survey who are never-married parents (about 8% of all parents) are less inclined than ever-married parents to see unmarried childbearing as bad for society or morally wrong. They're also less inclined to say a child needs both a mother and father to grow up happily. Demographically, this group is more likely than ever-married parents to be young, black or Hispanic, less educated, and to have been raised by an unwed parent themselves.

There is another fast-growing group — cohabiters — that has a distinctive set of attitudes and moral codes about these matters. According to the Pew survey, about a third of all adults (and more than four-in-ten adults under age 50) have, at some point in their lives, been in a cohabiting relationship with a person to whom they were not married. This group is less likely than the rest of the adult population to believe that premarital sex is wrong. They're less prone to say that it's bad for society that more people are living together without getting married. Demographically, this group is more likely than the rest of the adult population to be younger, black, and secular rather than religious.
But while this survey finds that people in nontraditional marital and parenting situations tend to have attitudes that track with their behaviors, it does not show that they place less value than others on marriage as a pathway to personal happiness. To the contrary, both the never-married parents and the cohabiters in our survey tend to be more skeptical than others in the adult population that a person can lead a complete and fulfilled life if he or she remains single. This may reflect the fact that never-married parents and cohabiters tend to be less satisfied with their current lives than is the rest of the population. For many of them, marriage appears to represent an ideal, albeit an elusive, unrealized one.

Along these same lines, the survey finds that low-income adults are more likely than middle-income or affluent adults to cite the ability to meet basic economic needs (in the form of adequate income and good housing) as a key to a successful marriage. Adults with lower socioeconomic status — reflected by either education or income levels — also are less likely than others to marry, perhaps in part because they can't meet this economic bar. And it's this decline in marriage that is at the heart of the sharp growth in nonmarital childbearing. This trend has not been primarily driven — as some popular wisdom has it — by an increase in births to teenage mothers. To the contrary, those rates have been falling for several decades. Rather, the sharp increase in nonmarital births is being driven by the fact that an ever greater percentage of women in their 20s, 30s and older are delaying or forgoing marriage but having children.

The Pew survey was conducted by telephone from February 16 through March 14, 2007 among a randomly selected, nationally representative sample of 2,020 adults. It has a margin of sampling error of 3 percentage points. (A back-of-the-envelope check of this figure is sketched in the short code example after this passage.)

The survey finds that while children may have become less central to marriage, they are as important as ever to their parents. Asked to weigh how important various aspects of their lives are to their personal happiness and fulfillment, parents in this survey place their relationships with their children on a pedestal rivaled only by their relationships with their spouses — and far above their relationships with their parents, friends, or their jobs or career. This is true both for married and unmarried parents. In fact, relatively speaking, children are most pre-eminent in the lives of unwed parents.

The survey also finds that Americans retain traditional views about the best family structure in which to raise children. More than two-thirds (69%) say that a child needs both a mother and father to grow up happily. This question has been posed periodically over the past quarter century, and — even as the percentage of children who live with both a mother and father has dropped steadily during this time period — public opinion has remained steadfastly in favor of a home with a mom and a dad. In keeping with these traditional views, the public strongly disapproves of single women having children. Among the various demographic changes that have affected marriage and parenting patterns in recent decades — including more women working outside the home; more people living together without getting married; more first marriages at a later age; and more unmarried women having children — it's the latter trend that draws the most negative assessments by far.
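As a brief aside before the findings continue, the sketch below checks the plausibility of the quoted 3-point margin of error for a sample of 2,020 using the standard formula for a proportion. The design-effect value is an assumption for illustration only; the report itself does not state one.

```python
import math

n = 2020          # sample size quoted in the report
p = 0.5           # worst-case proportion for margin-of-error purposes
z = 1.96          # multiplier for a 95% confidence level

simple_moe = z * math.sqrt(p * (1 - p) / n)   # simple-random-sample margin of error
design_effect = 1.9                           # assumed design effect (weighting/clustering), illustrative only
adjusted_moe = simple_moe * math.sqrt(design_effect)

print(f"Simple-random-sample margin of error ≈ {simple_moe:.1%}")                 # ≈ 2.2%
print(f"With an assumed design effect of {design_effect}: ≈ {adjusted_moe:.1%}")  # ≈ 3.0%
```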
Two-thirds (66%) of all respondents say that single women having children is bad for society, and nearly as many (59%) say the same about unmarried couples having children. No other social change we asked about in this particular battery drew a thumbs-down from more than half of respondents.

While the public strongly prefers the traditional mother-and-father home, this endorsement has some clear limits. By a margin of 67% to 19%, Americans say that when there is a marriage in which the parents are very unhappy with one another, their children are better off if the parents get divorced. Similarly, by a margin of 58% to 38%, more Americans agree with the statement that "divorce is painful, but preferable to maintaining an unhappy marriage" than agree with the statement that "divorce should be avoided except in an extreme situation."

Thus, public attitudes toward divorce and single parenting have taken different paths over the past generation. When it comes to divorce, public opinion has become more accepting. When it comes to single parenting, public opinion has remained quite negative. The oddity is that rates of divorce, after more than doubling from 1960 to 1980, have declined by about a third in recent decades, despite this greater public acceptance. On the other hand, the rates of births to unwed mothers have continued to rise, despite the steadfast public disapproval. Some 37% of all births in the U.S. in 2005 were to an unwed mother, up from just 5% in 1960. This rapid growth is not confined to the U.S. Rates of births to unwed mothers also have risen sharply in the United Kingdom and Canada, where they are at about the same levels as they are in the U.S. And they've reached even higher levels in Western and Northern European countries such as France, Denmark and Sweden.

Public Opinion by Demographic Groups

The group differences in public opinion on these matters tend to be correlated with age, religion, race and ethnicity, as well as with the choices that people have made in their own marital and parenting lives. There are some, but not many, differences by gender. Here is a rundown of the key differences by group.

Age, Religiosity and Political Conservatism

As noted above, the Pew survey finds that older adults — who came of age prior to the social and cultural upheavals of the 1960s — are more conservative than younger and middle-aged adults in their views on virtually all of these matters of marriage and parenting. Thus, some of the overall change in public opinion is the result of what scholars call "generational replacement." That is, as older generations die off and are replaced by younger generations, public opinion shifts to reflect the attitudes of the age cohorts that now make up the bulk of the adult population.

Even among the younger generations (ages 18 to 64), however, our survey finds substantial differences in attitudes that fall along the fault lines of religion and ideology rather than age. White evangelical Protestants and people of all faiths who attend religious services at least weekly hold more conservative viewpoints on pretty much the whole gamut of questions asked on the Pew survey. This is true across all age groups. For example, white evangelical Protestants are more likely than other religious groups to consider premarital sex morally wrong. They are more likely to consider the rise in unmarried childbearing and cohabitation bad for society and more likely to agree that a child needs both a mother and father to be happy.
They also are more likely to say legal marriage is very important when a couple plans to have children together or plans to spend the rest of their lives together. Further, white evangelical Protestants are more likely than white mainline Protestants to say that divorce should be avoided except in extreme circumstances and to consider it better for the children when parents remain married, though very unhappy with each other. In sum, white evangelical Protestants have a strong belief in the importance of marriage and strong moral prescriptions against premarital sex and childbearing outside of marriage. The pattern is the same among those of any faith who attend religious services more frequently, compared with less frequent attendees. And it is the same for political conservatives compared with their more moderate or liberal counterparts.

Race and Ethnicity

The racial and ethnic patterns in public opinion on these matters are more complex. Blacks and Hispanics are more likely than whites to bear children out of wedlock. And yet these minority groups, our survey finds, also are more inclined than whites to place a high value on the importance of children to a successful marriage. Indeed, they place higher value than whites do on the importance of most of the ingredients of a successful marriage that this survey asked about — especially the economic components. But blacks and Hispanics are less likely than whites to be married. One possible explanation to emerge from this survey is that many members of these minority groups may be setting a high bar for marriage that they themselves cannot reach, whether for economic or other reasons.

As noted above, there are sharp generational differences in views about the morality of unwed parenting. However, there is no significant difference on this front by race or ethnicity; blacks, Hispanics and whites are about equally likely to say it is wrong for unmarried women to have children. There are small differences along racial and ethnic lines when it comes to evaluating the impact on society of the growing numbers of children born out of wedlock. Hispanics are somewhat less negative about this phenomenon than are whites and blacks, between whom there is no statistically significant difference.

When it comes to the relationship between marriage and children, Hispanics again stand out. They are more inclined than either whites or blacks to consider having and raising children to be the main purpose of marriage (even so, however, a majority of Hispanics say that adult happiness and fulfillment is the main purpose of a marriage). Also, Hispanics — more so than either blacks or whites — consider children "very important" for a successful marriage. But when considering a broader range of characteristics of a successful marriage, it is whites who stand apart. They are much less likely than either blacks or Hispanics to consider adequate income, good housing and children to be "very important" to a successful marriage. And they are somewhat less likely to rate various measures of compatibility as being important as well. To some degree all these racial and ethnic differences reflect the differing socioeconomic circumstances of whites, blacks and Hispanics. People with higher incomes and education levels — regardless of their race and ethnicity — tend to rate these various characteristics as less important to marriage than do people with a lower socio-economic status.
When it comes to views about divorce, whites and, especially, Hispanics are more likely than blacks to say that divorce is preferable to maintaining an unhappy marriage. However, about two-thirds of all three groups say that it is better for the children if their very unhappy parents divorce rather than stay together. Views about cohabitation are similar for blacks and whites, while Hispanics are a bit less negative about the impact of cohabitation on society. But the similarities between blacks and whites mask divisions of opinion within each group. Among whites, the difference of opinion between generations is particularly sharp — with 55% of whites ages 50 and older saying that living together is bad for society, compared with 38% among younger whites, a difference of 17 percentage points. The comparable difference between older and younger blacks is just 9 percentage points. Among older blacks and whites, the balance of opinion is tilted in the negative direction. For younger whites (ages 18 to 49), a plurality hold a neutral assessment of the impact on society of couples living together without marrying. Among younger blacks, opinion about cohabitation is more divided; 48% of this group considers living together bad for society while 45% take a neutral position and 5% say it is good for society.

To some degree, views about cohabitation reflect differing moral assessments of premarital sex. Blacks are more likely than whites and Hispanics to say that premarital sex is always or almost always morally wrong — and this is true even after group differences in age are taken into account. Those who consider premarital sex wrong also tend to consider cohabitation bad for society, while those who say premarital sex is not wrong or is only wrong in some circumstances are more likely to say the cohabitation trend makes no difference for society.

When it comes to marital and parenting behaviors (as opposed to attitudes), a number of racial and ethnic patterns stand out. More than eight-in-ten white adults in this country have been married, compared with just seven-in-ten Hispanic adults and slightly more than half (54%) of all black adults. Among blacks, there is a strong correlation between frequent church attendance, moral disapproval of premarital sex and the tendency to marry. Among whites (who marry at much higher rates) this relationship is not as strong. Among those who have ever been married, blacks (38%) and whites (34%) are more likely than Hispanics (23%) to have been divorced. Blacks also are somewhat more likely than whites or Hispanics to have cohabited without marriage. But all three groups, this survey finds, are equally likely to have had children. Blacks and Hispanics are more likely than whites to have children out of wedlock. For all groups, this behavior also is strongly correlated with lower educational attainment. For blacks and Hispanics (more so than for whites), frequent church attendance correlates negatively with the likelihood of being an unwed parent.

The Pew survey finds a great deal of common ground between men and women on issues surrounding marriage and parenting. There are some small differences, however. While men and women are about equally likely to see unmarried parenting as a problem for society, men are a bit more negative than women about unmarried parenting when no male partner is involved in raising the children. Similarly, men are a little more likely than women to believe that children need both a mother and father to be happy.
Women, on the other hand, are a bit more likely than men to consider divorce preferable to maintaining an unhappy marriage; they also believe more strongly than men that divorce is the better option for children when the marriage is very unhappy. On other matters — such as the main purpose of marriage or the characteristics of a successful marriage — there are few differences.

Education and Income

College-educated adults and high-income adults marry at higher rates and divorce at lower rates than do those with less education and income. They are also less likely to have children outside of marriage.4 However, despite the sharp differences by socio-economic status in marital and parenting behaviors, there are only minor differences by socio-economic status in values and attitudes about marriage and parenting. Adults with higher incomes and more education tend to be slightly less inclined than others to say that premarital sex and nonmarital births are always morally wrong. The college educated also are slightly less inclined than the less educated to say it is very important for couples to legally marry if they plan to spend their lives together. Similarly, those with a college education are a little more likely to say that a man or woman can lead a complete and happy life if he or she remains single. There are no more than minimal differences by education or income when it comes to views about the impact on society of unmarried childbirths and of cohabitation.

The Pew survey finds some strong correlations between the kinds of family arrangements that respondents experienced growing up and their own behaviors in adulthood. For example, among respondents who are themselves products of parents who never married, about a third (32%) are themselves never-married parents. By comparison, just 5% of the general adult population are products of never-married parents. Family background in childhood plays a smaller role, however, in predicting adult attitudes (as opposed to behaviors) about whether unmarried parenting is bad for society and morally wrong. Once age differences are taken into account, those whose parents never married are just a bit less negative than those whose parents married and never divorced about the impact of unmarried childbearing on society.

Respondents with parents who divorced are just as likely as other respondents to take the position that divorce is painful but preferable to maintaining an unhappy marriage. Similarly, among people ages 18 to 49, the now grown children of divorce hold about the same views as those who grew up in a traditional-married-parent arrangement on whether divorce is better for children than parents staying in an unhappy marriage. On the other hand, those respondents whose parents divorced are less likely than other respondents to believe that a child needs a home with both a mother and a father to grow up happily.

Moral Beliefs, Attitudes and Behaviors

There are close relationships between behaviors, attitudes and moral beliefs when it comes to the subjects of unwed parenting and cohabitation, the Pew survey finds. For example, those who have fewer moral reservations about premarital sex and are positive or neutral about the impact of living together on society also are more likely to have lived with a partner themselves. Similarly, those who are positive or neutral about the social impact of unmarried parenting and less likely to consider it morally wrong are also more likely to be in this situation themselves.
It is not possible from this survey to disentangle which came first — the moral beliefs, the attitudes, or the behaviors — but it is clear they tend to go hand-in-hand. Statistical analysis of these survey findings shows that having less education and being black or Hispanic are traits associated with being a never-married parent. Attending religious services less often also is associated with being an unmarried parent, particularly among blacks and Hispanics. On the other side of the coin, those who believe that having children without being married is wrong are less likely to be never-married parents. Also, those who consider the rise in unmarried parents bad for society are less likely to be unmarried parents.

A statistical analysis of factors correlated with ever having lived with a partner outside of marriage shows that cohabiters are younger, more likely to be black, and, after controlling for other demographic factors, less likely to be Hispanic. They are also less likely to attend religious services frequently. There is a strong relationship between moral beliefs about premarital sex and cohabitation history; those who consider premarital sex always wrong are less likely to have cohabited than others. They are also less likely to have cohabited than those who say living together is bad for society — suggesting that the more powerful stigma against cohabitation comes from concerns about morality rather than from concerns about social consequences.

A different pattern emerges when looking at differences between married people who have — and haven’t — been divorced. Here, the demographic and attitudinal factors do little to predict the probability of experience with divorce. There are a few exceptions, however. Catholics are a bit less likely than members of other religious groups to have been divorced. And there is a modest correlation between having been divorced and believing that divorce is better for the children than maintaining a very unhappy marriage. But in the main, experience with divorce cuts across all demographic subgroups more evenly than does experience either with unmarried parenting or with cohabitation. The belief that divorce is preferable to maintaining an unhappy marriage is widely shared by both those who have and have not been divorced.

Read the full report for more details.
The World Health Organization estimates that one billion people worldwide lack safe drinking water and that 1.6 million people, mostly young children, die yearly from related diarrheal illnesses. The majority of this disease burden falls on developing countries, especially on urban fringes, in remote farming villages and Indigenous communities1. Several technologies have been developed and deployed in communities without access to safe public drinking water to treat water in the home. The best-studied methods of point-of-use (POU) sterilization include chlorination with safe storage, combined coagulant-chlorine disinfection systems, ultraviolet radiation through clear plastic containers (SODIS), ceramic filtration and biosand filtration2. Controversy exists regarding which of these technologies is superior, with some experts believing that solutions should vary between communities based on environmental, cultural and social considerations3.

Characteristics of a practical and effective POU technology include: (i) the ability to produce sufficient quantities of microbiologically safe drinking water in a reasonably short period of time; (ii) the ability to treat water from different sources that may have high turbidity and organic content; (iii) low cost to implement, operate and replace; and (iv) the ability to maintain effective and high post-implementation use levels after deployment in the field2.

Since they were first installed in Nicaragua in 1996, approximately 80,000 biosand filters (BSFs) have been put into use in 20 countries worldwide. A detailed description of the BSF can be found elsewhere4. In brief, these filters clean water by a combination of straining, adsorption and the biological activity of the so-called 'schmutzdecke', an accumulation of organic and inorganic charged compounds created on the sand column5. Log10 reductions of 0.5-4.0 for bacteria, viruses and protozoa in filtered water have been reported6, with filter performance varying with maturity, dosing conditions, flow rate, pause time between doses, grain size, filter bed contact time and other design and operation factors7. Filter performance is most commonly monitored by reductions in colony forming units (CFUs) of Escherichia coli, an indicator organism.

This BSF project was primarily funded by the Newton San Juan del Sur Sister City Project (http://www.newtonsanjuan.org) and the Conservation Food and Health Foundation as a component of a parasite eradication program. The BSFs were manufactured locally of concrete and filled with 'virgin' sand from the Montastepe volcano. The sand was hand sifted, washed and chlorine sterilized according to procedures recommended by the Centre for Affordable Water and Sanitation Technology (http://www.cawst.org/en/resources/pubs/category/12-biosand-filter-project-implementation). Manufacturing costs were US$60 per filter, exclusive of delivery to the homes. A pilot project in 2007 introduced 21 BSFs in the Papaturro community of Nicaragua. The program was expanded by an additional 220 filters in 2008 and 360 filters in 2009. Filters were made available to families on an ad hoc basis. The only requirement was that households be located close enough to a dirt road to allow filter delivery on a flatbed truck. Recipient families did not pay for the filters and were not required to contribute effort in the production of the filters.
Household members were required to sign a document stating that they would adhere to recommended filter use practices that were taught to them by the Newton Sister City Staff. A support team of brigadistas made occasional visits to households to reinforce filter best practices. In August 2008 the authors conducted a preliminary visit to the communities studied in this report to pilot the methods and procedures. Poor performance by a small nonrandom sample of BSFs studied in this feasibility demonstration project resulted in a request for a more detailed assessment of BSF performance by one of the funding sources (Newton Sister City Project). This study was conducted in July and August 2009 in response to that request. This filter performance project was conducted by individuals who were independent of the funding sources and manufacturers of the filters. The goal of the project was to conduct an independent assessment of the performance of the filters in use in the communities. Emphasis was thus placed on laboratory measurement of filter performance. The cost of this study was approximately US$10,000 and covered housing, food, transportation (exclusive of airfare to Nicaragua) and laboratory expenses. This project was funded by both private donations and a grant from the University of Michigan. All filters studied were in surrounding villages of San Juan del Sur, Rivas Department, Nicaragua. Most households were small farms with limited animal husbandry consisting primarily of cattle and pigs. Water for virtually all of the homes came from wells that were approximately 6-9 m (20-30 feet) deep. Generally, wells were not intentionally situated in areas that protected them from animal or human waste. Homes were typically two or three rooms with dirt floors and no plumbing. Virtually every home had its own well although some sharing was apparent. Testing was done in July and August, months in the early wet season. Samples were taken from homes in the following communities: El Toro, Venado, Saragosa, Barbudos, Pueblo Nuevo, La Rejega, Carrizal, Bernardino, La Cuesta, Nevada, El Oro, Collado and Ojochal (filters for the last two communities were funded independently of the Newton Sister City Project). This evaluation was conducted with the approval of the Institutional Review Board of the University of Michigan. Verbal informed consent was obtained. Confidentiality is preserved by anonymous reporting of results. The study team consisted of two North American volunteers who were on-site for the entire study period and provided training for supplemental volunteers who worked for periods of 2 weeks each. A local translator and driver were hired for US$50 weekly. Questionnaires were administered in Spanish by the translator or a bilingual volunteer when available. This study was conducted on a convenience sample of 199 homes from 13 communities. These communities were selected by the translator and driver based on familiarity with the region and BSF project. Selection of homes in each community was not random but rather a generally successful attempt was made to visit all homes in each community that had received a filter. Of the 199 homes visited, laboratory data is presented on the 154 where the BSFs were reported to be in use by household members. Descriptive statistics are used to demonstrate bacterial contamination of water at the source, the filter spout and in the storage bucket. 
The communities visited were within a 90 min driving radius of the town of San Juan del Sur on the Pacific coast of southern Nicaragua. Homes with filters were identified by memory (driver and translator) and by questioning community members. Based on availability and willingness, brigadistas accompanied the study team on some visits. One unannounced visit was made to each home. The daily routine was to arrive at a village in the early morning and visit approximately 12 homes prior to returning to San Juan del Sur by mid-afternoon to begin laboratory work. The team worked Monday to Friday collecting and analyzing samples. In addition to the paid translator and driver, the study team consisted of at least two volunteers who collected water samples and recorded responses to the questionnaire. Visits were generally conducted in 15 to 30 min. Occupants of virtually all households (150/154) responded to the questionnaire and allowed for filter inspection and for water samples to be taken. The questionnaire (Table 1) consisted of 11 questions with yes/no responses.

Water samples were obtained from three sources for each home. First, a sample was taken from the drawing bucket of each family's well. This bucket of water was then poured into the BSF and a second sample was obtained from the filter spout after approximately 1 min of flow. A final sample was taken from the storage bucket of the household (if available). Filter flow rates, water turbidity and pH were not measured. All samples were collected into standard 100 mL 'Whirl-Pak' sample bags (NASCO; Atkinson, WI, USA) and placed in a cooler on ice. All samples were filtered and plated the evening of collection day and read 24 hours later.

Table 1: Questionnaire response data

A laboratory area consisting of a work bench, cabinets, sink, refrigerator and writing space was established in the kitchen and dining area of the volunteers' rented apartment. All laboratory supplies were purchased in the USA and transported to Nicaragua. Briefly, 100 mL samples of water were vacuum filtered through 0.45 µm Millipore membranes using an electric vacuum pump. Membranes were then placed on Bio-Rad RAPID'E.coli 2 Agar (http://www.rapidmicrobiology.com/news/1027h13.php) and placed in a portable incubator for 24 hours. The culture medium was prepared weekly according to the manufacturer's recommendations and stored in a refrigerator. The filter apparatus and flasks were alcohol sterilized between samples. Testing confirmed that this method led to effective sterilization and did not interfere with recovery of organisms from the subsequent filtration. The E. coli colonies were identified by their characteristic purple colony color on this medium. Colony counts were performed in duplicate by different observers and averaged. Counts were repeated and consensus reached if there was more than 15% disagreement. Results were recorded as CFUs per 100 mL water. The membrane filtration laboratory procedures followed US EPA standard 16038. Quality assurance was assessed by performing 10% of experiments in duplicate and including 'blanks' of sterile water daily. To assure consistency, one author (MF) was present and participated in all water sample collection and laboratory work. All sets of samples (well, filter spout, storage bucket) were analyzed at the same time using the same batch of culture medium. Results were entered into a Microsoft Excel spreadsheet.
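The duplicate-counting rule described above is simple enough to express as a short check. The sketch below is illustrative only; the function name, and the assumption that the 15% disagreement threshold is measured relative to the larger of the two counts, are not part of the published protocol.

```python
def reconcile_counts(count_a: float, count_b: float, tolerance: float = 0.15):
    """Average two observers' colony counts (CFU per 100 mL).

    If the counts disagree by more than `tolerance` (here taken as a fraction
    of the larger count, an assumption), the pair is flagged so the plate can
    be recounted and a consensus reached, mirroring the quality-control rule
    described in the text.
    """
    larger = max(count_a, count_b)
    if larger == 0:
        return 0.0, False  # both observers saw no colonies
    disagreement = abs(count_a - count_b) / larger
    needs_recount = disagreement > tolerance
    return (count_a + count_b) / 2, needs_recount

# Example: counts of 42 and 46 CFU/100 mL differ by ~8.7%, so the mean (44) is kept.
mean_cfu, recount = reconcile_counts(42, 46)
```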
The questionnaire was administered by the translator (or bilingual volunteer when present) to the person who was primarily responsible for using and maintaining the filter. This was typically the female head of household. When this person was not available, questions were answered by any household member willing to do so. Results were collected directly onto the questionnaire form and then entered into an Excel spreadsheet.

The BSFs were in use in 154 of the 199 households visited (77%). Filters had been in use for approximately 12 months on average. Sixteen of the 154 filters that were said to be in use by household members had moist sand but no standing water in them. The remaining filters had approximately 5 cm of water above the level of the sand. No attempt was made to enter the homes and inspect the filters of the 45 families who claimed to have discontinued their use. Water samples were obtained and tested only in the 154 households where the BSFs were in use at the time of the visit. Laboratory and collection accidents resulted in the loss of 3 well-water samples. Sixteen of the 154 households did not have storage buckets from which to obtain and test water. Complete sets (well, filter spout and storage vessel water) were thus available for 88% of households tested.

The number of CFUs of E. coli per 100 mL of water obtained at the source (well), filter spout and storage bucket is shown (Table 2). Water from all wells contained in excess of 10 CFUs of E. coli per 100 mL. The filter efficiency (percent reduction of E. coli from well to filter spout) was calculated (Formula 1):

Filter efficiency (%) = 100 × (CFU well − CFU filter spout) / CFU well

Similarly, the overall process efficiency (percent reduction in E. coli), reflecting the improvement in water purity from source to storage vessel, was calculated (Formula 2):

Overall efficiency (%) = 100 × (CFU well − CFU storage vessel) / CFU well

Results are shown (Table 3). Colony forming units per 100 mL in sterile blanks ranged from 0 to 2 and the coefficient of variation for repeated experiments was 6.5%. Although biosand filtration reduced CFUs in 74% of households in which it was used, in only 5 cases (3%) did filtered water have no detectable E. coli CFUs, the stringent target level of purity recommended by the WHO. In 26 cases (17%) CFUs were reduced to levels <10 CFUs/100 mL. The median filter efficiency was 80% and the overall program efficiency was 48%, indicating frequent recontamination of filtered water in storage vessels. Colony counts of less than 10 per 100 mL were found in only 3 of 135 storage vessels (2%) tested. Recalculation of filter efficacy rates excluding the 16 filters without standing water had little effect on the overall filter efficacy rates (data not shown).

Table 2: Colony forming units of Escherichia coli per 100 mL found in well, filter spout and storage vessel water

Table 3: Filter and overall efficiency rates

The primary source of water for all households was a well, the majority of which (84%) were in the immediate vicinity of farm animals. Virtually all subjects interviewed stated that they were pleased with their BSF, that they used it every day, that it improved the taste of their water and resulted in improved family health. Approximately one-third of respondents reported occasional consumption of unfiltered water and a similar proportion did not sterilize their storage vessel with chlorine. Ten percent of households reported that at least one member of their family had had a diarrheal episode in the last 2 weeks.
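For readers who want to reproduce the arithmetic, the sketch below applies the two percent-reduction formulas above, plus the log10 reduction metric cited in the introduction, to a hypothetical household. The CFU values are invented for illustration (chosen so the results echo the reported 80% and 48% figures) and are not taken from the study's data tables.

```python
import math

def percent_reduction(cfu_in: float, cfu_out: float) -> float:
    """Percent reduction in E. coli, e.g. well -> spout (filter efficiency)
    or well -> storage vessel (overall process efficiency)."""
    return 100.0 * (cfu_in - cfu_out) / cfu_in

def log10_reduction(cfu_in: float, cfu_out: float) -> float:
    """Log10 reduction value, the metric commonly used in the filtration literature."""
    return math.log10(cfu_in / cfu_out)

# Hypothetical household: 500 CFU/100 mL at the well, 100 at the filter spout,
# 260 in the storage bucket (i.e. recontamination after filtration).
well, spout, bucket = 500.0, 100.0, 260.0
print(percent_reduction(well, spout))          # filter efficiency: 80.0 %
print(percent_reduction(well, bucket))         # overall efficiency: 48.0 %
print(round(log10_reduction(well, spout), 2))  # 0.7 log10 reduction
```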
Data was also collected on the frequency of visits by brigadistas to reinforce filter best practices, with 71% of the 154 households reporting that they had been visited by a brigadista (program staff) within 3 months of the unannounced study visit. After an average of 12 months of use, 45 of 199 households (23%) visited stated that they were no longer using their BSFs. Reasons cited for not using filters included: no access to replacement sand (n = 18), broken filter or missing parts (9), infestation with ants (5), poor tasting water (3), apathy (4), and reason not given (6). Among these 45 households, 60% cited no or insufficient contact with brigadistas as contributing to their lack of use of the BSF.

The finding that 23% of households were no longer using their filters after 1 year is consistent with findings reported in the literature3. In a study of ceramic water filters, Brown reported that in rural Cambodia filter use declined at the rate of approximately 2% per month (24% per year) after installation and training9. Important determinants in maintaining the rate of filter use were cash investment in the technology by the household and use of surface water as a primary drinking water source. Neither of these conditions was met in the communities studied in this report.

Results of the questionnaires were surprising and apparently inconsistent. Virtually all respondents noted that the health of their family had improved with use of the filter, despite a third of subjects occasionally drinking unfiltered water and laboratory results suggesting that 41 of 151 filters produced no reduction in E. coli CFU counts. Additionally, households with filters in use reported diarrhea rates of 10% within the preceding 2 weeks, whereas households without filters in use reported a rate of 2% (1 out of 45, data not shown). Fear of losing their filters, a concern directly expressed by some owners, may have influenced them to give 'correct' responses to questions. For example, the majority of respondents (102/150) claimed to be cleaning their storage vessel with chlorine as directed, although recontamination of water in their storage buckets was evident and no chlorine was found in their homes at the time of the visit.

The median filter efficacy of 80% (Table 2) is similar to the 83% reduction found in the Dominican Republic10 but is lower than rates reported elsewhere4. The overall program efficiency (well to storage vessel) of 48% reflects the documented problem of recontamination of water in storage vessels due to inadequate cleaning4. That filtered water was found to have higher CFUs than source water in 26% of households was surprising. Possible explanations include highly contaminated water resident in the filter from prior use or bacterial re-growth in stagnant water if the filter had not been used for a prolonged period of time. The latter hypothesis suggests inaccurate reporting of filter use frequency by household members. Typically, E. coli does not proliferate in water6, but the authors are not aware of data confirming that E. coli colony counts do not rise in stagnant filter water that is tested after having sat unused for prolonged periods of time. Such data would be useful in explaining these results and would suggest that intermittent filter use might result in higher levels of contamination due to incubation in the filter and pose an important health risk.

The purpose of this study was to assess the performance characteristics of BSFs in the field.
This was accomplished by a one-time cross-sectional study assessing BSF reduction in E. coli CFUs for as many filters as could be identified and reached in the communities. The important findings of a modest filter efficiency rate and a high rate of water recontamination, combined with non-use of filters in 23% of households, suggest non-compliance with filter best practices. One consideration is that the perceived value of the biosand filter to its users is related more to the social status afforded by ownership than to its effect on the reduction in the burden of disease in the family and community. Although BSFs are believed to reduce diarrheal episodes by 50%2, it is possible that, in the communities studied, the baseline health impact of consuming unfiltered water and the modest improvement afforded by filter use are an insufficient motivator for most individuals. In effect, the association between filter use and health improvement may be subtle and difficult to link10. Indeed, 'clear links and consistent relationships have not been established between household levels of E. coli in drinking water and diarrheal disease risks'9. Implicit in this is that levels of E. coli in source water are imperfectly associated with diarrheal disease burden, implying that in this region E. coli CFUs may not be an appropriate indicator species.

The authors acknowledge the support of a University of Michigan International Institute Fellowship Grant and private donations.

1. Kosek M, Bern C, Guerrant RL. The global burden of diarrhoeal disease, as estimated from studies published between 1992 and 2000. Bulletin of the World Health Organization 2003; 81(3): 197-204.
2. Sobsey M, Stauber C, Casanova L, Brown JM, Elliott MA. Point of Use Household Drinking Water Filtration: A Practical, Effective Solution for Providing Sustained Access to Safe Drinking Water in the Developing World. Environmental Science and Technology 2008; 42: 4261-4267.
3. Lantagne D, Meierhofer R, Allgood G, McGuigan KG, Quick R. Comment on 'Point of Use Household Drinking Water Filtration: A Practical, Effective Solution for Providing Sustained Access to Safe Drinking Water in the Developing World'. Environmental Science and Technology 2009; 43(3): 968-969.
4. Duke WF, Nordin RN, Baker D, Mazumder A. The use and performance of biosand filters in the Artibonite Valley of Haiti: a field study of 107 households. Rural and Remote Health 6: 570. (Online) 2006. Available: http://www.rrh.org.au/journal/article/570 (Accessed 27 March 2010).
5. Hijnen WA, Schijven JF, Bonné P, Visser A, Medema GJ. Elimination of viruses, bacteria and protozoan oocysts by slow sand filtration. Water Science and Technology 2004; 50(1): 147-154.
6. World Health Organization. WHO guidelines for drinking-water quality, 3rd edn. Geneva: WHO, 2004; 143.
7. Stauber CE, Elliott MA, Koksal F, Ortiz GM, DiGiano FA, Sobsey MD. Characterisation of the biosand filter for E. coli reductions from household drinking water under controlled laboratory and field use conditions. Water Science and Technology 2006; 54(3): 1-7.
8. United States Environmental Protection Agency. Method 1603: Escherichia coli (E. coli) in Water by Membrane Filtration Using Modified membrane-Thermotolerant Escherichia coli Agar (Modified mTEC). (Online) 2002. Available: http://www.epa.gov/microbes/1603sp02.pdf (Accessed 3 August 2010).
9. Brown J, Proum S, Sobsey MD. Sustained use of a household-scale water filtration device in rural Cambodia. Journal of Water and Health 2009; 7(3): 404-412.
10. Stauber CE, Ortiz GM, Loomis DP, Sobsey MD. A Randomized Controlled Trial of the Concrete Biosand Filter and Its Impact on Diarrheal Disease in Bonao, Dominican Republic. American Journal of Tropical Medicine and Hygiene 2009; 80(2): 286-293.
A new way of understanding and regulating privacy may be necessary to protect individuals’ sensitive information, as journalists and other data practitioners increasingly turn to online social network data to study and explain social phenomena. Currently, most online social networks (OSNs) let their users customize their privacy settings, allowing them to hide or reveal information such as their gender and location. As we explain in the following article, the inherent characteristics of OSNs, through the practice of predictive analytics, allow them to bypass individuals’ privacy choices. This means that the way individuals’ privacy is protected - and technically guaranteed - requires fundamental changes. We believe that solving this problem is becoming increasingly important, as the data generated by OSNs is considered extremely valuable by companies as well as by academic researchers and journalists, because it provides information about human behavior and the functioning of society.

What are online social networks?

As researchers Boyd and Ellison explained in 2007, OSNs have three fundamental characteristics that distinguish them from other online data repositories (e.g. forums and other types of websites):

- User profiles are public or semi-public
- They contain a network structure that connects people to each other
- The connections are visible to both the connected users and the network as a whole

Crucially, it is the public visibility of connections that allows OSNs to compromise individual privacy, as we will explain below. Before we delve deeper into OSNs, it is important to understand what privacy is and what expectations are associated with it. A common definition of the term is given by Westin in 1970, who describes it as "the ability for people to determine for themselves when, how, and to what extent information about them is communicated to others". In many countries, privacy is understood as a fundamental citizen right. Privacy International reports that there are around 130 countries worldwide granting the right to privacy to their citizens. Beyond individual countries, however, international and cross-border unions and political entities recognize privacy as a human right (e.g. the UN’s Declaration of Human Rights (UDHR) 1948, Article 12). In many countries, the right to privacy explicitly extends to, and otherwise indirectly encompasses, the right to personal data protection. This essentially indicates the need to handle personal data with care, and the unlawfulness of exposing individuals’ personal data without their explicit consent.

Why are OSNs a risk to individual privacy?

Consider the example of a person using a credit card service: as a purchase takes place, personal information of the individual such as the purchased service or product, time, location, etc. is collected. This process takes place in isolation from the actions of other customers using credit card services. OSNs, by contrast, are infused with the social interaction of individuals connected in the virtual space. With respect to privacy, it is the connection of individuals in the network that acts as a source of personal exposure. OSNs contain large amounts of data and have an increasing number of users. Those two features of OSNs have made them a popular source of data, particularly in light of the emerging practice of predictive analytics.
Predictive analytics describes the extraction and subsequent mining of data, seeking patterns and information generated using statistical and mathematical techniques such as community detection, dimensionality reduction and social network analysis (Mishra and Silakari, 2012). There are three key elements of OSNs that make them the ideal playground for predictive analytics:

- social network features
- user-generated textual content
- location-based information

It is in these three areas that risks to individual personal data safety originate.

Social network features

A widely studied phenomenon in social science is that of homophily, also described as “birds of a feather flock together”. Homophily describes a pattern where those individuals in a social network who are connected by ties are on average more similar to one another than to those with whom no ties are shared (McPherson et al., 2001). Homophily is found in the digital world just as much as in the real world. How this translates to OSNs is that individuals are significantly more similar to their connections than to their non-connections (Gayo-Avello et al., 2011). This finding has been at the heart of predictive analytics, and it carries enormous consequences for individual privacy. For example, in one study (Mislove et al., 2010), OSN data is mined and information about a person’s peers is used to predict the characteristics of that individual. Following this study, more research emerged that presented statistical models able to draw inferences from social ties to the personal attributes of an individual, all with high levels of certainty in prediction (e.g. Gayo-Avello et al., 2011; Sarigol et al., 2014).

But there is more to it. In other research, Zheleva et al. (2009) showed that personal attributes can also be inferred from group-level structures in OSNs. Using group-level classification algorithms, their statistical model is capable of discovering group structures within friendship networks and identifying the groups an individual belongs to. What this means is that the publicly observable similarity of group members allows others to infer attributes of individuals in the group who have chosen to hide their personal information.

And yet another approach exists. This time, it is the diversity in the social ties, rather than homophily, that drives the prediction. A study shows that diversity in social ties can be used to infer a person’s romantic partner. Compared to the methods above, the authors develop a more nuanced understanding of ties by measuring tie strength and further tie-related characteristics and use those to predict yet another tie-related feature. But what if you did not want to disclose your relationship to the OSN? This study shows that your preference does not matter.

User-generated textual content

Language is another tool for OSN-based inference. Research in sociolinguistics shows that the way we speak, in the vocabulary we use, the topics we bring up, and even in our usage of punctuation, tends to correlate with individual-level attributes (Labov, 1972; Coates, 1996; Macaulay, 2005). Language serves the purpose of communicating to others just as it operates as a tool for status display and as proof of belonging and identification with particular groups. Also, we are likely to express shared identities with others through common expressions or slang, be they tied to ethnicity, religion, gender, or social class.
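Before turning to how language itself becomes a predictive signal, here is a minimal sketch of the neighbor-based attribute inference described under "Social network features" above. It is a deliberately simplified, hypothetical version of what studies such as Mislove et al. (2010) do with far more sophisticated statistical models: it guesses a hidden attribute of a user by taking a majority vote over the publicly visible attributes of that user's connections. The graph, the attribute values and the names are invented for illustration.

```python
from collections import Counter
from typing import Optional

# Hypothetical friendship graph: user -> set of publicly visible connections.
graph = {
    "ana": {"ben", "cho", "dia"},
    "ben": {"ana", "cho"},
    "cho": {"ana", "ben", "dia"},
    "dia": {"ana", "cho"},
}

# Attributes some users chose to disclose; "ana" keeps hers private.
disclosed = {"ben": "party_x", "cho": "party_x", "dia": "party_y"}

def infer_attribute(user: str) -> Optional[str]:
    """Guess a hidden attribute by majority vote over a user's connections.

    This is the crude core of homophily-based inference: because connected
    users tend to be similar, the visible choices of someone's friends leak
    information about an attribute that person chose to hide.
    """
    votes = Counter(
        disclosed[friend] for friend in graph[user] if friend in disclosed
    )
    if not votes:
        return None  # nothing to infer from
    return votes.most_common(1)[0][0]

print(infer_attribute("ana"))  # -> "party_x", although ana never disclosed it
```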
Because the way we use language is patterned in these ways, it is predictable, and thus can be used to infer personal characteristics. The abundance of publicly available text uploaded by users of OSNs creates powerful datasets for the development of classification algorithms and models that place individuals in groups by their linguistic choices and style. A study by Rao et al. (2010) uses a large Twitter dataset and machine learning to train a classification algorithm and successfully infer user characteristics such as gender, age, political orientation, and religion.

The last dimension of OSNs for inference is geographic information. From the home address to the precise location of a person in the present moment, unveiling such private details of an individual means accessing information that can be used to learn about a person’s whereabouts or deliver more effective advertising, exploiting personalization of content and timeliness of advertisement delivery. The sensitivity of geographical personal information is recognized by companies too, reflected in Foursquare’s homepage slogan: “With uncompromising accuracy, accessibility, scale, and respect for consumer privacy, Foursquare is the location platform the world trusts” (Foursquare, 2020). A study by Pontes et al. (2012) uses data from Foursquare, Google+ and Twitter to infer an individual’s home location from publicly available geographic information uploaded by the users, as well as the disclosed home locations of the users’ friends. The developed model had an accuracy of 74%, and for a smaller subset of users the home residence was inferred within a six-kilometer radius with an accuracy of 60% for Foursquare and Twitter, and just around 10% for Google+. Further research has reversed the mechanism and used available geographic information to infer individuals’ social relations. Using Flickr data, Crandall et al. (2010) built an inference model of social ties based on geographic co-occurrences. Their research answers the question: when two people appear in nearby locations a given number of times, what is the likelihood that they know each other? The research shows that it takes very few co-occurrences to infer the underlying social network structure of individuals.

Why does this matter?

The task of inferring individuals’ private characteristics carries ethical concerns. In particular, research that uses tools to bypass users’ privacy choices and unmask their characteristics for the purpose of “advertising, personalization, and recommendation” (Rao et al., 2010:44) requires careful consideration. In their paper, Rao et al. (2010) state that the study is motivated by the interest in inferring attributes users have chosen to keep private; but how can stripping individuals of their protected identity be justified when it is deliberately undisclosed, particularly with the aim of delivering targeted advertising? The lack of ethical consideration by Rao et al. (2010), particularly regarding the risk of compromising an individual’s privacy, is concerning. Their research provides an example of predictive analytics where the inferential practice is done without (i) an ethical framework posing boundaries to the study and (ii) a clear indication of how subjects are protected and defended in their right to privacy.

One issue with predictive analytics is that the lines around informed consent can become blurry. Informed consent has been a pillar of the moral framework guiding research ever since the Nuremberg Code of 1947.
In Big Data ethics, a central challenge is defining informed consent. What type of practices are subjects informed about? Is publicly accessible information ethically acceptable to use for research? Does the same apply when it is used to create new information about an individual and their connections? In informed consent, transparency is crucial, and the subjects of an inferential study should always know that their data is, and can be, used in that way.

Another chilling factor is the fact that OSN companies themselves do not disclose what they do to generate advertising revenue or improve their platforms, but predictive analytics seems to play a role. In recent years, the controversial practice of building shadow profiles on the part of OSNs has been at the center of an ethical debate. Shadow profiles describe the usage of predictive models by OSNs to collect personal information a user does not disclose. These are maintained privately by OSN providers outside of user agreement, permission, or terms of service. The first mention of shadow profiles involved Facebook, in 2013 (Blue, 2013), which had collected phone numbers from the mobile phone books of its users. Facebook was thereby able to infer the phone numbers of individuals who had not directly disclosed them but whose numbers were stored in peers’ phone books.

There are yet other implicit risks. Insights such as the classification of individuals, using language, into different genders, political groups and religions can be used inappropriately by information holders. An example of this is provided by the controversial study carried out by AI researchers Kosinski and Wang (2017), which used the profile pictures of members of an online dating community to predict sexual orientation. The findings were quite remarkable, as the authors were able to predict with 91% accuracy whether a male member defined himself as heterosexual or gay, while for women the figure sat at 83%. The implications of such a study are substantial: classifying individuals by their biological and facial traits can lead to discrimination by actors opposed to homosexuality, be they an employer or a government.

What can we do about this?

Once you understand that predictive analytics can pose serious threats to privacy, especially in the hands of revenue-oriented companies or naïve practitioners, it may feel like the options to safeguard individual privacy are limited. A solution would be to educate individuals on OSNs and inference (Zheleva and Getoor, 2009). This is meant to enable them to make better choices in their privacy personalization settings. For example, knowing that group homogeneity makes the prediction of personal attributes highly accurate, individuals could use this information to reflect on their group characteristics and, if concerned about their privacy, try to diversify their group properties. But this is quite complicated to achieve: groups evolve over time, as individuals continuously change and add information in their personal profiles, altering the predictable outcomes for any given individual in the network. Also, an education in privacy and inference cannot be understood as a one-off session, as new models and new tools are constantly generated. Another problem is that the insights we are aware of are limited to those that have been made public, which limits the effectiveness of this solution.
As researchers came to understand that shaping one’s personal privacy in social networks is not entirely up to the individual, they began to discuss the appropriateness of the concept of individual privacy within the context of digital platforms. The debate centered on the fact that the decision to disclose personal information is no longer governed by individuals alone but shifts toward a collective level. This shift has significant implications for privacy and policy (Sarigol et al., 2014).

Privacy as we know it in OSNs falls into a model known as access control, where to achieve privacy users individually decide what to disclose and to whom. Researchers looking at Westin’s definition of privacy stress that the functions of privacy do not take place in isolation. Instead, relational features of privacy are crucial (Cohen, 2012), as privacy is not a binary state but contextual within networks. The definition of privacy as we know it is increasingly outdated. A new privacy paradigm known as networked privacy (see Garcia 2019) takes this into account. The researchers behind it (Marwick and Boyd 2014) highlight the importance of individual agency in governing one’s own privacy but stress that social network structures undermine this agency. Networked privacy thus focuses on giving individuals the ability to control the information that arises from the social network structure and flows within it. Practically, this is described as equipping individuals with the knowledge and authority for ”shaping the context in which information is being interpreted” (Marwick and Boyd, 2014:1063).

Currently, it seems that making individuals aware and in control of their network may be the only way to ensure their privacy wishes are fulfilled. As OSNs and the appeal of their data are here to stay, a shift towards networked privacy seems like a necessary solution to defend individuals’ right to privacy while maintaining the functionality of OSNs.

The work cited in our article is listed below:

- Acquisti, Alessandro. ”Privacy in electronic commerce and the economics of immediate gratification.” In Proceedings of the 5th ACM conference on Electronic commerce, pp. 21-29. 2004.
- Agrawal, Divyakant, Ceren Budak, Amr El Abbadi, Theodore Georgiou, and Xifeng Yan. ”Big data in online social networks: user interaction analysis to model user behavior in social networks.” In International Workshop on Databases in Networked Information Systems, pp. 1-16. Springer, Cham, 2014.
- Backstrom, Lars, and Jon Kleinberg. ”Romantic partnerships and the dispersion of social ties: a network analysis of relationship status on facebook.” In Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing, pp. 831-841. 2014.
- Bannerman, Sara. ”Relational privacy and the networked governance of the self.” Information, Communication & Society 22, no. 14 (2019): 2187-2202.
- Bagrow, James P., Xipei Liu, and Lewis Mitchell. ”Information flow reveals prediction limits in online social activity.” Nature Human Behaviour 3, no. 2 (2019): 122-128.
- Blue, Violet. ”Anger mounts after Facebooks shadow profiles leak in bug”. 2013. https://www.zdnet.com/article/anger-mounts-after-facebooks-shadow-profiles-leak-in-bug/
- Boyd, Danah, and Kate Crawford. ”Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon.” Information, Communication & Society 15, no. 5 (2012): 662-679.
- Boyd, Danah M., and Nicole B. Ellison. ”Social network sites: Definition, history, and scholarship.” Journal of Computer-Mediated Communication 13, no. 1 (2007): 210-230.
- Coates, Jennifer. ”Women talk: Conversation between women friends.” (1996): 265-268.
- Cohen, Julie E. Configuring the networked self: Law, code, and the play of everyday practice. Yale University Press, 2012.
- Crandall, David J., Lars Backstrom, Dan Cosley, Siddharth Suri, Daniel Huttenlocher, and Jon Kleinberg. ”Inferring social ties from geographic coincidences.” Proceedings of the National Academy of Sciences 107, no. 52 (2010): 22436-22441.
- Debatin, Bernhard, Jennette P. Lovejoy, Ann-Kathrin Horn, and Brittany N. Hughes. ”Facebook and online privacy: Attitudes, behaviors, and unintended consequences.” Journal of Computer-Mediated Communication 15, no. 1 (2009): 83-108.
- DiMaggio, Paul. ”Culture and cognition.” Annual Review of Sociology 23.1 (1997): 263-287.
- Duhigg, Charles. ”How Companies Learn Your Secrets”. New York Times. (2012). https://www.nytimes.com/2012/02/19/magazine/shopping-habits.html
- Erdős, Dóra, Rainer Gemulla, and Evimaria Terzi. ”Reconstructing graphs from neighborhood data.” ACM Transactions on Knowledge Discovery from Data (TKDD) 8.4 (2014): 1-22.
- Garcia, David. ”Leaking privacy and shadow profiles in online social networks.” Science Advances 3, no. 8 (2017): e1701172.
- Gayo-Avello, Daniel. ”All liaisons are dangerous when all your friends are known to us.” In Proceedings of the 22nd ACM conference on Hypertext and hypermedia, pp. 171-180. 2011.
- Horvát, Emőke-Ágnes, Michael Hanselmann, Fred A. Hamprecht, and Katharina A. Zweig. ”One plus one makes three (for social networks).” PLoS ONE 7, no. 4 (2012): e34740.
- Kim, Myunghwan, and Jure Leskovec. ”The network completion problem: Inferring missing nodes and edges in networks.” In Proceedings of the 2011 SIAM International Conference on Data Mining, pp. 47-58. Society for Industrial and Applied Mathematics, 2011.
- Kokolakis, Spyros. ”Privacy attitudes and privacy behaviour: A review of current research on the privacy paradox phenomenon.” Computers & Security 64 (2017): 122-134.
- Labov, William. Language in the inner city: Studies in the Black English vernacular. No. 3. University of Pennsylvania Press, 1972.
- Laney, Doug. ”3D data management: Controlling data volume, velocity and variety.” META Group research note 6, no. 70 (2001): 1.
- Macaulay, R. K. Talk that counts: Age, Gender, and Social Class Differences in Discourse. Oxford University Press, 2005.
- Marwick, Alice E., and Danah Boyd. ”Networked privacy: How teenagers negotiate context in social media.” New Media & Society 16.7 (2014): 1051-1067.
- Mercken, Liesbeth, Christian Steglich, Philip Sinclair, Jo Holliday, and Laurence Moore. ”A longitudinal social network analysis of peer influence, peer selection, and smoking behavior among adolescents in British schools.” Health Psychology 31, no. 4 (2012): 450.
- Mishra, Nishchol, and Sanjay Silakari. ”Predictive analytics: A survey, trends, applications, opportunities & challenges.” International Journal of Computer Science and Information Technologies 3, no. 3 (2012): 4434-4438.
- Mislove, Alan, Bimal Viswanath, Krishna P. Gummadi, and Peter Druschel. ”You are who you know: inferring user profiles in online social networks.” In Proceedings of the third ACM international conference on Web search and data mining, pp. 251-260. 2010.
- Mondal, Mainack, Peter Druschel, Krishna P. Gummadi, and Alan Mislove. ”Beyond access control: Managing online privacy via exposure.” In Proceedings of the Workshop on Usable Security, pp. 1-6. 2014.
- Pangburn, DJ. ”Even This Data Guru Is Creeped Out By What Anonymous Location Data Reveals About Us.” (2017). https://www.fastcompany.com/3068846/how-your-location-data-identifies-you-gilad-lotan-privacy
- McPherson, Miller, Lynn Smith-Lovin, and James M. Cook. ”Birds of a feather: Homophily in social networks.” Annual Review of Sociology 27.1 (2001): 415-444.
- Pontes, Tatiana, Gabriel Magno, Marisa Vasconcelos, Aditi Gupta, Jussara Almeida, Ponnurangam Kumaraguru, and Virgilio Almeida. ”Beware of what you share: Inferring home location in social networks.” In 2012 IEEE 12th International Conference on Data Mining Workshops, pp. 571-578. IEEE, 2012.
- Rao, Delip, David Yarowsky, Abhishek Shreevats, and Manaswi Gupta. ”Classifying latent user attributes in twitter.” In Proceedings of the 2nd international workshop on Search and mining user-generated contents, pp. 37-44. 2010.
- Richterich, Annika. The big data agenda: Data ethics and critical data studies. University of Westminster Press, 2018.
- Sarigol, Emre, David Garcia, and Frank Schweitzer. ”Online privacy as a collective phenomenon.” In Proceedings of the second ACM conference on Online social networks, pp. 95-106. 2014.
- Wang, Yilun, and Michal Kosinski. ”Deep neural networks are more accurate than humans at detecting sexual orientation from facial images.” Journal of Personality and Social Psychology 114, no. 2 (2018): 246.
- Zheleva, Elena, and Lise Getoor. ”To join or not to join: the illusion of privacy in social networks with mixed public and private user profiles.” In Proceedings of the 18th international conference on World Wide Web, pp. 531-540. 2009.
The Earth looks like a perfect sphere, but down here on the surface we see that there are mountains, rivers, oceans, glaciers, all kinds of features with different densities and shapes. Scientists can map this to produce a highly detailed gravity map of our planet. And it turns out this is very useful for other worlds too.

NASA’s twin lunar-orbiting GRAIL (Gravity Recovery and Interior Laboratory) spacecraft, christened Ebb and Flow, have kicked off their science collection phase aimed at precisely mapping our Moon’s gravity field, interior composition and evolution, the science team informed Universe Today. “GRAIL’s science mapping phase officially began Tuesday (March 6) and we are collecting science data,” said Maria Zuber, GRAIL principal investigator of the Massachusetts Institute of Technology in Cambridge, to Universe Today. “It is impossible to overstate how thrilled and excited we are!” “The data appear to be of excellent quality,” Zuber told me.

GRAIL’s goal is to provide researchers with a better understanding of how the Moon, Earth and other rocky planets in the solar system formed and evolved over the solar system’s 4.5 billion years of history. NASA’s Dawn spacecraft is currently mapping the gravity field of asteroid Vesta in high resolution from low orbit. Despite more than 100 missions to the Moon, there is still a lot we don’t know about it, says Zuber, such as why the near side is smooth and flooded with magma while the far side is rough and completely different.

The formation-flying spacecraft will make detailed science measurements from lunar orbit with unparalleled precision to within 1 micron – the width of a human red blood cell – by transmitting Ka-band radio signals between each other and Earth to help unlock the mysteries of the Moon’s deep interior. “We’ve worked on calibrating the alignment of the Ka-band antennae to establish the optimal alignment. We’ve verified the data pipeline and are spending a lot of time working with the raw data to make sure that we understand its intricacies,” Zuber explained.

The washing-machine-sized probes have been flying in tandem around the Moon since entering lunar orbit in back-to-back maneuvers over the New Year’s weekend. Engineers have spent the past two months navigating the spaceship duo into lower, near-polar and near-circular orbits with an average altitude of 34 miles (55 kilometers) that are optimized for science data collection, while simultaneously checking out the spacecraft systems. Ebb and Flow were launched to the Moon on September 10, 2011 aboard a Delta II rocket from Cape Canaveral, Florida and took a circuitous 3.5-month low-energy path to the Moon to minimize the overall costs. The Apollo astronauts reached the Moon in just 3 days.

I asked Zuber to describe the team’s activities putting the mirror-image probes to work peering into the central core of our nearest neighbor in unprecedented detail. “Last Wednesday (Feb. 29) we achieved the science orbit and on Thursday (March 1) we turned the spacecraft to ‘orbiter point’ configuration to test the instrument and to monitor temperatures and power.” “When we turned on the instrument we established the satellite-to-satellite radio link immediately. All vital signs were nominal so we left the spacecraft in orbiter point configuration and have been collecting science data since then.
At the same time, we’ve continued performing calibrations and monitoring spacecraft and instrument performance, such as temperatures, power, currents, voltages, etc., and all is well,” said Zuber.

Measurements gathered over the next 84 days will be used to create high-resolution maps of the Moon’s near side and far side gravitational fields that are 100 to 1000 times more precise than ever before and that will enable researchers to deduce the internal structure and composition of our nearest neighbor from the outer surface crust down to the deep hidden core. As one satellite follows the other in the same orbit, they will perform high-precision range-rate measurements to precisely track the changing distance between them. As they fly over areas of greater and lesser gravity caused by visible features such as mountains, craters and masses hidden beneath the lunar surface, the distance between the two spacecraft will change slightly. “GRAIL is great. Everything is in place to get science data now,” said Sami Asmar, a GRAIL co-investigator from NASA’s Jet Propulsion Lab in Pasadena, Calif. “Soon we’ll get a very high resolution and global gravity map of the Moon.”

The data collected will be translated into gravitational field maps of the Moon that will help unravel information about the makeup of the Moon’s core and interior composition. GRAIL will gather three complete gravity maps over the three-month mission, which is expected to conclude around May 29. If the probes survive a solar eclipse in June and if NASA funding is available, then they may get a bonus 3-month extended mission.

NASA sponsored a nationwide student contest for America’s youth to choose new names for the twin probes originally known as GRAIL A and GRAIL B. 4th graders from the Emily Dickinson Elementary School in Bozeman, Montana submitted the winning entries: Ebb and Flow. The new names won because they astutely describe the probes’ movements in orbit as they collect the science data. The GRAIL twins are also equipped with a very special camera dubbed MoonKAM (Moon Knowledge Acquired by Middle school students) whose purpose is to inspire kids to study science. By having their names selected, the 4th graders from Emily Dickinson Elementary have also won the prize to choose the first target on the Moon to photograph with the MoonKAM cameras, which are managed by Dr Sally Ride, America’s first female astronaut. “MoonKAMs on both Ebb and Flow were turned on Monday, March 5, and all appears well,” Zuber said. “The Bozeman 4th graders will have the opportunity to target the first images a week after our science operations begin.”

A classroom of America’s youth from an elementary school in Bozeman, Montana submitted the stellar winning entry in NASA’s nationwide student essay contest to rename the twin GRAIL lunar probes that just achieved orbit around our Moon on New Year’s Eve and New Year’s Day 2012. “Ebb” and “Flow” are the dynamic duo’s official new names, selected because they clearly illuminate the science goals of the gravity-mapping spacecraft and how the Moon’s influence mightily affects Earth every day in a manner that’s easy for everyone to understand. “The 28 students of Nina DiMauro’s class at the Emily Dickinson Elementary School have really hit the nail on the head,” said GRAIL principal investigator Prof. Maria Zuber of the Massachusetts Institute of Technology in Cambridge, Mass. “We asked the youth of America to assist us in getting better names.”
"We asked the youth of America to assist us in getting better names." "We chose Ebb and Flow because it's the daily example of how the Moon's gravity is working on the Earth," said Zuber during a media briefing held today (Jan. 17) at NASA Headquarters in Washington, D.C. The terms ebb and flow refer to the movement of the tides on Earth due to the gravitational pull from the Moon. "We were really impressed that the students drew their inspiration by researching GRAIL and its goal of measuring gravity. Ebb and Flow truly capture the spirit and excitement of our mission."

Ebb and Flow are flying in tandem around Earth's only natural satellite, the first time such a feat has ever been attempted. As they fly over mountains, craters and basins on the Moon, the spaceships will move back and forth in orbit in an "ebb and flow"-like response to the changing lunar gravity field and transmit radio signals to precisely measure the variations to within 1 micron, the width of a red blood cell. The breakthrough science expected from the mirror-image twins will provide unprecedented insight into what lurks mysteriously hidden beneath the surface of our nearest neighbor and deep into the interior.

The winning names from the 4th graders of Emily Dickinson Elementary School were chosen from essays submitted by nearly 900 classrooms across America with over 11,000 students from 45 states, Puerto Rico and the District of Columbia, Zuber explained. The students themselves announced "Ebb" and "Flow" in a dramatic live broadcast televised on NASA TV via Skype. "We are so thrilled that our names were chosen and excited to share this with you. We can't believe we won! We are so honored. Thank you!" said Ms. DiMauro as the very enthusiastic students spelled out the names by holding up the individual letters one by one on big placards from their classroom desks in Montana.

Until now the pair of probes went by the rather uninspiring monikers of GRAIL "A" and "B". GRAIL stands for Gravity Recovery And Interior Laboratory. The twin crafts' new names were selected jointly by Prof. Zuber and Dr. Sally Ride, America's first woman astronaut, and announced during today's NASA briefing. NASA's naming competition was open to K-12 students who submitted pairs of names and a short essay to justify their suggestions.

"Ebb" and "Flow" (GRAIL A and GRAIL B) are the size of washing machines and were launched side by side atop a Delta II booster rocket on September 10, 2011 from Cape Canaveral, Florida. They followed a circuitous 3.5-month low-energy path to the Moon to minimize the fuel requirements and overall costs. So far the probes have completed three burns of their main engines aimed at lowering and circularizing their initial highly elliptical orbits. The orbital period has also been reduced from 11.5 hours to just under 4 hours as of today. "The science phase begins in early March," said Zuber. At that time the twins will be flying in tandem at 55 kilometers (34 miles) altitude.

The GRAIL twins are also equipped with a very special camera dubbed MoonKAM (Moon Knowledge Acquired by Middle school students) whose purpose is to inspire kids to study science. "GRAIL is NASA's first planetary spacecraft mission carrying instruments entirely dedicated to education and public outreach," explained Sally Ride.
"Over 2100 classrooms have signed up so far to participate." Thousands of middle school students in grades five through eight will select target areas on the lunar surface and send requests for study to the GRAIL MoonKAM Mission Operations Center in San Diego, which is managed by Dr. Ride in collaboration with undergraduate students at the University of California in San Diego. By having their names selected, the 4th graders from Emily Dickinson Elementary have also won the prize to choose the first target on the Moon to photograph with the MoonKAM cameras, said Ride. Zuber notes that the first MoonKAM images will be snapped shortly after the 82-day science phase begins on March 8.

Cheers erupted after the first of NASA's twin $496 million moon-mapping probes entered orbit on New Year's Eve (Dec. 31) upon completion of the 40-minute main engine burn essential for insertion into lunar orbit. The small GRAIL spacecraft will map the lunar interior with unprecedented precision to deduce the Moon's hidden interior composition. "Engines stopped. It's in a great initial orbit!!!!" NASA's Jim Green told Universe Today, just moments after verification of a successful engine burn and injection of the GRAIL-A spacecraft into an initial elliptical orbit. Green is the Director of Planetary Science at NASA HQ and was stationed inside Mission Control at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, Calif.

"Pop the bubbly & toast the moon! NASA's GRAIL-A spacecraft is in lunar orbit," NASA tweeted shortly after verifying the critical firing was done. "Burn complete! GRAIL-A is now orbiting the moon and awaiting the arrival of its twin GRAIL-B on New Year's Day." The firing of the hydrazine-fueled thruster was concluded at 5 PM EST (2 PM PST) today, Dec. 31, 2011, and was the capstone to a stupendous year for science at NASA. "2011 was definitely the best year ever for NASA Planetary Science," Green told me today. "2011 was the 'Year of the Solar System.'"

"GRAIL-A is in a highly elliptical polar orbit that takes about 11.5 hours to complete." "We see about the first eight to ten minutes of the start of the burn as it heads towards the Moon's southern hemisphere, continues as GRAIL goes behind the moon and the burn ends about eight minutes or so after it exits and reappears over the north polar region." "So we watch the beginning of the burn and the end of the burn via the Deep Space Network (DSN). The same thing will be repeated about 25 hours later with GRAIL-B on New Year's Day [Jan 1, 2012]," Green explained. The orbit is approximately 56 miles (90 kilometers) by 5,197 miles (8,363 kilometers) around the moon. The probe barreled towards the moon at 4400 MPH and skimmed to within about 68 miles over the South Pole.

"My resolution for the new year is to unlock lunar mysteries and understand how the moon, Earth and other rocky planets evolved," said Maria Zuber, GRAIL principal investigator at the Massachusetts Institute of Technology in Cambridge. "Now, with GRAIL-A successfully placed in orbit around the moon, we are one step closer to achieving that goal." Zuber witnessed the events in Mission Control along with JPL Director Charles Elachi. The mirror twin, known as GRAIL-B, was less than 30,000 miles (48,000 km) from the moon as GRAIL-A achieved orbit, and was closing at a rate of 896 mph (1,442 kph). GRAIL-B's insertion burn is slated to begin on New Year's Day at 2:05 p.m. PST (5:05 p.m. EST) and will last about 39 minutes.
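As a quick back-of-the-envelope check (our own, not from the mission team), those capture-orbit figures hang together: taking the quoted 90 km by 8,363 km altitudes above a mean lunar radius of about 1,737 km, and the Moon's standard gravitational parameter of about 4,903 km^3/s^2 (neither constant is quoted in the article), Kepler's third law returns a period of roughly 11.5 hours, matching Green's figure.

```latex
% Back-of-the-envelope period check, assuming standard lunar constants:
% R_Moon ~ 1737.4 km, GM_Moon ~ 4902.8 km^3/s^2.
a = R_{\mathrm{Moon}} + \tfrac{1}{2}\left(h_{\mathrm{peri}} + h_{\mathrm{apo}}\right)
  = 1737.4 + \tfrac{1}{2}(90 + 8363) \approx 5964\ \mathrm{km}

T = 2\pi\sqrt{\frac{a^{3}}{GM_{\mathrm{Moon}}}}
  = 2\pi\sqrt{\frac{5964^{3}}{4902.8}}\ \mathrm{s}
  \approx 4.1\times 10^{4}\ \mathrm{s} \approx 11.5\ \mathrm{hours}
```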
GRAIL-B is about 25 hours behind GRAIL-A, allowing the teams enough time to rest and prepare, said David Lehman, GRAIL project manager at JPL. "With GRAIL-A in lunar orbit we are halfway home," said Lehman. "Tomorrow may be New Year's everywhere else, but it's another work day around the moon and here at JPL for the GRAIL team." Engineers will then gradually lower the tandem-flying satellites into a near-polar, near-circular orbit at an altitude of about 34 miles (55 kilometers) with an average separation of about 200 km. The 82-day science phase will begin in March 2012.

"GRAIL will globally map the moon's gravity field to high precision to deduce information about the interior structure, density and composition of the lunar interior. We'll evaluate whether there even is a solid or liquid core or a mixture and advance the understanding of the thermal evolution of the moon and the solar system," explained GRAIL co-investigator Sami Asmar of JPL to Universe Today.

New names for the dynamic duo may be announced on New Year's Day. Zuber said the student essay contest to choose the winning names drew more than 1,000 entries. The GRAIL team is making a major public outreach effort to involve school kids in the mission and inspire them to study science. Each spacecraft carries four MoonKAM cameras. Middle school students will help select the targets. "Over 2100 middle schools have already signed up to participate in the MoonKAM project," Zuber told reporters. "We've had a great response to the MoonKAM project and we're still accepting applications." MoonKAM is sponsored by Dr. Sally Ride, America's first female astronaut. The first images are expected after the science mission begins in March 2012. The GRAIL twins blasted off from Florida on September 10, 2011 on a 3.5-month low-energy path to the moon so a smaller booster rocket could be used to cut costs.

In less than three days, NASA will deliver a double-barreled New Year's package to our Moon when an unprecedented pair of science satellites fire up their critical braking thrusters for insertion into lunar orbit on New Year's Eve and New Year's Day. NASA's dynamic duo of GRAIL probes are "GO" for lunar orbit insertion, said the mission team at a briefing for reporters today, Dec. 28. GRAIL's goal is to exquisitely map the moon's interior from the gritty outer crust to the depths of the mysterious core with unparalleled precision. "GRAIL is a journey to the center of the moon," said Maria Zuber, GRAIL principal investigator from the Massachusetts Institute of Technology (MIT) in Cambridge, at the press briefing. This newfound knowledge will fundamentally alter our understanding of how the moon and other rocky bodies in our solar system, including Earth, formed and evolved over 4.5 billion years.

After a three-month voyage of more than 2.5 million miles (4 million kilometers) since launching from Florida on Sept. 10, 2011, NASA's twin GRAIL spacecraft, GRAIL-A and GRAIL-B, are now on final approach and are rapidly closing in on the Moon, following a trajectory that will hurl them low over the south pole and into an initial near-polar elliptical lunar orbit with a period of 11.5 hours. As of today, Dec. 28, GRAIL-A is 65,860 miles (106,000 kilometers) from the moon and closing at a speed of 745 mph (1,200 kph). GRAIL-B is 79,540 miles (128,000 kilometers) from the moon and closing at a speed of 763 mph (1,228 kph).
The lunar-bound probes are formally named Gravity Recovery And Interior Laboratory (GRAIL), and each one is the size of a washing machine. The long-duration trajectory was actually beneficial to the mission controllers and the science team because it permitted more time to assess the spacecraft's health and check out the probes' single science instrument, the Ultra Stable Oscillator, and allow it to equilibrate to a stable operating temperature long before it starts making the crucial science measurements.

The duo will arrive 25 hours apart and be placed into orbit starting at 1:21 p.m. PST (4:21 p.m. EST) for GRAIL-A on Dec. 31, and 2:05 p.m. PST (5:05 p.m. EST) on Jan. 1 for GRAIL-B, said David Lehman, project manager for GRAIL at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, Calif. "The GRAIL-A burn will last 40 minutes and the GRAIL-B burn will last 38 minutes. One hour after the burn we will know the results and make an announcement," Lehman explained. The thrusters must fire on time and for the full duration for the probes to achieve orbit. The braking maneuver is preprogrammed and done completely automatically. Over the next few weeks, the altitude of the spacecraft will be gradually lowered to 34 miles (55 kilometers) into a near-polar, near-circular orbit with an orbital period of two hours. The science phase will then begin in March 2012.

"So far there have been over 100 missions to the Moon and hundreds of pounds of rock have been returned. But there is still a lot we don't know about the Moon even after the Apollo lunar landings," explained Zuber. "We don't know why the near side of the Moon is different from the far side. In fact we know more about Mars than the Moon."

GRAIL's science collection phase will last 82 days. The two spacecraft will transmit radio signals that will precisely measure the distance between them to within a few microns, less than the width of a human hair. As they orbit in tandem, the gravitational pull they experience will increase and decrease due to the influence of both visible surface features, such as mountains and craters, and unknown concentrations of mass hidden beneath the lunar surface. This will cause the relative velocity and the distance between the probes to change. The resulting data will be translated into a high-resolution map of the Moon's gravitational field and will also enable determinations of the moon's inner composition. The GRAIL mission may be extended for another six months if the solar-powered probes survive a power-draining and potentially deadly lunar eclipse due in June 2012. Engineers would then significantly lower the orbit to an altitude of barely 15 to 20 miles above the surface to gain even further insights into the lunar interior.

The twin probes are also equipped with four cameras each, named MoonKAM, that will be used by middle school students to photograph student-selected targets. The MoonKAM project is led by Dr. Sally Ride, America's first woman astronaut, as a way to motivate kids to study math and science. JPL manages the GRAIL mission for NASA. Stay tuned for Universe Today updates amidst the New Year's festivities.

Student alert! Here's your once-in-a-lifetime chance to name two NASA robots speeding at this moment to the Moon on a super science mission to map the lunar gravity field. They were successfully launched from the Earth to the Moon on September 10, 2011. Right now the robots are called GRAIL A and GRAIL B. But they need real names that inspire. And they need those names real soon.
The goal is to "capture the spirit and excitement of lunar exploration," says NASA, the US space agency. NASA needs your help and has just announced an essay-writing contest open to students in grades K-12 at schools in the United States. The deadline to submit your essay is November 11, 2011. GRAIL stands for "Gravity Recovery And Interior Laboratory." The rules state you need to pick two names and explain your choices in 500 words or less in English. Your essay can be any length up to 500 words, even as short as a paragraph. But DO NOT write more than 500 words or your entry will be automatically disqualified.

The GRAIL A and B lunar spaceships are twins, just like those other awe-inspiring robots "Spirit" and "Opportunity", which were named by a 10-year-old student and quickly became famous worldwide and forever because of their exciting science missions of exploration and discovery. They arrive in lunar orbit on New Year's Day 2012. And there is another way that students can get involved in NASA's GRAIL mission. GRAIL A & B are both equipped with four student-run MoonKAM cameras. Students can suggest targets for the cameras. Then the cameras will take close-up views of the lunar surface, taking tens of thousands of images and sending them back to Earth. "Over 1100 middle schools have signed up to participate in the MoonKAM education and public outreach program to take images and engage in exploration," said Prof. Maria Zuber of MIT. Prof. Zuber is the top scientist on the mission, and she was very excited to announce the GRAIL essay naming contest right after the twin spaceships blasted off to the Moon on Sept. 10, 2011 from Cape Canaveral in Florida.

What is the purpose of GRAIL? "GRAIL, simply put, is a 'journey to the center of the moon'," says Dr. Ed Weiler, NASA Associate Administrator of the Science Mission Directorate in Washington, DC. "It will probe the interior of the moon and map its gravity field by 100 to 1000 times better than ever before. We will learn more about the interior of the moon with GRAIL than all previous lunar missions combined. Precisely knowing what the gravity fields are will be critical in helping to land future human and robotic spacecraft. The moon is not very uniform. So it's a dicey thing to fly orbits around the moon." "There have been many missions that have gone to the moon, orbited the moon, landed on the moon, brought back samples of the moon," said Zuber. "But the missing piece of the puzzle in trying to understand the moon is what the deep interior is like." So, what are you waiting for? Start thinking and writing. Students, you can be space explorers too!

NASA's Gravity Recovery and Interior Laboratory (GRAIL) moon-mapping twins and the mighty Delta II rocket that will blast the high-tech physics experiment to space on a lunar science trek were magnificently unveiled in the overnight darkness in anticipation of a liftoff that had originally been planned for the morning of Sept. 8. Excessively high upper-level winds ultimately thwarted Thursday's launch attempt. Late today, NASA announced a further one-day postponement to Saturday, Sept. 10, to allow engineers additional time to review propulsion system data from Thursday's detanking operation after the launch attempt was scrubbed to Friday. Additional time is needed by the launch team to review the pertinent data to ensure a safe blastoff of the $496 million GRAIL mission.
There are two instantaneous launch opportunities at 8:29:45 a.m. and 9:08:52 a.m. EDT at Cape Canaveral, eight minutes earlier than was planned on Sept. 8. The weather forecast for Sept. 10 still shows a 60 percent chance of favorable conditions for a launch attempt. Despite a rather poor weather prognosis, the heavy Space Coast cloud cover had almost completely cleared out in the final hours before launch, the surface winds were quite calm, and we all expected to witness a thunderous liftoff. But measurements from weather balloons sent aloft indicated that the upper-level winds were "red" and violated the launch criteria.

As the launch gantry was quickly retracted at Launch Complex 17B on Sept. 7, the Delta was bathed in xenon spotlights that provided a breathtaking light show as the service structure moved a few hundred feet along rails. The cocoon-like Mobile Service Tower (MST) provides platforms to access the rocket at multiple levels to prepare the vehicle and spacecraft for flight. The MST also protects the rocket from weather and impacts from foreign debris. The Delta II rocket stands 128 feet tall and is 8 feet in diameter. The first stage's liquid-fueled engine and strap-on solid rocket motors will generate about 1.3 million pounds of thrust. During the terminal countdown, the first stage is fueled with cryogenic liquid oxygen and highly refined kerosene (RP-1).

GRAIL is an extraordinary first-ever journey to the center of the moon that will, with its instruments operating from orbit, peer into the moon's interior from crust to core and map its gravity field 100 to 1000 times better than ever before. The mission employs two satellites flying in tandem formation some 50 km above the lunar surface in a near-circular polar orbit. GRAIL A and B will perform high-precision range-rate measurements between them using a Ka-band instrument. The mission will provide unprecedented insight into the formation and thermal evolution of the moon that can be applied to the other rocky planets in our solar system: Mercury, Venus, Earth and Mars. After a 3.5-month journey to the moon, the probes will arrive about a day apart on New Year's Eve and New Year's Day 2012 for an 82-day science mapping phase as the moon rotates three times beneath the GRAIL orbit.

Another American rocket era is about to end. The venerable Delta II rocket, steeped in history, will fly what is almost certainly its final mission from Cape Canaveral. And it will do so quite fittingly by blasting twin satellites to the moon for NASA on a unique path for a truly challenging mission to do "extraordinary science". On Sept. 8, the most powerful version of the Delta II, dubbed the Delta II Heavy, is slated to launch NASA's duo of GRAIL lunar mappers on an unprecedented science mission to unlock the mysteries of the moon's deep interior. There are two instantaneous launch windows at 8:37:06 a.m. and 9:16:12 a.m. EDT lasting one second each.

"GRAIL, simply put, is a journey to the center of the moon," said Ed Weiler, NASA Associate Administrator of the Science Mission Directorate in Washington, DC, at a pre-launch briefing for reporters on Sept. 6. "It will probe the interior of the moon and map its gravity field by 100 to 1000 times better than ever before. We will learn more about the interior of the moon with GRAIL than all previous lunar missions combined." GRAIL will depart Earth from Space Launch Complex 17B (SLC-17B) at Cape Canaveral Air Force Station, Florida, which is also the last scheduled use of Pad 17B.
"Trying to understand how the moon formed, and how it evolved over its history, is one of the things we're trying to address with the GRAIL mission," says Maria Zuber, principal investigator for GRAIL from the Massachusetts Institute of Technology. "But also, (we're) trying to understand how the moon is an example of how terrestrial planets in general have formed." "GRAIL is a mission that will study the inside of the moon from crust to core," Zuber says.

So far there have been 355 launches of the Delta II family, according to NASA's Delta II Launch Manager Tim Dunn. The Delta II is built by United Launch Alliance. "GRAIL is the last contracted Delta II mission to be launched from Complex 17. And it will be the 356th overall Delta to be launched. Complex 17 at the Cape has a proud heritage of hosting 258 of those 355 total Delta launches to date." Hypergolic propellants have been loaded onto the second stage after assessing all the preparations for the rocket, spacecraft, the range and facilities required for launch. "The Launch Readiness Review was successfully completed and we can proceed with the countdown," said Dunn. The Delta II Heavy is augmented with nine larger-diameter ATK solid rocket motors.

The Mobile Service Tower will be rolled back from the Delta II rocket tonight, starting at about 10:30 p.m. EDT depending on the weather. The weather forecast for launch remains very iffy, at a 60 percent chance of "NO GO" according to NASA and Air Force officials. A launch decision will be made tomorrow morning, Sept. 8, right after the weather briefing but before fueling begins at 6:30 a.m. The weather forecast for rollback of the Mobile Service Tower tonight remains generally favorable. There is a 40% chance of a weather issue at 10:30 p.m., which drops to 30% after midnight. Tower rollback can be pushed back about two hours without impacting the countdown, says NASA. Weather remains at 60% NO GO in case of a 24-hour delay but improves over the weekend. The team has about 42 days in the launch window.

After entering lunar orbit, the two GRAIL spacecraft will fly in a tandem formation just 55 kilometers above the lunar surface with an average separation of 200 km during the three-month science phase. Stay tuned to Universe Today for updates overnight leading to liftoff at 8:37 a.m. See my photo album from a recent tour of Launch Complex 17 and the Mobile Service Tower.

NASA's powerful lunar-mapping duo of GRAIL spacecraft are now poised for liftoff in just one week's time on Thursday, Sept. 8. Mission managers held a Flight Readiness Review on Wednesday (Aug. 31) and gave a tentative approval to begin fueling the Delta II rocket's second stage on Sept. 2 and 3 after evaluating all issues related to the rocket, launch pad and payloads. Launch preparations are proceeding on schedule towards an early-morning liftoff from the seaside Space Launch Complex 17B (SLC-17B) at Cape Canaveral Air Force Station, Florida. There are two instantaneous launch windows at 8:37:06 a.m. and 9:16:12 a.m. EDT lasting one second each.

"Launch vehicle and spacecraft closeouts will begin on Tuesday, and the Launch Readiness Review is also scheduled for Tuesday morning," NASA KSC spokesman George Diller told Universe Today. "This morning's launch countdown dress rehearsal went fine." "Delta II 2nd stage fueling has been rescheduled for Friday and Saturday [Sept. 2 and 3]. Last evening a software error was found in the launch processing system database.
ULA (United Launch Alliance) decided they would like to look for any additional errors before the fueling begins. There is no impact to the launch date and currently no work is scheduled on Sunday or on Labor Day," said Diller. The launch period extends through Oct. 19, with liftoff occurring approximately four minutes earlier each day in case of a delay. The flight plan was designed to avoid a pair of lunar eclipses occurring on December 10, 2011, and June 4, 2012, which would interfere with the mission's operations and science.

The team is keeping a close watch on the weather as the season's next Atlantic Ocean storm heads westwards. Katia has just been upgraded to hurricane status and follows closely on the heels of the continuing vast destruction, misery and deaths caused by Hurricane Irene earlier this week. "The preliminary weather forecast is favorable for launch day as long as the wind remains out of the west, as is currently forecast for that time of the morning," Diller told me.

The twin probes, known as GRAIL-A and GRAIL-B (Gravity Recovery and Interior Laboratory), were encapsulated inside the clamshell-like payload fairing on Aug. 23. The nearly identical spacecraft are mounted side by side and sit atop the rocket's upper stage. The fairing shields the spacecraft from aerodynamic pressures, friction and extreme heating for the first few minutes of flight during ascent through Earth's atmosphere. This Delta II Heavy booster rocket is the most powerful version of the Delta II family built by ULA. The booster's first stage is augmented with larger-diameter solid rocket motors.

GRAIL was processed for launch at the Astrotech payload processing facility in Titusville, Fla. See my GRAIL spacecraft photos from inside the Astrotech clean room facilities. "The GRAIL spacecraft inside the handling can departed Astrotech and arrived at the launch pad, SLC-17B, on Aug. 18," said Tim Dunn, NASA's Delta II launch director, in an interview with Universe Today. "The spacecraft was then hoisted by crane onto the Delta II launch vehicle and the spacecraft mate operation was flawlessly executed by the combined ULA and NASA Delta II Team." An Integrated Systems Test (IST) of the mated booster and payload was completed on Aug. 22.

The dynamic duo will orbit the moon in a tandem formation just 50 kilometers above the lunar surface with an average separation of 200 km. During the 90-day science phase, the goal is to determine the structure of the lunar interior from crust to core and to advance understanding of the thermal evolution of the moon. GRAIL-A and GRAIL-B will measure the lunar gravity field with unprecedented resolution, up to a 100-fold improvement on the near side and a 1,000-fold improvement on the far side.
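To make the measurement principle described in these articles concrete, here is a minimal, illustrative sketch, not mission code, of why the inter-spacecraft distance "ebbs and flows": two spacecraft trailing one another in the same low lunar orbit each speed up and slow down slightly as they pass over a buried mass concentration, so the range between them oscillates. The 55 km altitude and 200 km separation are taken from the reporting above; the point-mass "mascon" (its size and location) is an invented example, and the lunar radius and gravitational parameter are standard textbook values not quoted in the articles. The real mission recovers a global spherical-harmonic gravity field from Ka-band range-rate tracking rather than fitting a single buried point mass.

```python
# Toy illustration of the GRAIL measurement principle (not mission software).
# Two spacecraft share a low lunar orbit; a hypothetical buried point-mass
# anomaly perturbs each one in turn, so the range between them varies.
import numpy as np
from scipy.integrate import solve_ivp

GM_MOON = 4902.8e9     # m^3/s^2, standard lunar gravitational parameter
R_MOON  = 1737.4e3     # m, mean lunar radius
ALT     = 55e3         # m, science-orbit altitude quoted in the articles
SEP     = 200e3        # m, average along-track separation quoted in the articles

r0 = R_MOON + ALT
v0 = np.sqrt(GM_MOON / r0)      # circular orbital speed
dphi = SEP / r0                 # angular lag of the trailing spacecraft

# Hypothetical "mascon": a point mass of 1e-7 of the Moon's mass, sitting on
# the surface 90 degrees ahead of the leading spacecraft (invented numbers).
GM_ANOM = 1e-7 * GM_MOON
anom_pos = R_MOON * np.array([np.cos(np.pi / 2), np.sin(np.pi / 2)])

def accel(pos):
    """Central lunar gravity plus the anomaly's point-mass pull (2D)."""
    a = -GM_MOON * pos / np.linalg.norm(pos) ** 3
    d = pos - anom_pos
    return a - GM_ANOM * d / np.linalg.norm(d) ** 3

def rhs(t, y):
    # State: [x1, y1, vx1, vy1, x2, y2, vx2, vy2]
    p1, v1, p2, v2 = y[0:2], y[2:4], y[4:6], y[6:8]
    return np.concatenate([v1, accel(p1), v2, accel(p2)])

def circ_state(phase):
    """Position and velocity on the unperturbed circular orbit at `phase`."""
    pos = r0 * np.array([np.cos(phase), np.sin(phase)])
    vel = v0 * np.array([-np.sin(phase), np.cos(phase)])
    return np.concatenate([pos, vel])

y0 = np.concatenate([circ_state(0.0), circ_state(-dphi)])   # leader, trailer
t_end = 2 * np.pi * r0 / v0                                  # ~one orbital period
sol = solve_ivp(rhs, (0.0, t_end), y0, max_step=10.0, rtol=1e-10, atol=1e-6)

rel  = sol.y[0:2] - sol.y[4:6]                  # relative position
relv = sol.y[2:4] - sol.y[6:8]                  # relative velocity
rng  = np.linalg.norm(rel, axis=0)              # inter-spacecraft range
range_rate = np.sum(rel * relv, axis=0) / rng   # line-of-sight range rate

print(f"range varies by ~{rng.max() - rng.min():.2f} m over one orbit")
print(f"peak range-rate perturbation ~{np.max(np.abs(range_rate)) * 1e3:.3f} mm/s")
```

Running the sketch prints the size of the range and range-rate perturbation that this one toy anomaly induces over a single orbit; a signature of that shape, repeated along every ground track and measured to micron-level precision, is essentially the raw material from which the gravity maps are built.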
This course was published in the February 2014 issue and expires February 28, 2017. The author has no commercial conflicts of interest to disclose. This 2-credit-hour self-study activity is electronically mediated.

After reading this course, the participant should be able to:
- Define the betel nut and how it is prepared for chewing.
- Describe the effects of betel quid use on the oral cavity.
- Explain the possible mechanisms of the caries-inhibiting effect of betel quid.
- Identify the signs and symptoms of oral submucous fibrosis.
- Discuss the dental professional's role in the clinical management of the betel quid user.

Betel nut is the fruit of the areca palm tree. The nut, which is the seed found within the fruit, appears mottled brown with gray-white markings. Users describe it as having a slightly bitter taste and report that it is helpful for digestive and oral health, as well as for facilitating bowel movements, increasing stamina, and improving concentration.1 Betel nuts can be chewed alone, but many cultures mix it with a combination of ingredients. Once mixed together, the term most often used to describe this masticatory drug is betel quid.2 Betel nuts can be combined with catechu gum, which is produced from the sap of the catechu tree; menthol; sandal oil; spices, such as cloves, anise seed, cinnamon, and nutmeg; and finely pounded gold or silver metal.2 These ingredients are then mixed with a calcium hydroxide paste (traditionally referred to as slaked lime) and tobacco and wrapped in a betel leaf (Figure 1 and Figure 2).2 The leaf packet is then sucked on or chewed for 15 minutes to 20 minutes (Figure 3). When it has been chewed thoroughly, the user spits it out.3 The compound produces feelings of euphoria or relaxation, or creates a burst of energy.4 Arecoline, the active ingredient in betel nut, when mixed with calcium hydroxide paste, produces a substance that stimulates the central nervous system.3,5 The stimulated central nervous system induces the feeling of euphoria and may cause increased perspiration and tear production, pupillary constriction, and diarrhea.2

Paan, masala, gutka, supari, puwak, gua, mak, pinang, and daka are other names used to describe the betel nut, which is chewed throughout large parts of India, Bangladesh, Malaysia, East Africa, and the South Pacific.2,4 Approximately 200 million people chew quid regularly,2,3 making it likely that oral health professionals will encounter patients who engage in this habit. Its use is more common in rural areas,6 and has also gained popularity in parts of Thailand, Indonesia, and the Philippines.7 Through immigration, the habitual chewing of the betel nut has made its way to the United States. The second highest number of legal immigrants to the US comes from India.7 More immigrants from Asian Indian and South Asian subgroups, such as Pakistanis and Bangladeshis, are also coming to the US.7 These populations are mostly concentrated in Northeast metropolitan areas, such as New York, Northern New Jersey, Long Island, and Philadelphia, followed by Chicago and large cities in California.7 Betel nut and the compounds added to it for chewing are easily found in these areas. Two sachets of betel quid are sold for approximately $1.
The betel quid's low cost has led to increased popularity, especially among young people.7,8 Global availability allows individuals to import part of their homeland, and continued use may be viewed as a way to keep in touch with their culture.8 Individuals who frequently visit their native countries often bring betel quid back to the US. Betel quid is not labeled a controlled substance in the US, but imported betel nuts can be confiscated by US Customs and Border Protection officers based on possible violations of food, agricultural, or medicinal regulations. However, this is rare. Strongly intertwined with social customs, cultural rituals, and religious practices, betel quid chewing is an ancient tradition2,4 that may be unfamiliar to Western health and dental professionals.

Betel nut is considered the fourth most addictive substance in the world after tobacco, alcohol, and caffeine.9,10 Participants in a study that examined whether betel nut usage could lead to addiction reported repeated use despite knowledge of harm and multiple attempts at abstinence.10 Withdrawal symptoms are also experienced, such as craving, anxiety, fatigue, and sadness.10 In some regions, betel quid chewing begins as early as age 7 among both girls and boys.4,6 Habitual users engage in betel quid chewing during all waking hours.4 Some mothers report giving their infants premasticated quids.4

Betel quid chewing stains the teeth, gingiva, and oral mucosa (Figure 4 and Figure 5). The color of the stain varies from deep red to black, depending on its preparation and the longevity of use.2 Chronic users also develop betel chewer's mucosa, a condition characterized by deep red or brownish-red discoloration of the oral mucosa with wrinkled incrustations that can be scraped off.5,11 This localized lesion is associated with the site of the betel quid placement, usually the buccal mucosa.5 The oral mucosa tends to desquamate or peel, with loose detached tags of tissue also evident.11 In some cultures and societies, this change in color is considered a sign of beauty.3 Though some chewers try to reduce the amount of discoloration with intense toothbrushing, the substances used in its preparation will still produce significant discoloration if regular professional dental care is not obtained.5 This stain naturally adheres to pits and fissures, calculus, rough filling surfaces, and naturally rough areas of the teeth.3,5

Consistent chewing of betel quid often produces severe wear on the incisal and occlusal tooth surfaces.5 The degree of attrition is dependent on the consistency of the betel quid and how often and for how long it is chewed.5 Chronic betel quid users may also experience root fractures due to constant mastication and the burden the habit places on the teeth.5 The ongoing flexion, compression, and tension that betel quid chewing places on the cervical area of the tooth can result in abfraction and, eventually, loss of cervical tooth structure.12 Despite the severe attrition that betel users experience, their risk of caries is reduced.5,13 Explanations for this inverse relationship range from an increase in salivary flow and anticariogenic substances in the betel quid to the high pH of the quid, which neutralizes acid in the oral cavity. Table 1 lists possible mechanisms of the caries-inhibiting effects of betel quid.
In vitro studies have shown that the active ingredient in betel nut, arecoline, inhibits the growth and attachment of cultured human periodontal fibroblasts.5,14 Therefore, betel nut may be cytotoxic to periodontal fibroblasts and may exacerbate pre-existing periodontal diseases, as well as hinder periodontal reattachment.5,14 Studies have also shown that betel quid users experience more loss of periodontal attachment and increased calculus formation than nonusers.5

ORAL SUBMUCOUS FIBROSIS

Epidemiologic studies have discovered that long-term betel quid chewing increases the risk of oral submucous fibrosis (OSF).5,15 A chronic, progressive condition of the oral cavity,16 OSF is characterized by fibrosis of the mucosal lining of the upper digestive tract, oral cavity, and lamina propria. OSF affects 2.5 million people.15 The exact mechanism and etiology are unclear; however, when buccal mucosal cells are exposed to the active ingredient in betel nut, there is an increased accumulation of collagen, along with a reduction of collagenase, which then results in an inability to break down the excess collagen.16 Histological findings of OSF show an increase in fibroblasts and a cross-linking of fibers.16 The retromolar areas, buccal mucosa, soft palate, tongue, and lips are frequently affected.15 The tongue stiffens and exhibits limited protrusion and papillary atrophy.15 Patients with OSF often present with a past history of pain and sensitivity to spicy foods.15 Other signs and symptoms include an increase in saliva, altered taste, vesicle formation on the palate, and a nasalized voice.15 A marble-like appearance of the oral mucosa, which can be either localized or diffuse, is also an early sign of OSF.15 This chronic disease results in a limited opening of the oral cavity, which is associated with stiffening of the oral mucosa by fibrous bands.5,15 Patients with OSF often experience difficulty in eating, swallowing, and speaking.3,15 Mucosal leukoplakia may accompany OSF, and is regarded as a precancerous condition with a 7.6% malignancy transformation rate.3,17 The calcium hydroxide component of the betel quid and arecoline, the active ingredient in betel nut, may be related to the development of OSF.2

The tobacco content in betel quid places the user at risk for oral cancer. Oral cancer is the sixth most common malignancy in the world.18 More common than cancers of the liver, brain, and bone, oral cancer kills approximately one person per hour.18 Betel quid use coupled with tobacco is listed as a carcinogen by the International Agency for Research on Cancer.2 Squamous cell carcinomas associated with betel quid chewing occur along surfaces where the quid is held, specifically the lateral borders of the tongue and the buccal mucosa.3 Oral squamous cell carcinomas may occur independently or in association with OSF.3 Studies show that the incidence of oral cancer is increased in regions where betel quid is chewed.19 India has the largest betel quid chewing population in the world, as well as an increased prevalence of OSF and oral squamous cell carcinomas.3,4 In Southeast Asia, 30% of oral cancers are caused by the habitual chewing of betel quid containing tobacco.2 Reports show that betel nut and tobacco act synergistically to produce oral cancer.2 The prognosis of oral cancer depends on how early it is identified; therefore, early detection and treatment are key.
Patients who present with deep-red stains on the teeth, gingiva, and oral mucosa should be educated about eliminating betel quid chewing and increasing oral self-care to prevent recurrence of stain.20 Effective toothbrushing twice a day, in addition to regular dental visits, are recommended to reduce the extrinsic stain accumulation caused by betel quid chewing.21,22 Dental hygienists should carefully examine the teeth to document the position and distribution of the stain, roughened enamel surfaces, enamel defects, and attrition, as well as plaque and calculus accumulation.22 Areas of severe attrition and abfraction should be evaluated for fractures and necessary treatment.12 Extrinsic staining caused by this habit should be treated with a thorough dental prophylaxis after scaling is complete.20 Both power scalers and hand scalers can remove betel nut staining.21 Due to the potential of pitting and damage to the enamel surface, stain removal should be completed systematically and carefully.20 Areas where cementum and dentin are exposed should not be treated with the rubber cup polisher because of the fragility of the enamel surface in betel quid chewers.20 For patients with OSF, there are few treatment options. Medical/surgical interventions and/or physical therapy may be helpful.9 Surgery focuses on relieving the stiffening of the oral cavity by incising the fibrosis bands, while physical therapy focuses on alleviating muscle tension.16 Nonsurgical methods of managing OSF include the use of corticosteroids to relieve early symptoms, proteolytic enzymes to decrease collagen formation, and vitamin therapy.16 Due to the increased risk of oral cancer, as well as the risk of OSF becoming malignant, betel quid users must receive routine oral cancer screenings and extensive documentation of soft tissue lesions. Teaching patients how to perform a monthly oral cancer self-examination is important for early detection and favorable outcomes.23 Table 2 provides details on performing an oral cancer self-examination. Cessation of betel quid chewing should be strongly encouraged, and patients should be assisted with quitting.2,3 No clear directive has been given on a cessation program for betel quid chewing; however, because of its addictive properties and the presence of tobacco, traditional tobacco cessation models are appropriate for this patient population. The use of nicotine gum can address nicotine dependency, as well as replace the associated masticatory habit.8 Use of the 5As (Ask, Advise, Assess, Assist, Arrange) is a common strategy in tobacco cessation—though this model of behavioral change is aimed at individuals who are ready to quit and it may not be helpful for those who are in the precontemplation or contemplation stage of changing the addictive behavior.24 The Transtheoretical Model for Readiness to Change may be more appropriate for use in eliciting behavior changes among betel quid users (Table 3). 
This model is a stage-based theory of how behavior change occurs.24 It encompasses five stages:
- Precontemplation: the patient does not believe that his or her habit is a problem or refuses to consider cessation.
- Contemplation: the patient recognizes that the habit is a problem and expresses a desire to quit the addictive behavior.
- Preparation: the patient makes specific plans for cessation of the addictive habit.
- Action: the patient stops the addictive habit.
- Maintenance: the patient continues to employ methods and strategies to stay free of the addictive habit.25

In the US, education, prevention, and treatment efforts focus on conventional tobacco use. As such, expanded research addressing interventions for betel quid chewing cessation is needed.7 Dental hygienists' active involvement in educating the community on the dangers of betel quid chewing, and encouraging policy changes that regulate the use and sale of purified preserved betel nut preparations to minors, is extremely important in decreasing the potential explosion of oral cancer in this growing population.7 US census reports from 2010 estimate that 40 million US residents, or 13% of the total population, are foreign-born.26 It is therefore quite likely that dental professionals will treat a betel quid chewer. Once users are identified, their oral condition should be monitored, and education and assistance in quitting betel quid use should be provided.

The author would like to thank dental hygiene student Syed Hossain at Eugenio María de Hostos Community College for inspiring her to write this article. Hossain also provided the accompanying photos.

1. Lingappa A, Nappalli D, Sujatha G, Prasad S. Areca nut: To chew or not to chew? Available at: ejournalofdentistry.com/articles/e-JOD3 BC4F9E2-1D5E-4659-A0C3-DAD2ABA83528.pdf. Accessed January 21, 2014.
2. Nelson BS, Heischober B. Betel nut: a common drug used by naturalized citizens from India, Far East Asia, and the South Pacific islands. Ann Emerg Med. 1999;34:238–243.
3. Norton SA. Betel: consumption and consequences. J Am Acad Dermatol. 1998;38:81–88.
4. Gupta PC, Warnakulasuriya S. Global epidemiology of areca nut usage. Addict Biol. 2002;7:77–83.
5. Trivedy CR, Craig G, Warnakulasuriya S. The oral health consequences of chewing areca nut. Addict Biol. 2002;7:115–125.
6. Wang S, Tsai C, Huang S, Hong Y. Betel nut chewing and related factors in adolescent students in Taiwan. Public Health. 2003;117:339–345.
7. Changrani J, Gany FM. Paan and gutka in the United States: an emerging threat. Journal of Immigrant Health. 2005;7(2):103–108.
8. Winstock A. Areca nut-abuse liability, dependence and public health. Addict Biol. 2002;7:133–138.
9. Kerr A, Warnakulasuriya S, Mighell A, et al. A systematic review of medical interventions for oral submucous fibrosis and future research opportunities. Oral Dis. 2011;17:4–57.
10. Benegal V, Rajkumar RP, Muralidharan K. Does areca nut use lead to dependence? Drug Alcohol Depend. 2008;97:114–121.
11. Zain RB, Ikeda N, Gupta PC, et al. Oral mucosal lesions associated with betel quid, areca nut and tobacco chewing habits: consensus from a workshop held in Kuala Lumpur, Malaysia, November 25-27, 1996. J Oral Pathol Med. 1999;28:1–4.
12. Kim JJ, Karastathis D. Dentinal hypersensitivity management. In: Darby ML, Walsh MM, eds. Dental Hygiene Theory and Practice. 3rd ed. St. Louis: Saunders; 2010:726.
13. Möller IJ, Pindborg JJ, Effendi I. The relation between betel chewing and dental caries. Eur J Oral Sci. 1977;85:64–70.
14. Chang MC, Kuo MY, Hahn LJ, Hsieh CC, Lin SK, Jeng JH.
Areca nut extract inhibits the growth, attachment, and matrix protein synthesis of cultured human gingival fibroblasts. J Periodontol. 1998;69:1092–1097.
15. Cox SC, Walker DM. Oral submucous fibrosis. A review. Aust Dent J. 1996;41:294–299.
16. Anusushanth A, Sitra G, Dineshshankar J, Sindhuja P, Maheswaran T, Yoithapprabhunath T. Pathogenesis and therapeutic intervention of oral submucous fibrosis. J Pharm Bioall Sci. 2013;5:85–88.
17. Sinor PN, Gupta PC, Murti PR, et al. A case-control study of oral submucous fibrosis with special reference to the etiologic role of areca nut. J Oral Pathol Med. 1990;19:94–98.
18. Gurenlian JR. Screening for oral cancer. Available at: cdeworld.com/courses/20000-Screening_for_Oral_Cancer#. Accessed January 21, 2014.
19. Petersen PE. The World Oral Health Report 2003: continuous improvement of oral health in the 21st century—the approach of the WHO Global Oral Health Programme. Community Dent Oral Epidemiol. 2003;31:3–23.
20. Dolan JJ. Management of extrinsic and intrinsic stains. In: Darby LM, Walsh MM, eds. Dental Hygiene Theory and Practice. 3rd ed. St. Louis: Saunders; 2010:511.
21. Prathap S, Rajesh H, Boloor VA, Rao AS. Extrinsic stains and management: a new insight. Journal of Academia and Industrial Research. 2013;1(8):435.
22. Hattab FN, Qudeimat MA, al-Rimawi HS. Dental discoloration: an overview. J Esthet Dent. 1999;11:291–310.
23. Oral and Maxillofacial Surgeons. Head, Neck and Oral Cancer. Available at: myoms.org/procedures/head-neck-and-oral-cancer. Accessed January 21, 2014.
24. Aveyard P, Massey L, Parsons A, Manaseki S, Griffin C. The effect of transtheoretical model based interventions on smoking cessation. Soc Sci Med. 2009;68:397–403.
25. Mallin R. Smoking cessation: integration of behavioral and drug therapies. Am Fam Physician. 2002;65:1107–1122.
26. Grieco EM, Acosta YD, de la Cruz P, et al. The Foreign-Born Population in the United States: 2010. Available at: census.gov/prod/2012pubs/acs-19.pdf. Accessed January 21, 2014.

From Dimensions of Dental Hygiene. February 2014;12(2):65–69.